For more information, see the [detailed documentation](/docs/connectors/SOURCE.md).
### Sink
Kafka Connect Redis Sink consumes Kafka records in a Redis command format (`SET`, `GEOADD`, etc.) and applies them to Redis.
For more information, see the [detailed documentation](/docs/connectors/SINK.md).
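For reference, a sink connector like this is typically registered with the Kafka Connect REST API using a JSON payload along the following lines. This is only an illustrative sketch: the connector class name, topic name, and `redis.uri` property shown here are assumptions, so consult the detailed documentation above for the actual configuration properties.

```json
{
  "name": "redis-sink-connector",
  "config": {
    "connector.class": "io.github.jaredpetersen.kafkaconnectredis.sink.RedisSinkConnector",
    "tasks.max": "1",
    "topics": "redis.commands",
    "redis.uri": "redis://redis-cluster:6379"
  }
}
```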
Records are partitioned using the [`DefaultPartitioner`](https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java) class. This means that the record key is used to determine which partition the record is assigned to.
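The key-based assignment can be modeled with a short sketch. Note this is a simplified illustration only: Kafka's `DefaultPartitioner` hashes the serialized key with murmur2, while a generic deterministic hash is used here.

```python
# Simplified model of key-based partitioning: the same record key always
# maps to the same partition. Kafka's DefaultPartitioner uses a murmur2
# hash of the serialized key; md5 is used here purely for illustration.
import hashlib

def assign_partition(key: bytes, num_partitions: int) -> int:
    # Hash the record key and map the hash onto the available partitions.
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

Because the mapping is deterministic, all records sharing a key land on the same partition and therefore preserve their relative order.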
When subscribing to Redis keyspace notifications, it may be useful to avoid partitioning the data so that multiple event types arrive in order as a single event stream. This can be accomplished by configuring the connector to publish to a Kafka topic that contains only a single partition, forcing the `DefaultPartitioner` to use that partition for every record.
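Assuming the standard Kafka CLI tooling is available, a single-partition topic can be created along these lines (the topic name and bootstrap address are placeholders):

```bash
# Create a topic with exactly one partition so all keyspace notification
# events arrive as a single ordered stream.
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic redis.events \
  --partitions 1 \
  --replication-factor 1
```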
The plugin can be configured to use an alternative partitioning strategy if desired. Set the configuration property `connector.client.config.override.policy` to `All` on the Kafka Connect worker (the overall Kafka Connect application that runs plugins); this allows the internal Kafka producer and consumer configurations to be overridden. To override the partitioner for an individual connector plugin, add the configuration property `producer.override.partitioner.class` to that connector with a value that points to a class implementing the [`Partitioner`](https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java) interface, e.g. `org.apache.kafka.clients.producer.internals.DefaultPartitioner`.
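As a sketch, the two settings described above might look like the following; the custom class name is a hypothetical placeholder, not a class shipped with this plugin:

```properties
# On the Kafka Connect worker (e.g. in connect-distributed.properties):
# allow connectors to override internal producer/consumer settings.
connector.client.config.override.policy=All

# On the individual connector's configuration: route records with a
# custom Partitioner implementation (hypothetical class name).
producer.override.partitioner.class=com.example.CustomPartitioner
```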
## Parallelization
Splitting the workload between multiple tasks via the configuration property `tasks.max` is not supported at this time. Support for this will be added in the future.
## Configuration
### Connector Properties
| Name | Type | Default | Importance | Description |
| ---- | ---- | ------- | ---------- | ----------- |
Navigate to `demo/docker/` in this repository and run the following commands **in a separate terminal** to download the plugin and build the image for minikube:
Send a request to the Kafka Connect REST API to configure it to use Kafka Connect Redis:
### Avro
**IMPORTANT:** The Avro demo utilizes multiple topics in order to work around [a bug in the Avro console producer](https://github.com/confluentinc/schema-registry/issues/898). A fix has been merged but Confluent has not published a new Docker image for it yet (6.1.0+). Kafka Connect Redis works with Avro on a single topic; this is just a problem with the console producer provided by Confluent.
```bash
curl --request POST \
--url "$(minikube -n kcr-demo service kafka-connect --url)/connectors" \
```