
Commit 69baf71

Fix Redis cluster topology change handling in source connector (#25)
1 parent dfc486e commit 69baf71

File tree

16 files changed (+276 −47)

CHANGELOG.md

Lines changed: 8 additions & 0 deletions
@@ -4,6 +4,14 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [1.2.2] - 2021-07-22
+### Changed
+- Use capitalization in log messages
+
+### Fixed
+- Fixed an issue with logging the number of records the source connector produced
+- Fixed how the source connector handles Redis cluster topology changes in order to better support keyspace notifications
+
 ## [1.2.1] - 2021-07-17
 ### Changed
 - Modified Confluent archive to follow new standards

docs/connectors/SOURCE.md

Lines changed: 2 additions & 0 deletions
@@ -1,6 +1,8 @@
 # Kafka Connect Redis - Source
 Subscribes to Redis channels/patterns (including [keyspace notifications](https://redis.io/topics/notifications)) and writes the received messages to Kafka.
 
+**WARNING** Delivery of keyspace notifications is not reliable in a Redis cluster. Keyspace notifications are node-local, so adding a new upstream node to your Redis cluster involves a short window during which events on that node are not picked up, lasting until the connector discovers the node and issues a `SUBSCRIBE` command to it. This is a limitation of keyspace notifications that the Redis project would like to overcome in the future.
+
 ## Record Schema
 
 ### Key
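For context on the warning above: keyspace notifications are off by default and must be enabled on every node before the source connector can receive anything. A minimal sketch, assuming a reachable node at the placeholder host `redis-node`:

```shell
# Enable keyspace (K) and keyevent (E) notifications for all event classes (A).
# CONFIG SET applies to a single node only and is not persisted across restarts;
# set notify-keyspace-events in redis.conf to make it permanent.
redis-cli -h redis-node CONFIG SET notify-keyspace-events KEA

# Confirm the setting took effect on this node
redis-cli -h redis-node CONFIG GET notify-keyspace-events
```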

docs/demo/README.md

Lines changed: 56 additions & 1 deletion
@@ -27,7 +27,7 @@ docker build -t jaredpetersen/redis:latest .
 
 Next, we'll need to build a docker image for Kafka Connect Redis. Navigate to `demo/docker/kafka-connect-redis` and run the following commands:
 ```bash
-curl -O https://repo1.maven.org/maven2/io/github/jaredpetersen/kafka-connect-redis/1.2.1/kafka-connect-redis-1.2.1.jar
+curl -O https://repo1.maven.org/maven2/io/github/jaredpetersen/kafka-connect-redis/1.2.2/kafka-connect-redis-1.2.2.jar
 docker build -t jaredpetersen/kafka-connect-redis:latest .
 ```
 
@@ -48,11 +48,66 @@ kubectl -n kcr-demo get pods
 
 Be patient, this can take a few minutes.
 
+### Redis Configuration
 Run the following command to configure Redis to run in cluster mode instead of standalone mode:
 ```bash
 kubectl -n kcr-demo run -it --rm redis-client --image redis:6 -- redis-cli --pass IEPfIr0eLF7UsfwrIlzy80yUaBG258j9 --cluster create $(kubectl -n kcr-demo get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}') --cluster-yes
 ```
 
+#### Add New Cluster Node (Optional)
+You may find it useful to add a node to the Redis cluster later to simulate how the connector keeps up with topology changes.
+
+To accomplish this, update `kubernetes/redis/statefulset.yaml` to specify the new desired replica count and apply it with:
+```bash
+kubectl apply -k kubernetes
+```
+
+Next, you need to add the new node to the cluster configuration.
+
+Find the IP address of the new node:
+```bash
+kubectl -n kcr-demo get pod redis-cluster-### -o jsonpath='{.status.podIP}'
+```
+
+Find the IP address of one of the nodes already in the cluster:
+```bash
+kubectl -n kcr-demo get pods -l app=redis-cluster -o jsonpath='{.items[0].status.podIP}'
+```
+
+Create a Redis client pod so that we can update the cluster configuration:
+```bash
+kubectl -n kcr-demo run -it --rm redis-client --image redis:6 -- /bin/bash
+```
+
+Save those two IP addresses -- and the Redis cluster password while we're at it -- as environment variables:
+```bash
+NEW_NODE=newnodeipaddress:6379
+EXISTING_NODE=existingnodeipaddress:6379
+PASSWORD=IEPfIr0eLF7UsfwrIlzy80yUaBG258j9
+```
+
+Add the new node to the cluster using the IP address information you collected earlier:
+```bash
+redis-cli --pass $PASSWORD --cluster add-node $NEW_NODE $EXISTING_NODE
+```
+
+Connect to the cluster and confirm that there is now an additional entry in the cluster listing:
+```bash
+redis-cli -c -a $PASSWORD -u "redis://redis-cluster"
+redis-cluster:6379> CLUSTER NODES
+```
+
+The new upstream node doesn't have any hash slots assigned to it yet, and without slots it can't store any data. Fix that by rebalancing the cluster:
+```bash
+redis-cli --pass $PASSWORD --cluster rebalance $EXISTING_NODE --cluster-use-empty-masters
+```
+
+Then confirm that the new node has been assigned slots:
+```bash
+redis-cli -c -a $PASSWORD -u "redis://redis-cluster"
+redis-cluster:6379> CLUSTER NODES
+```
+
 ## Usage
 [Source Connector](SOURCE.md)
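As an extra verification step beyond what the demo adds (a sketch; `$PASSWORD` and `$EXISTING_NODE` are the variables set during the add-node walkthrough), redis-cli can audit slot coverage after the rebalance:

```shell
# Report slot coverage and per-node slot counts; flags the cluster as unhealthy
# if any of the 16384 hash slots are left uncovered.
redis-cli --pass "$PASSWORD" --cluster check "$EXISTING_NODE"
```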

docs/demo/SINK.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ First, expose the Kafka Connect server:
 kubectl -n kcr-demo port-forward service/kafka-connect :rest
 ```
 
-Kubectl will choose an available port for you that you will need to use for the cURLs (`$PORT`).
+Kubectl will choose an available port for you that you will need to use for the cURLs. Assign it to `$PORT`.
 
 ### Avro
 ```bash
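The `$PORT` assignment can be scripted rather than copied by hand. A sketch, assuming the log file name and the fixed wait are acceptable (kubectl prints a line like `Forwarding from 127.0.0.1:54321 -> 8083`):

```shell
# Start the port-forward in the background and capture its output
kubectl -n kcr-demo port-forward service/kafka-connect :rest > pf.log 2>&1 &
sleep 2

# Extract the local port from kubectl's "Forwarding from" message
PORT=$(sed -nE 's/^Forwarding from 127\.0\.0\.1:([0-9]+).*/\1/p' pf.log | head -n 1)
echo "PORT=$PORT"
```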

docs/demo/SOURCE.md

Lines changed: 29 additions & 31 deletions
@@ -7,7 +7,7 @@ First, expose the Kafka Connect server:
 kubectl -n kcr-demo port-forward service/kafka-connect :rest
 ```
 
-Kubectl will choose an available port for you that you will need to use for the cURLs (`$PORT`).
+Kubectl will choose an available port for you that you will need to use for the cURLs. Assign it to `$PORT`.
 
 ### Avro
 ```bash
@@ -53,32 +53,7 @@ curl --request POST \
 }'
 ```
 
-## Create Redis Events
-Create Redis client pod:
-```bash
-kubectl -n kcr-demo run -it --rm redis-client --image redis:6 -- /bin/bash
-```
-
-Use redis-cli to connect to the cluster:
-```bash
-redis-cli -c -u 'redis://IEPfIr0eLF7UsfwrIlzy80yUaBG258j9@redis-cluster'
-```
-
-Run commands to create some different events:
-```bash
-SET {user.1}.username jetpackmelon22 EX 2
-SET {user.2}.username anchorgoat74 EX 2
-SADD {user.1}.interests reading
-EXPIRE {user.1}.interests 2
-SADD {user.2}.interests sailing woodworking programming
-EXPIRE {user.2}.interests 2
-GET {user.1}.username
-GET {user.2}.username
-SMEMBERS {user.1}.interests
-SMEMBERS {user.2}.interests
-```
-
-## Validate
+## Set up Kafka Topic Listener
 ### Avro
 Create an interactive ephemeral query pod:
 ```bash
@@ -92,8 +67,7 @@ kafka-avro-console-consumer \
 --property schema.registry.url='http://kafka-schema-registry:8081' \
 --property print.key=true \
 --property key.separator='|' \
---topic redis.events \
---from-beginning
+--topic redis.events
 ```
 
 ### Connect JSON
@@ -108,6 +82,30 @@ kafka-console-consumer \
 --bootstrap-server kafka-broker-0.kafka-broker:9092 \
 --property print.key=true \
 --property key.separator='|' \
---topic redis.events \
---from-beginning
+--topic redis.events
+```
+
+## Create Redis Events
+Create a Redis client pod:
+```bash
+kubectl -n kcr-demo run -it --rm redis-client --image redis:6 -- /bin/bash
+```
+
+Use redis-cli to connect to the cluster:
+```bash
+redis-cli -c -u 'redis://IEPfIr0eLF7UsfwrIlzy80yUaBG258j9@redis-cluster'
+```
+
+Run commands to create some different events:
+```bash
+SET {user.1}.username jetpackmelon22 EX 2
+SET {user.2}.username anchorgoat74 EX 2
+SADD {user.1}.interests reading
+EXPIRE {user.1}.interests 2
+SADD {user.2}.interests sailing woodworking programming
+EXPIRE {user.2}.interests 2
+GET {user.1}.username
+GET {user.2}.username
+SMEMBERS {user.1}.interests
+SMEMBERS {user.2}.interests
 ```
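If the events above never reach the Kafka consumer, a sketch for confirming that Redis is emitting keyspace notifications at all, subscribed directly from the client pod (the pattern matches both keyspace and keyevent notifications on database 0):

```shell
# Keyspace notifications are node-local in a cluster, so a single connection
# only shows events originating on the node it happens to be connected to.
redis-cli -c -u 'redis://IEPfIr0eLF7UsfwrIlzy80yUaBG258j9@redis-cluster' PSUBSCRIBE '__key*@0__:*'
```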

pom.xml

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 
   <groupId>io.github.jaredpetersen</groupId>
   <artifactId>kafka-connect-redis</artifactId>
-  <version>1.2.1</version>
+  <version>1.2.2</version>
   <packaging>jar</packaging>
 
   <name>Kafka Redis Connector (Sink and Source)</name>

src/main/java/io/github/jaredpetersen/kafkaconnectredis/sink/RedisSinkTask.java

Lines changed: 2 additions & 2 deletions
@@ -83,8 +83,8 @@ public void put(final Collection<SinkRecord> records) {
       return;
     }
 
-    LOG.info("writing {} record(s) to redis", records.size());
-    LOG.debug("records: {}", records);
+    LOG.info("Writing {} record(s) to redis", records.size());
+    LOG.debug("Records: {}", records);
 
     for (SinkRecord record : records) {
       put(record);

src/main/java/io/github/jaredpetersen/kafkaconnectredis/sink/writer/RecordConverter.java

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ public class RecordConverter {
    * @return Redis command.
    */
   public RedisCommand convert(SinkRecord sinkRecord) {
-    LOG.debug("converting record {}", sinkRecord);
+    LOG.debug("Converting record {}", sinkRecord);
 
     final Struct recordValue = (Struct) sinkRecord.value();
     final String recordValueSchemaName = recordValue.schema().name();

src/main/java/io/github/jaredpetersen/kafkaconnectredis/sink/writer/Writer.java

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ public Writer(RedisClusterCommands<String, String> redisClusterCommands) {
    * @param redisCommand Command to apply
    */
   public void write(RedisCommand redisCommand) {
-    LOG.debug("writing {}", redisCommand);
+    LOG.debug("Writing {}", redisCommand);
 
     switch (redisCommand.getCommand()) {
       case SET:

src/main/java/io/github/jaredpetersen/kafkaconnectredis/source/RedisSourceTask.java

Lines changed: 2 additions & 2 deletions
@@ -123,8 +123,8 @@ public List<SourceRecord> poll() {
       }
     }
 
-    if (sourceRecords.size() > 1) {
-      LOG.info("writing {} record(s) to kafka", sourceRecords.size());
+    if (sourceRecords.size() >= 1) {
+      LOG.info("Writing {} record(s) to kafka", sourceRecords.size());
     }
 
     return sourceRecords;

src/main/java/io/github/jaredpetersen/kafkaconnectredis/source/listener/subscriber/RedisClusterChannelSubscriber.java

Lines changed: 37 additions & 1 deletion
@@ -1,13 +1,16 @@
 package io.github.jaredpetersen.kafkaconnectredis.source.listener.subscriber;
 
+import io.lettuce.core.cluster.event.ClusterTopologyChangedEvent;
 import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection;
 import java.util.List;
 import java.util.concurrent.ConcurrentLinkedQueue;
+import lombok.extern.slf4j.Slf4j;
 
 /**
  * Redis cluster-aware pub/sub subscriber that listens to channels and caches the retrieved messages for later
  * retrieval.
  */
+@Slf4j
 public class RedisClusterChannelSubscriber extends RedisSubscriber {
   /**
    * Create a cluster-aware subscriber that listens to channels.
@@ -21,6 +24,39 @@ public RedisClusterChannelSubscriber(
   ) {
     super(new ConcurrentLinkedQueue<>());
     redisClusterPubSubConnection.addListener(new RedisClusterListener(this.messageQueue));
-    redisClusterPubSubConnection.sync().upstream().commands().subscribe(channels.toArray(new String[0]));
+    subscribeChannels(redisClusterPubSubConnection, channels);
+  }
+
+  /**
+   * Subscribe to the provided channels. Re-issue subscriptions asynchronously when the cluster topology changes.
+   *
+   * @param redisClusterPubSubConnection Cluster pub/sub connection used to facilitate the subscription
+   * @param channels Channels to subscribe and listen to
+   */
+  private void subscribeChannels(
+      StatefulRedisClusterPubSubConnection<String, String> redisClusterPubSubConnection,
+      List<String> channels
+  ) {
+    final String[] channelArray = channels.toArray(new String[0]);
+
+    // Perform an initial subscription
+    redisClusterPubSubConnection.sync()
+      .upstream()
+      .commands()
+      .subscribe(channelArray);
+
+    // Set up a listener on the Lettuce event bus so that we can issue subscriptions to new nodes
+    redisClusterPubSubConnection.getResources().eventBus().get()
+      .filter(event -> event instanceof ClusterTopologyChangedEvent)
+      .doOnNext(event -> {
+        // Lettuce does its best to determine when the topology changed, but there's always a possibility of
+        // a redundant detection; duplicate SUBSCRIBE commands are harmless, so re-subscribe on every event
+        LOG.info("Redis cluster topology changed, issuing new subscriptions");
+
+        redisClusterPubSubConnection.sync()
+          .upstream()
+          .commands()
+          .subscribe(channelArray);
+      })
+      .subscribe();
   }
 }

src/main/java/io/github/jaredpetersen/kafkaconnectredis/source/listener/subscriber/RedisClusterPatternSubscriber.java

Lines changed: 37 additions & 1 deletion
@@ -1,13 +1,16 @@
 package io.github.jaredpetersen.kafkaconnectredis.source.listener.subscriber;
 
+import io.lettuce.core.cluster.event.ClusterTopologyChangedEvent;
 import io.lettuce.core.cluster.pubsub.StatefulRedisClusterPubSubConnection;
 import java.util.List;
 import java.util.concurrent.ConcurrentLinkedQueue;
+import lombok.extern.slf4j.Slf4j;
 
 /**
  * Redis cluster-aware pub/sub subscriber that listens to patterns and caches the retrieved messages for later
  * retrieval.
  */
+@Slf4j
 public class RedisClusterPatternSubscriber extends RedisSubscriber {
   /**
    * Create a cluster-aware subscriber that listens to patterns.
@@ -21,6 +24,39 @@ public RedisClusterPatternSubscriber(
   ) {
     super(new ConcurrentLinkedQueue<>());
     redisClusterPubSubConnection.addListener(new RedisClusterListener(this.messageQueue));
-    redisClusterPubSubConnection.sync().upstream().commands().psubscribe(patterns.toArray(new String[0]));
+    subscribePatterns(redisClusterPubSubConnection, patterns);
+  }
+
+  /**
+   * Subscribe to the provided patterns. Re-issue subscriptions asynchronously when the cluster topology changes.
+   *
+   * @param redisClusterPubSubConnection Cluster pub/sub connection used to facilitate the subscription
+   * @param patterns Patterns to subscribe and listen to
+   */
+  private void subscribePatterns(
+      StatefulRedisClusterPubSubConnection<String, String> redisClusterPubSubConnection,
+      List<String> patterns
+  ) {
+    final String[] patternArray = patterns.toArray(new String[0]);
+
+    // Perform an initial subscription
+    redisClusterPubSubConnection.sync()
+      .upstream()
+      .commands()
+      .psubscribe(patternArray);
+
+    // Set up a listener on the Lettuce event bus so that we can issue subscriptions to new nodes
+    redisClusterPubSubConnection.getResources().eventBus().get()
+      .filter(event -> event instanceof ClusterTopologyChangedEvent)
+      .doOnNext(event -> {
+        // Lettuce does its best to determine when the topology changed, but there's always a possibility of
+        // a redundant detection; duplicate PSUBSCRIBE commands are harmless, so re-subscribe on every event
+        LOG.info("Redis cluster topology changed, issuing new subscriptions");
+
+        redisClusterPubSubConnection.sync()
+          .upstream()
+          .commands()
+          .psubscribe(patternArray);
+      })
+      .subscribe();
   }
 }

src/main/java/io/github/jaredpetersen/kafkaconnectredis/source/listener/subscriber/RedisListener.java

Lines changed: 4 additions & 4 deletions
@@ -32,18 +32,18 @@ public void message(String pattern, String channel, String message) {
   }
 
   public void subscribed(String channel) {
-    LOG.info("subscribed to channel {}", channel);
+    LOG.info("Subscribed to channel {}", channel);
   }
 
   public void psubscribed(String pattern) {
-    LOG.info("psubscribed to pattern {}", pattern);
+    LOG.info("Subscribed to pattern {}", pattern);
   }
 
   public void unsubscribed(String channel) {
-    LOG.info("unsubscribed from channel {}", channel);
+    LOG.info("Unsubscribed from channel {}", channel);
   }
 
   public void punsubscribed(String pattern) {
-    LOG.info("unsubscribed from pattern {}", pattern);
+    LOG.info("Unsubscribed from pattern {}", pattern);
   }
 }

src/main/java/io/github/jaredpetersen/kafkaconnectredis/util/VersionUtil.java

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ public class VersionUtil {
       PROPERTIES.load(VersionUtil.class.getClassLoader().getResourceAsStream("kafka-connect-redis.properties"));
     }
     catch (IOException exception) {
-      LOG.error("failed to load properties", exception);
+      LOG.error("Failed to load properties", exception);
     }
   }
 