Proposed change
Currently, JetStream supports limiting the delivery rate of messages using the RateLimit consumer configuration, which is defined in bits per second (bps). While this is effective for managing bandwidth, it does not allow direct control over the number of messages delivered per second. It would be useful to be able to configure a consumer to limit the delivery rate in terms of messages per second via a new consumer configuration option, such as MessageRateLimit.
This feature should also apply to pull consumers, ensuring that they do not receive more than the specified number of messages per second. The server could enforce this rate limit by controlling the delivery of messages to match the defined threshold, regardless of how frequently the client issues pull requests. This would help maintain consistent behavior across both push and pull consumers while simplifying client-side rate control logic.
Example (Java):

```java
ConsumerConfiguration cc = ConsumerConfiguration.builder()
    .durable("my-consumer")
    .filterSubject("my-subject")
    .messageRateLimit(5) // proposed option: deliver up to 5 messages per second
    .build();
```
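Until such an option exists server-side, the effect can be approximated in the client. The sketch below is a minimal blocking rate limiter (a hypothetical helper, not part of any NATS API) that a message handler or pull loop could call before processing each message; it spaces successive acquisitions so that at most N permits are granted per second:

```java
import java.util.concurrent.TimeUnit;

// Minimal client-side message rate limiter (workaround sketch, not a NATS API).
// acquire() blocks as needed so it returns at most `permitsPerSecond` times per second.
class MessageRateLimiter {
    private final long intervalNanos;          // minimum spacing between permits
    private long nextFreeNanos;                // earliest time the next permit is available

    MessageRateLimiter(int permitsPerSecond) {
        this.intervalNanos = TimeUnit.SECONDS.toNanos(1) / permitsPerSecond;
        this.nextFreeNanos = System.nanoTime();
    }

    synchronized void acquire() throws InterruptedException {
        long now = System.nanoTime();
        if (nextFreeNanos > now) {
            // Too early: sleep until the next permit slot opens.
            TimeUnit.NANOSECONDS.sleep(nextFreeNanos - now);
        }
        // Schedule the slot after this one, never drifting into the past.
        nextFreeNanos = Math.max(nextFreeNanos, System.nanoTime()) + intervalNanos;
    }
}
```

A consumer would call `limiter.acquire()` at the top of its message handler (push) or before each processed message in its fetch loop (pull). This is exactly the kind of boilerplate a native MessageRateLimit option would remove.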
Use case
This would provide finer control for applications where message frequency matters more than data size. With such an option, developers can focus on defining the number of messages they need, e.g. in rate-limited processing pipelines, time-sensitive analytics systems, or scenarios where external dependencies impose strict request rate limits. It would also help in edge cases where message bursts appear, reducing the variability in message rates and protecting the client from being overloaded.
A similar feature is supported in some other messaging systems (e.g., RabbitMQ's consumer prefetch limits), but these often still require manual throttling or handling at the client level. Incorporating this feature natively in JetStream would simplify client implementations and improve usability.
Contribution
I am not familiar with the NATS codebase or Go, so my contribution would likely be limited to refining the idea and discussing potential use cases.