From 241380985f873a56e91efd761fcea63b96043ffd Mon Sep 17 00:00:00 2001
From: Mark Hulbert <39801222+m-hulbert@users.noreply.github.com>
Date: Wed, 8 Oct 2025 10:07:16 +0200
Subject: [PATCH] Clarify batched webhook rate and behavior

---
 src/pages/docs/platform/integrations/webhooks/index.mdx | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/src/pages/docs/platform/integrations/webhooks/index.mdx b/src/pages/docs/platform/integrations/webhooks/index.mdx
index d0b7247b48..b867691b16 100644
--- a/src/pages/docs/platform/integrations/webhooks/index.mdx
+++ b/src/pages/docs/platform/integrations/webhooks/index.mdx
@@ -73,10 +73,11 @@ Ably will retry failed `5XX` requests. If the response times out, Ably will retr
 
 ## Batched requests
 
-Batched requests are useful for endpoints that have the potential to be overloaded by requests, or have no requirement to process messages one-by-one.
+Batched requests enable Ably to send more than one message at a time, which limits the rate of invocations of the webhook. They are useful for endpoints that have the potential to be overloaded by requests, or have no requirement to process messages one-by-one.
 
-Batched requests are published at most once per second, but this may vary by integration. Once a batched request is triggered, all other events will be queued so that they can be delivered in a batch in the next request. The next request will be issued within one second with the following caveats:
+Once a batched request is triggered, all other events will be queued so that they can be delivered in a batch in the next request. The next request will be issued with the following caveats:
 
+* The actual rate of invocations depends on a few different factors, but essentially it is at most one per second from each of the separate instances in the Ably infrastructure that is processing the channels associated with that webhook.
 * Only a limited number of HTTP requests are in-flight at one time for each configured integration. Therefore, if you want to be notified quickly, you should accept requests quickly and defer any work to be done asynchronously.
 * If there are more than 1,000 events queued for a payload, the oldest 1,000 events will be bundled into this payload and the remaining events will be delivered in the subsequent payload. Therefore, if your sustained rate of events is expected to be more than 1,000 per second or your servers are slow to respond, then it is possible a backlog will build up and you will not receive all events.
 
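The revised docs advise accepting batched requests quickly and deferring the actual work asynchronously. As a rough, unofficial sketch of what that looks like on the receiving side, assuming a Node endpoint built with Express and assuming the batched payload exposes its events under an `items` array (a placeholder field name, not specified by this patch):

```typescript
// Illustrative only: an endpoint that acknowledges batched webhook requests
// immediately and defers the real work, so the limited number of in-flight
// requests per integration is freed up as quickly as possible.
import express, { Request, Response } from "express";

const app = express();
app.use(express.json({ limit: "1mb" })); // a batch can bundle up to 1,000 events

// Placeholder for your own async processing, e.g. pushing to a queue or worker.
async function processBatch(events: unknown[]): Promise<void> {
  for (const event of events) {
    // ... handle each event here ...
  }
}

app.post("/ably-webhook", (req: Request, res: Response) => {
  // `items` is an assumed field name for the array of batched events;
  // confirm the payload shape for your configured integration.
  const events: unknown[] = Array.isArray(req.body?.items) ? req.body.items : [req.body];

  // Acknowledge straight away; a slow response here would hold an in-flight
  // slot and delay delivery of the next batch.
  res.sendStatus(200);

  // Do the actual work asynchronously; handle failures yourself, since the
  // sender has already received a 2xx and will not retry this batch.
  processBatch(events).catch((err) => console.error("Batch processing failed", err));
});

app.listen(3000);
```

The trade-off in this pattern is that responding before processing means delivery failures after the 2xx must be handled on your side, but it keeps the endpoint fast enough that queued events do not back up behind a slow handler.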