Summary
It would be extremely useful if InstantDB supported schema-level webhooks that are automatically invoked when data changes (create / update / delete).
The webhook would receive the changed record(s) and relevant metadata, allowing external systems (e.g. Cloudflare Workers, serverless jobs, background processors) to react to changes and optionally write results back into InstantDB.
This would enable event-driven workflows without requiring long-lived subscribers or polling.
⸻
Motivation
InstantDB already provides an excellent real-time client experience, but many backend workflows are inherently reactive and asynchronous:
• Triggering AI or media processing
• Kicking off background jobs
• Integrating with external services
• Performing server-side orchestration without keeping connections open
Today, these workflows typically require one of the following:
• Clients manually calling external APIs
• Long-lived backend subscribers
• Periodic polling or cron jobs
A first-class webhook mechanism would simplify these patterns significantly.
⸻
Proposed Capability
Allow webhooks to be declared at the schema or collection level, for example:
• On create
• On update
• On delete
• Optionally filtered by specific field changes or conditions
When a matching event occurs, InstantDB would:
1. Send an HTTP request to a configured endpoint
2. Include the changed data and relevant metadata
3. Retry safely on transient failure
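The retry behavior in step 3 could follow standard exponential backoff with a cap. A minimal TypeScript sketch (the base delay, cap, and attempt count are illustrative, not proposed defaults):

```typescript
// Illustrative retry schedule: exponential backoff, capped.
// With defaults, delays are 1s, 2s, 4s, 8s, ... up to maxMs.
function backoffDelays(attempts: number, baseMs = 1000, maxMs = 60000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(maxMs, baseMs * 2 ** i));
  }
  return delays;
}
```

In practice a small random jitter would usually be added to each delay to avoid retry stampedes, and delivery would stop after a bounded number of attempts.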
⸻
Example (Conceptual)
```
collection jobs {
  fields {
    id: string
    status: string
    input: json
    output: json?
  }

  webhooks {
    on update when status == "queued" {
      url: "https://my-worker.example/process"
      method: POST
      include: ["id", "status", "input"]
    }
  }
}
```
Example Payload
```json
{
  "event": "update",
  "collection": "jobs",
  "record_id": "job_123",
  "before": { "status": "created" },
  "after": { "status": "queued" },
  "changed_fields": ["status"],
  "record": {
    "id": "job_123",
    "status": "queued",
    "input": { "...": "..." }
  }
}
```
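A receiver could use a payload like the one above to filter deliveries before doing any work. A minimal TypeScript sketch, assuming the payload shape shown (the `WebhookPayload` type and `shouldProcess` helper are illustrative, not an existing API):

```typescript
// Hypothetical payload shape matching the example above.
interface WebhookPayload {
  event: "create" | "update" | "delete";
  collection: string;
  record_id: string;
  before?: Record<string, unknown>;
  after?: Record<string, unknown>;
  changed_fields: string[];
  record: Record<string, unknown>;
}

// Decide whether a delivery should trigger processing:
// only act when the `status` field just changed to "queued".
function shouldProcess(p: WebhookPayload): boolean {
  return (
    p.event === "update" &&
    p.changed_fields.includes("status") &&
    p.after?.status === "queued"
  );
}
```

With server-side `when` filters (as in the conceptual schema), this check would mostly be redundant, but a defensive re-check on the receiver keeps it safe against retries and misconfiguration.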
⸻
Example Use Cases
- AI / Media Processing Pipelines
• Client inserts or updates a record with status = "queued"
• InstantDB invokes a Cloudflare Worker or GPU-backed service
• Worker performs processing (LLM, transcription, video, etc.)
• Worker writes results back into InstantDB (status = "done", output = ...)
This allows InstantDB to act as the source of truth and orchestration hub.
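The worker side of this pipeline can stay small. A hedged TypeScript sketch of the processing and write-back steps, where `runModel` is a hypothetical stand-in for the real LLM/media work, and the actual write-back call is left as a comment since the server-side API shape is out of scope here:

```typescript
// Hypothetical stand-in for the real processing step (LLM, transcription, etc.).
async function runModel(input: unknown): Promise<unknown> {
  return { summary: `processed ${JSON.stringify(input)}` };
}

// Receive a delivery for a queued job, run the work, and compute
// the fields the worker would write back into InstantDB.
async function handleJob(payload: { record_id: string; record: { input: unknown } }) {
  const output = await runModel(payload.record.input);
  // The worker would then persist this via a server-side write, e.g.
  // an admin API call equivalent to: jobs[payload.record_id].update(update)
  const update = { status: "done", output };
  return { id: payload.record_id, update };
}
```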
⸻
- Serverless Background Jobs
• Data changes trigger:
  • PDF or report generation
  • Image or video rendering
  • Notifications or emails
• No need for a persistent subscriber process
⸻
- External System Integration
• Sync changes to:
  • Search indexes
  • Analytics pipelines
  • Third-party REST APIs
• Clean separation between client writes and backend side effects
⸻
- Replace Long-Lived Subscriptions
For some backend-only workloads, webhooks are preferable to:
• Maintaining always-on connections
• Durable objects or long-running workers
• Polling-based architectures
⸻
Design Considerations (Optional)
Some ideas that may help guide implementation:
• Secure signing (e.g. HMAC signature headers)
• Retry policy with exponential backoff
• Idempotency keys
• Delivery status and observability
• Rate limits per webhook
• Ability to disable or pause webhooks
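For the signing idea, a common scheme is an HMAC-SHA256 of the raw request body, carried in a header the receiver recomputes and compares in constant time. A TypeScript sketch using Node's crypto module (the header name and hex encoding are assumptions, not an existing InstantDB convention):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sender side: sign the raw body with a shared secret and send the
// hex digest in a header, e.g. a hypothetical "X-Instant-Signature".
function signBody(secret: string, body: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

// Receiver side: recompute and compare with timingSafeEqual to avoid
// leaking the signature through timing differences.
function verifySignature(secret: string, body: string, signature: string): boolean {
  const expected = Buffer.from(signBody(secret, body), "hex");
  const given = Buffer.from(signature, "hex");
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```

Pairing this with an idempotency key per delivery would let receivers safely discard duplicate deliveries caused by retries.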
⸻
Why This Fits InstantDB Well
• InstantDB already understands data changes deeply
• Webhooks complement real-time subscriptions rather than replace them
• Enables richer backend workflows without complicating the client model
• Makes InstantDB viable as an event-driven backend core
⸻
Closing
This feature would unlock a large class of event-driven, serverless, and AI-oriented workflows while keeping InstantDB as the authoritative data layer.
Happy to help iterate on API shape or provide real-world examples if useful.