Add new feature documentation and update existing guides across the portal (#256)
* update docs (#250)
* update docs (#252)
Built in regex pattern
Legend and gridline config in dashboards
Env var for shared memtable
Partial data error in dashboards
Explore and analyze feature
Incorporate changes after aggregation cache and streaming aggregation are removed from UI
* update docs (#253)
- Pipeline history
- Alert history
- UDS for metrics and traces
- NATS for internal node coordination
* new doc: result-cache.md, refresh-cache-run-query.md and secondary-index.md doc update: delete.md, systemd.md, explain-analyze-query.md, full-text-search.md, environment-value.md file (#255)
* new doc: secondary-index.md update: delete.md, systemd.md, explain-analyze-query.md, full-text-search.md, environment-value.md file
* new doc: refresh-cache-run-query.md
Delete OpenObserve streams via API. Deletion is async and handled by the compactor. Configure auto-deletion with data retention environment settings.
---
## Delete stream
OpenObserve provides multiple deletion strategies to manage your data lifecycle: immediate complete stream deletion, targeted time-range deletion with job tracking, and automatic retention-based cleanup.
Data deletion is an asynchronous operation; it is performed by the `Compactor`.
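As a minimal sketch of calling the deletion API from a client, the snippet below builds the request URL. The `/api/{org}/streams/{stream}` path shape and the placeholder host, organization, and stream names are assumptions for illustration; verify the exact endpoint against your OpenObserve API reference.

```python
from urllib.parse import quote

def delete_stream_url(base: str, org: str, stream: str) -> str:
    """Build the URL for an assumed DELETE /api/{org}/streams/{stream} call."""
    # quote() guards against org/stream names containing characters
    # that need percent-escaping in a URL path segment.
    return f"{base}/api/{quote(org)}/streams/{quote(stream)}"

print(delete_stream_url("http://localhost:5080", "default", "app_logs"))
# http://localhost:5080/api/default/streams/app_logs
```

You would issue an HTTP `DELETE` against this URL with your deployment's usual authentication headers.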
#### Response fields
| Field | Type | Description |
|-------|------|-------------|
| code | integer | HTTP status code |
| message | string | Confirmation message |

#### Status codes
| Code | Meaning |
|------|---------|
| 200 | Stream deleted successfully |
| 400 | Invalid parameters |
| 404 | Stream not found |
| 500 | Internal server error |

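A caller can map the documented fields and status codes onto a simple check. This is a sketch, not the official client: the response text is illustrative, only the field names and codes come from the tables above.

```python
import json

# Illustrative response body shaped like the documented fields.
raw = '{"code": 200, "message": "stream deletion initiated"}'
resp = json.loads(raw)

# Status codes documented for this endpoint.
STATUS_MEANINGS = {
    200: "Stream deleted successfully",
    400: "Invalid parameters",
    404: "Stream not found",
    500: "Internal server error",
}

print(STATUS_MEANINGS.get(resp["code"], "Unexpected status"))
```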
#### Behavior
Deletion is asynchronous and does not happen immediately:

1. When you call this API, the deletion request is marked in the system.
2. The API responds immediately; you do not wait for the actual deletion.
3. A background service called the Compactor checks for pending deletions every 10 minutes.
4. When the Compactor runs, it starts deleting your stream. This can take anywhere from seconds to several minutes, depending on how much data the stream contains.
5. In the worst case (if you request deletion just before the Compactor runs), the entire process can take up to 30 minutes in total.
6. You do not need to wait; deletion happens in the background. You can check the stream status later to confirm it has been deleted.

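Since the API returns before the data is gone, a client that needs confirmation can poll instead of blocking. A minimal sketch, where `stream_exists` is a hypothetical callable you supply (for example, a lookup against the stream list endpoint):

```python
import time

def wait_for_deletion(stream_exists, timeout_s=1800, poll_s=60):
    """Poll until stream_exists() reports the stream gone, or give up.

    The default 1800 s timeout matches the worst-case ~30-minute
    window described above; poll_s controls the check interval.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if not stream_exists():
            return True   # stream gone: deletion completed
        time.sleep(poll_s)
    return False          # still present after the timeout

# Example with a stub check that reports the stream gone on the second poll:
calls = iter([True, False])
print(wait_for_deletion(lambda: next(calls), timeout_s=5, poll_s=0))  # True
```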
!!! note "Notes"
    - This operation cannot be undone.
    - Data is deleted from both the `file_list` table and the object store.
    - No job tracking is available for this endpoint.

!!! note "Environment variables"
    - You can change the Compactor run interval with `ZO_COMPACT_INTERVAL=600`. The unit is seconds; the default is `600` (10 minutes).
    - You can configure the data lifecycle to auto-delete old data with `ZO_COMPACT_DATA_RETENTION_DAYS=30`. The system then automatically deletes data older than 30 days. The value must be greater than `0`.
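To make the worst-case timing concrete: a request that lands just after a Compactor pass waits one full `ZO_COMPACT_INTERVAL` before being picked up, and then the deletion work itself runs. The 20-minute bound on deletion work below is an assumed figure, chosen so the arithmetic matches the 30-minute worst case stated above.

```python
# Back-of-envelope worst case for stream deletion latency.
compact_interval_s = 600      # ZO_COMPACT_INTERVAL default (seconds)
deletion_work_s = 20 * 60     # assumed upper bound for a large stream

worst_case_min = (compact_interval_s + deletion_work_s) / 60
print(worst_case_min)  # 30.0
```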
### Delete stream data by time range
Delete stream data within a specific time period with job tracking.