This repository was archived by the owner on Dec 17, 2024. It is now read-only.
README.md (+5 −2):

```diff
@@ -88,8 +88,10 @@ For more background on the project motivations and design goals see the original
 * Min 1GB RAM required for Docker setup. Just the gatherer needs <50MB if metric store is up, otherwise metrics are cached in RAM up to a limit of 10k data points.
 * 2 GBs of disk space should be enough for monitoring 1 DB for 1 month with InfluxDB. 1 month is also the default metrics
 retention policy for Influx running in Docker (configurable). Depending on the amount of schema objects - tables, indexes, stored
-procedures and especially on number of unique SQL-s, it could be also much more. With Postgres as metric store multiply it with ~5x.
-There's also a "test data generation" mode in the collector to exactly determine disk footprint - see PW2_TESTDATA_DAYS and
+procedures and especially on number of unique SQL-s, it could be also much more. With Postgres as metric store multiply it with ~5x,
+but if disk size reduction is wanted for PostgreSQL storage then the simplest way is to use the TimescaleDB extension - it has
+built-in compression and disk footprint is on the same level with InfluxDB, while retaining full SQL support.
+There's also a "test data generation" mode in the collector to exactly determine disk footprint for your use case - see PW2_TESTDATA_DAYS and
 PW2_TESTDATA_MULTIPLIER params for that (requires also "ad-hoc" mode params).
 * A low-spec (1 vCPU, 2 GB RAM) cloud machine can easily monitor 100 DBs in "exhaustive" settings (i.e. almost all metrics
 are monitored in 1-2min intervals) without breaking a sweat (<20% load). When a single node where the metrics collector daemon
```
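The "test data generation" mode mentioned in the hunk above could be invoked roughly as follows. This is a sketch, not an exact recipe: only `PW2_TESTDATA_DAYS` and `PW2_TESTDATA_MULTIPLIER` are named in the README text; the image name and the ad-hoc mode parameters (`PW2_ADHOC_CONN_STR`, `PW2_ADHOC_CONFIG`) are assumptions based on the collector's usual env-var naming, and the connection string is a placeholder:

```shell
# Sketch only - image name, ad-hoc params and connection string are
# placeholders/assumptions; PW2_TESTDATA_* params are from the README.
docker run --rm \
  -e PW2_ADHOC_CONN_STR="postgresql://user:pass@somehost:5432/mydb" \
  -e PW2_ADHOC_CONFIG="exhaustive" \
  -e PW2_TESTDATA_DAYS=7 \
  -e PW2_TESTDATA_MULTIPLIER=10 \
  cybertec/pgwatch2-daemon
```

With such a run one can measure the resulting metric-store size for a week of simulated data and extrapolate from there.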
```diff
@@ -143,6 +145,7 @@ If more complex scenarios/check conditions are required TICK stack and Kapacitor
 - [InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/) Time Series Database for storing metrics.
 - [PostgreSQL](https://www.postgresql.org/) - world's most advanced Open Source RDBMS (based on JSONB, 9.4+ required).
 See "To use an existing Postgres DB for storing metrics" section below for setup details.
+  - NB! Also supported is the TimescaleDB time-series extension, enabling huge disk savings over standard Postgres.
 - [Graphite](https://graphiteapp.org/) (no custom_tags and request batching support)
 - JSON files (for testing / special use cases)
 * [Grafana](http://grafana.org/) for dashboarding (point-and-click, a set of predefined dashboards is provided)
```
pgwatch2/sql/metric_store/README.md (+7):

```diff
@@ -23,6 +23,13 @@ NB! Currently minimum effective retention period with this model is 30 days. Thi
 
 For cases where the available presets are not satisfactory / applicable. All data inserted into "public.metrics" table and the user is responsible for re-routing with a trigger and possible partition management. In that case all table creations and data cleanup must be performed by the user.
 
+## timescale
+
+Assumes TimescaleDB (v1.7+) extension and "outsources" partition management for normal metrics to the extension. Realtime
+metrics still use the "metric-time" schema as sadly Timescale doesn't support unlogged tables. Additionally one can also
+tune the chunking and historic data compression intervals - by default it's 2 days and 1 day. To change use the admin.timescale_change_chunk_interval() and admin.timescale_change_compress_interval()
+functions.
+
 # Data size considerations
 
 When you're planning to monitor lots of databases or with very low intervals, i.e. generating a lot of data, but not selecting
```
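The interval-tuning functions named in the added "timescale" section could be called like this. The `admin.*` function names come from the diff text itself, but the argument form (a Postgres interval) and the `pgwatch2_metrics` database name are assumptions for illustration:

```shell
# Sketch: tune TimescaleDB chunking and compression intervals.
# Function names are from this README; interval arguments and the
# target database name ("pgwatch2_metrics") are assumed placeholders.
psql -d pgwatch2_metrics \
  -c "SELECT admin.timescale_change_chunk_interval('1 day'::interval);"
psql -d pgwatch2_metrics \
  -c "SELECT admin.timescale_change_compress_interval('12 hours'::interval);"
```

Shorter chunk/compress intervals than the 2-day/1-day defaults trade more (smaller) partitions for earlier compression of historic data.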