Commit bfa8bee

mixin: Fix all spelling issues discovered by codespell.
Signed-off-by: Mario Trangoni <[email protected]>
1 parent 3cef1b6 commit bfa8bee

4 files changed, +4 -4 lines changed


mixin/README.md (+1 -1)

@@ -2,7 +2,7 @@
 
 > Note that everything is experimental and may change significantly at any time. Also it still has missing alert and dashboard definitions for certain components, e.g. rule and sidecar. Please feel free to contribute.
 
-This directory contains extensible and customizable monitoring definitons for Thanos. [Grafana](http://grafana.com/) dashboards, and [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) combined with documentation and scripts to provide easy monitoring experience for Thanos.
+This directory contains extensible and customizable monitoring definitions for Thanos. [Grafana](http://grafana.com/) dashboards, and [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) combined with documentation and scripts to provide easy monitoring experience for Thanos.
 
 You can find more about monitoring-mixins in [the design document](https://github.com/monitoring-mixins/docs/blob/master/design.pdf), and you can check out other examples like [Prometheus Mixin](https://github.com/prometheus/prometheus/tree/master/documentation/prometheus-mixin).

mixin/alerts/query.libsonnet (+1 -1)

@@ -145,7 +145,7 @@
       {
         alert: 'ThanosQueryOverload',
         annotations: {
-          description: 'Thanos Query {{$labels.job}}%s has been overloaded for more than 15 minutes. This may be a symptom of excessive simultanous complex requests, low performance of the Prometheus API, or failures within these components. Assess the health of the Thanos query instances, the connnected Prometheus instances, look for potential senders of these requests and then contact support.' % location,
+          description: 'Thanos Query {{$labels.job}}%s has been overloaded for more than 15 minutes. This may be a symptom of excessive simultaneous complex requests, low performance of the Prometheus API, or failures within these components. Assess the health of the Thanos query instances, the connected Prometheus instances, look for potential senders of these requests and then contact support.' % location,
           summary: 'Thanos query reaches its maximum capacity serving concurrent requests.',
         },
         expr: |||

mixin/dashboards/receive.libsonnet (+1 -1)

@@ -200,7 +200,7 @@ local utils = import '../lib/utils.libsonnet';
         )
       )
       .addPanel(
-        g.panel('Errors', 'Shows ratio of errors compared to the total number of forwareded requests to other receive nodes.') +
+        g.panel('Errors', 'Shows ratio of errors compared to the total number of forwarded requests to other receive nodes.') +
         g.qpsErrTotalPanel(
           'thanos_receive_forward_requests_total{%s}' % utils.joinLabels([thanos.receive.dashboard.selector, 'result="error"']),
           'thanos_receive_forward_requests_total{%s}' % thanos.receive.dashboard.selector,

mixin/runbook.md (+1 -1)

@@ -50,7 +50,7 @@
 |ThanosQueryHighDNSFailures|Thanos Query is having high number of DNS failures.|Thanos Query {{$labels.job}} have {{$value humanize}}% of failing DNS queries for store endpoints.|warning|[https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryhighdnsfailures](https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryhighdnsfailures)|
 |ThanosQueryInstantLatencyHigh|Thanos Query has high latency for queries.|Thanos Query {{$labels.job}} has a 99th percentile latency of {{$value}} seconds for instant queries.|critical|[https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryinstantlatencyhigh](https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryinstantlatencyhigh)|
 |ThanosQueryRangeLatencyHigh|Thanos Query has high latency for queries.|Thanos Query {{$labels.job}} has a 99th percentile latency of {{$value}} seconds for range queries.|critical|[https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryrangelatencyhigh](https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryrangelatencyhigh)|
-|ThanosQueryOverload|Thanos query reaches its maximum capacity serving concurrent requests.|Thanos Query {{$labels.job}} has been overloaded for more than 15 minutes. This may be a symptom of excessive simultanous complex requests, low performance of the Prometheus API, or failures within these components. Assess the health of the Thanos query instances, the connnected Prometheus instances, look for potential senders of these requests and then contact support.|warning|[https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryoverload](https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryoverload)|
+|ThanosQueryOverload|Thanos query reaches its maximum capacity serving concurrent requests.|Thanos Query {{$labels.job}} has been overloaded for more than 15 minutes. This may be a symptom of excessive simultaneous complex requests, low performance of the Prometheus API, or failures within these components. Assess the health of the Thanos query instances, the connected Prometheus instances, look for potential senders of these requests and then contact support.|warning|[https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryoverload](https://github.com/thanos-io/thanos/tree/main/mixin/runbook.md#alert-name-thanosqueryoverload)|
 
 ## thanos-receive
