Added helm chart for observability #21
base: master
Conversation
- name: node_cpu_usage
  interval: 5s
  rules:
    - record: node_cpu_usage_percent
@balis I would recommend adding 1-2 sentences per metric explaining what it does, so the next people have a less steep learning curve.
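A sketch of what such per-metric documentation could look like, using the rule name from the diff above; the comment wording and the `expr` are illustrative assumptions, not taken from the chart:

```yaml
rules:
  # node_cpu_usage_percent: share of CPU time spent in non-idle modes per
  # node, averaged over a 5-minute window. (Comment text and expression
  # are illustrative, not from this PR.)
  - record: node_cpu_usage_percent
    expr: 100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))
```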
replicas: 1
config:
  opensearch.yml: |
@balis this looks very worrying, we need to talk about this
action: keep
processors:
  batch: { }
@balis are you sure this has to be defined as empty?
Via https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/batchprocessor:

The following configuration options can be modified:
- send_batch_size (default = 8192): number of spans, metric data points, or log records after which a batch is sent regardless of the timeout. send_batch_size acts as a trigger and does not affect the size of the batch. If you need to enforce batch size limits sent to the next component in the pipeline, see send_batch_max_size.
- timeout (default = 200ms): time duration after which a batch is sent regardless of size. If set to zero, send_batch_size is ignored, as data is sent immediately, subject only to send_batch_max_size.
- send_batch_max_size (default = 0): upper limit of the batch size; 0 means no upper limit. This property ensures that larger batches are split into smaller units. It must be greater than or equal to send_batch_size.
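Given those defaults, an empty `batch: { }` is a valid configuration that simply uses the default values above. If the defaults were ever not enough, a non-empty configuration might look like this (the numeric values are purely illustrative):

```yaml
processors:
  batch:
    # flush after 10s even if the batch is not full
    timeout: 10s
    # trigger a send once 10000 items have accumulated
    send_batch_size: 10000
    # split anything larger than 11000 items into smaller batches;
    # must be >= send_batch_size
    send_batch_max_size: 11000
```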
@@ -173,6 +173,8 @@ hyperflow-engine:
    value: "${enableTracing}"
  - name: HF_VAR_ENABLE_OTEL
    value: "${enableOtel}"
  - name: HF_VAR_OPT_URL
    value: "http://hf-obs-opentelemetry-collector"
@balis I don't like that this is defined both here and in c12659e#diff-7800e510fef5761baa4ff5930e280adbc39c087c52583ca395d8aa5d38c86dc6R69; we should talk about why it is in 2 places.
- name: HF_VAR_ENABLE_OTEL
  value: "1"
- name: HF_VAR_OPT_URL
  value: "http://hf-obs-opentelemetry-collector"
@balis or even 3 :-)
Removed one of them
(c12659e to c76989e)
No description provided.