As a developer or administrator, you can use the {product-long-kafka} web console to view and inspect messages for a Kafka topic. You might use this functionality to verify that a client is producing messages to the expected topic partition or that messages have the expected content.

When you select a topic in the web console, you can use the *Messages* tab to view a list of messages for that topic. You can filter the list of messages in the following ways:

* Specify a partition and see messages sent to the partition.
* Specify a partition and offset and see messages sent to the partition from that offset.
The following procedure shows how to filter and inspect a list of messages for a topic in the {product-long-kafka} web console.

.Prerequisites

* You have a Kafka instance with a topic that contains some messages. To learn how to create your _first_ Kafka instance and topic and then send messages to the topic that will appear on the *Messages* page, see the following guides:
+
** {base-url}{getting-started-url-kafka}[Getting started with {product-long-kafka}^]
** {base-url}{kafka-bin-scripts-url-kafka}[Configuring and connecting Kafka scripts with {product-long-kafka}^]

.Procedure
. To copy the full message value or header data, click the copy icon next to the data in the *Message* pane.
. To see messages for a different topic partition, select a new value in the *Partition* list.
+
NOTE: If you have many partitions, you can filter the values shown in the *Partition* list by typing a value in the field.

. To further refine the list of messages in the table, use the filter controls at the top of the *Messages* page.
+
--
* To filter messages by topic partition and offset, perform the following actions:
... In the *Partition* field, select a topic partition.
... Click the arrow next to `Latest messages` and select `Offset` from the list.
... In the *Specify offset* field, type an offset value.
... To apply your filter settings, click the search (magnifying glass) icon.

* To filter messages by topic partition and date and time, perform the following actions:
... In the *Partition* field, select a topic partition.
... Click the arrow next to `Latest messages` and select `Timestamp` from the list.
+
Additional selection tools appear.
... Use the additional selection tools to set date and time values. Alternatively, type a date and time value in the format shown in the field.
... To apply your filter settings, click the search (magnifying glass) icon.

* To filter messages by topic partition and epoch timestamp, perform the following actions:
... In the *Partition* field, select a topic partition.
... Click the arrow next to `Latest messages` and select `Epoch timestamp` from the list.
... In the *Epoch timestamp* field, type or paste an epoch timestamp value.
+
NOTE: You can easily convert a human-readable date and time to an epoch value using a https://www.epochconverter.com/[timestamp conversion tool^].
... To apply your filter settings, click the search (magnifying glass) icon.

--
+
Based on your filter settings, the *Messages* page automatically reloads the list of messages in the table.

. To clear your existing offset, timestamp, or epoch timestamp selections and revert to seeing the latest messages in the selected partition, click the arrow next to your current selection and select `Latest messages` from the list.

As a developer or administrator, you can view metrics in {product-long-kafka} to visualize the performance and data usage for Kafka instances and topics that you have access to. You can view metrics directly in the {product-kafka} web console, or use the metrics API endpoint provided by {product-kafka} to import the data into your own metrics monitoring tool, such as Prometheus.

//Additional line break to resolve mod docs generation error, not sure why. Leaving for now. (Stetson, 20 May 2021)

[id="ref-supported-metrics_{context}"]
== Supported metrics in {product-kafka}

[role="_abstract"]
{product-long-kafka} supports the following metrics for Kafka instances and topics. In the {product-kafka} web console, the *Dashboard* page of a Kafka instance displays a subset of these metrics. To learn more about the limits associated with both trial and production Kafka instance types, see https://access.redhat.com/articles/5979061[Red Hat OpenShift Streams for Apache Kafka Service Limits^].

Cluster metrics::
== Viewing metrics for a Kafka instance in {product-kafka}

[role="_abstract"]
After you produce and consume messages in your services using methods such as link:https://kafka.apache.org/downloads[Kafka^] scripts, link:https://github.com/edenhill/kcat[Kcat^], or a link:https://quarkus.io/[Quarkus^] application, you can return to the Kafka instance in the web console and use the *Dashboard* page to view metrics for the instance and topics. The metrics help you understand the performance and data usage for your Kafka instance and topics.

.Prerequisites
* You have access to a Kafka instance in {product-kafka} that contains topics. For more information about access management in {product-kafka}, see {base-url}{access-mgmt-url-kafka}[Managing account access in {product-long-kafka}^].
== Configuring metrics monitoring for a Kafka instance in Prometheus

[role="_abstract"]
As an alternative to viewing metrics for a Kafka instance in the {product-long-kafka} web console, you can export your metrics to https://prometheus.io/docs/introduction/overview/[Prometheus^] and integrate the metrics with your own metrics monitoring platform. {product-kafka} provides a `kafkas/{id}/metrics/federate` API endpoint that you can configure as a scrape target for Prometheus to use to collect and store metrics. You can then access the metrics in the https://prometheus.io/docs/visualization/browser/[Prometheus expression browser^] or in a data-graphing tool such as https://prometheus.io/docs/visualization/grafana/[Grafana^].

This procedure follows the https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file[Configuration File^] method defined by Prometheus for integrating third-party metrics. If you use the Prometheus Operator in your monitoring environment, you can also follow the https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md#additional-scrape-configuration[Additional Scrape Configuration^] method.

.Prerequisites
* You have access to a Kafka instance that contains topics in {product-kafka}. For more information about access management in {product-kafka}, see {base-url}{access-mgmt-url-kafka}[Managing account access in {product-long-kafka}^].
* You have the ID and the SASL/OAUTHBEARER token endpoint for the Kafka instance. To locate the Kafka instance ID and the token endpoint, select your Kafka instance in the {product-kafka} web console, select the options menu (three vertical dots), and click *Connection*.
* You have the generated credentials for your service account that has access to the Kafka instance. To reset the credentials, use the {service-accounts-url}[Service Accounts^] page in the *Application Services* section of the Red Hat Hybrid Cloud Console.
* You've installed a Prometheus instance in your monitoring environment. For installation instructions, see https://prometheus.io/docs/prometheus/latest/getting_started/[Getting Started^] in the Prometheus documentation.

.Procedure
. In your Prometheus configuration file, add the following information. Replace the variable values with your own Kafka instance and service account information.
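+
--
The following `prometheus.yml` snippet is an illustrative sketch rather than the exact configuration for this procedure: the job name, metrics path prefix, and API host are assumptions, and the `oauth2` block uses standard Prometheus scrape configuration options for supplying your service account credentials and the SASL/OAUTHBEARER token endpoint.

[source,yaml,subs="+quotes"]
----
scrape_configs:
  - job_name: "kafka-federate"                  # assumed job name
    metrics_path: "/api/kafkas_mgmt/v1/kafkas/__<kafka_instance_id>__/metrics/federate"   # kafkas/{id}/metrics/federate endpoint; the path prefix is an assumption
    scheme: "https"
    oauth2:                                     # standard Prometheus OAuth2 client credentials settings
      client_id: "__<client_id>__"              # service account client ID
      client_secret: "__<client_secret>__"      # service account client secret
      token_url: "__<token_endpoint>__"         # SASL/OAUTHBEARER token endpoint for the Kafka instance
    static_configs:
      - targets: ["__<api_host>__"]             # placeholder for the management API host
----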
The new scrape target becomes available after the configuration has reloaded.
--

. View your collected metrics in the Prometheus expression browser at `http://__<host>__:__<port>__/graph`, or integrate your Prometheus data source with a data-graphing tool such as Grafana. For information about Prometheus metrics in Grafana, see https://prometheus.io/docs/visualization/grafana/[Grafana Support for Prometheus^] in the Grafana documentation.
+
--
If you use Grafana with your Prometheus instance, you can import the predefined https://grafana.com/grafana/dashboards/15835[{product-long-kafka} Grafana dashboard^] to set up your metrics display. For import instructions, see https://grafana.com/docs/grafana/v7.5/dashboards/export-import/#importing-a-dashboard[Importing a dashboard^] in the Grafana documentation.
--

When you create a Kafka instance and add new topics, the metrics are initially empty. After you start producing and consuming messages in your services, you can return to your monitoring tool to view related metrics. For example, to use Kafka scripts to produce and consume messages, see {base-url}{kafka-bin-scripts-url-kafka}[Configuring and connecting Kafka scripts with {product-long-kafka}^].

[NOTE]
====
If you use the Prometheus Operator in your monitoring environment, you can alternatively create a `kafka-federate.yaml` file as an additional scrape configuration in your Prometheus custom resource as shown in the following example commands. For more information about this method, see https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md#additional-scrape-configuration[Additional Scrape Configuration^] in the Prometheus documentation.

.Example `kafka-federate.yaml` file
[source,yaml,subs="+quotes"]
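----
# Illustrative sketch only: the full example file is not reproduced here.
# The Secret name, file key, and scrape job settings below are assumptions;
# adjust them to match your Prometheus Operator deployment and reference the
# Secret from the additionalScrapeConfigs field of your Prometheus custom resource.
apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs             # assumed Secret name
stringData:
  kafka-federate.yaml: |
    - job_name: "kafka-federate"              # assumed job name
      metrics_path: "/api/kafkas_mgmt/v1/kafkas/__<kafka_instance_id>__/metrics/federate"   # kafkas/{id}/metrics/federate endpoint; the path prefix is an assumption
      scheme: "https"
      oauth2:
        client_id: "__<client_id>__"          # service account client ID
        client_secret: "__<client_secret>__"  # service account client secret
        token_url: "__<token_endpoint>__"     # SASL/OAUTHBEARER token endpoint for the Kafka instance
      static_configs:
        - targets: ["__<api_host>__"]         # placeholder for the management API host
----
====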
.Prerequisites
* You have successfully configured metrics monitoring for a Kafka instance in Prometheus.
* You use the Prometheus Operator in your monitoring environment.
* You can define https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/[alerting rules^] in Prometheus and can deploy an https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/alerting.md/[Alertmanager cluster^] in Prometheus Operator.

.Procedure
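As a hedged illustration of the kind of resource this procedure can produce, an alerting rule for a Kafka instance metric might be defined as a Prometheus Operator `PrometheusRule` resource similar to the following sketch. The resource name, group name, alert name, metric, and threshold are assumptions, not values from this guide.

[source,yaml,subs="+quotes"]
----
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kafka-instance-alerts                 # assumed resource name
  labels:
    role: alert-rules                         # match the ruleSelector of your Prometheus custom resource
spec:
  groups:
    - name: kafka-instance                    # assumed group name
      rules:
        - alert: KafkaMetricAboveThreshold    # assumed alert name
          expr: __<kafka_metric>__ > __<threshold>__   # replace with a supported Kafka instance or topic metric and a threshold
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "A Kafka instance metric has exceeded the configured threshold."
----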
[role="_additional-resources"]
.Additional resources
* {base-url}{getting-started-url-kafka}[Getting started with {product-long-kafka}^]
* {base-url}{getting-started-rhoas-cli-url-kafka}[Getting started with the `rhoas` CLI for {product-long-kafka}^]