
Commit 6a8d45d

Updated Metrics and Messages guides according to agreed style guidelines (#588)
* Updated Metrics and Messages guides according to agreed style guidelines.
* Implemented reviewer's feedback.
1 parent 14bc1db commit 6a8d45d

2 files changed: +25 -26 lines changed

docs/kafka/message-browsing-kafka/README.adoc

Lines changed: 12 additions & 13 deletions
@@ -86,9 +86,9 @@ ifdef::context[:parent-context: {context}]
 // Purpose statement for the assembly
 [role="_abstract"]
 
-As a developer or administrator, you can use the {product-long-kafka} web console to view and inspect messages for a Kafka topic. You might use this functionality, for example, to verify that a client is producing messages to the expected topic partition, that your topic is storing messages correctly, or that messages have the expected content.
+As a developer or administrator, you can use the {product-long-kafka} web console to view and inspect messages for a Kafka topic. You might use this functionality to verify that a client is producing messages to the expected topic partition or that messages have the expected content.
 
-When you select a topic in the console, you can use the *Messages* tab to view a list of messages for that topic. You can filter the list of messages in the following ways:
+When you select a topic in the web console, you can use the *Messages* tab to view a list of messages for that topic. You can filter the list of messages in the following ways:
 
 * Specify a partition and see messages sent to the partition.
 * Specify a partition and offset and see messages sent to the partition from that offset.
@@ -100,14 +100,14 @@ When you select a topic in the console, you can use the *Messages* tab to view a
 [id="proc-browsing-messages-for-a-topic_{context}"]
 == Browsing messages for a topic
 
-The following procedure shows how to filter and inspect a list of messages for a topic in the {product-kafka} web console.
+The following procedure shows how to filter and inspect a list of messages for a topic in the {product-long-kafka} web console.
 
 .Prerequisites
 
 * You have a Kafka instance with a topic that contains some messages. To learn how to create your _first_ Kafka instance and topic and then send messages to the topic that will appear on the *Messages* page, see the following guides:
 +
-** {base-url}{getting-started-url-kafka}[_Getting started with {product-long-kafka}_^]
-** {base-url}{kafka-bin-scripts-url-kafka}[_Configuring and connecting Kafka scripts with {product-long-kafka}_^]
+** {base-url}{getting-started-url-kafka}[Getting started with {product-long-kafka}^]
+** {base-url}{kafka-bin-scripts-url-kafka}[Configuring and connecting Kafka scripts with {product-long-kafka}^]
 
 .Procedure
 
@@ -143,40 +143,39 @@ Similarly, if a message is encoded (for example, in a format such as UTF-8 or Ba
 
 . To copy the full message value or header data, click the copy icon next to the data in the *Message* pane.
 
-. To see messages for a different topic partition, select a new value in the *Partition* drop-down menu.
+. To see messages for a different topic partition, select a new value in the *Partition* list.
 +
-NOTE: If you have many partitions, you can filter the list shown in the drop-down menu by typing a value in the field.
+NOTE: If you have many partitions, you can filter the values shown in the *Partition* list by typing a value in the field.
 
 . To further refine the list of messages in the table, use the filter controls at the top of the *Messages* page.
 +
 --
 * To filter messages by topic partition and offset, perform the following actions:
 ... In the *Partition* field, select a topic partition.
-... In the drop-down menu that shows a default value of `Offset`, keep this default value.
-... In the *Offset* field, type an offset value.
+... Click the arrow next to `Latest messages` and select `Offset` from the list.
+... In the *Specify offset* field, type an offset value.
 ... To apply your filter settings, click the search (magnifying glass) icon.
 
 * To filter messages by topic partition and date and time, perform the following actions:
 ... In the *Partition* field, select a topic partition.
-... In the drop-down menu that shows a default value of `Offset`, change the value to `Timestamp`.
+... Click the arrow next to `Latest messages` and select `Timestamp` from the list.
 +
 Additional selection tools appear.
 ... Use the additional selection tools to set date and time values. Alternatively, type a date and time value in the format shown in the field.
 ... To apply your filter settings, click the search (magnifying glass) icon.
 
 * To filter messages by topic partition and epoch timestamp, perform the following actions:
 ... In the *Partition* field, select a topic partition.
-... In the drop-down menu that shows a default value of `Offset`, change the value to `Epoch timestamp`.
+... Click the arrow next to `Latest messages` and select `Epoch timestamp` from the list.
 ... In the *Epoch timestamp* field, type or paste an epoch timestamp value.
 +
 NOTE: You can easily convert a human-readable date and time to an epoch value using a https://www.epochconverter.com/[timestamp conversion tool^].
 ... To apply your filter settings, click the search (magnifying glass) icon.
 
 --
 +
-Based on your filter settings, the *Messages* page automatically reloads the list of messsages in the table.
+Based on your filter settings, the *Messages* page automatically reloads the list of messages in the table.
 
-. To clear your existing offset, timestamp, or epoch timestamp selections and revert to seeing the latest messages in the selected partition, select `Latest messages` in the drop-down menu that has a default value of `Offset`.
 
 ifdef::parent-context[:context: {parent-context}]
 ifndef::parent-context[:!context:]

docs/kafka/metrics-monitoring-kafka/README.adoc

Lines changed: 13 additions & 13 deletions
@@ -85,15 +85,15 @@ ifdef::context[:parent-context: {context}]
 
 // Purpose statement for the assembly
 [role="_abstract"]
-As a developer or administrator, you can view metrics in {product-kafka} to visualize the performance and data usage for Kafka instances and topics that you have access to. You can view metrics directly in the {product-kafka} web console, or use the metrics API endpoint provided by {product-kafka} to import the data into your own metrics monitoring tool, such as Prometheus.
+As a developer or administrator, you can view metrics in {product-long-kafka} to visualize the performance and data usage for Kafka instances and topics that you have access to. You can view metrics directly in the {product-kafka} web console, or use the metrics API endpoint provided by {product-kafka} to import the data into your own metrics monitoring tool, such as Prometheus.
 
 //Additional line break to resolve mod docs generation error, not sure why. Leaving for now. (Stetson, 20 May 2021)
 
 [id="ref-supported-metrics_{context}"]
 == Supported metrics in {product-kafka}
 
 [role="_abstract"]
-{product-kafka} supports the following metrics for Kafka instances and topics. In the {product-kafka} web console, the *Dashboard* page of a Kafka instance displays a subset of these metrics. To learn more about the limits associated with both trial and production Kafka instance types, refer to https://access.redhat.com/articles/5979061[Red Hat OpenShift Streams for Apache Kafka Service Limits].
+{product-long-kafka} supports the following metrics for Kafka instances and topics. In the {product-kafka} web console, the *Dashboard* page of a Kafka instance displays a subset of these metrics. To learn more about the limits associated with both trial and production Kafka instance types, see https://access.redhat.com/articles/5979061[Red Hat OpenShift Streams for Apache Kafka Service Limits].
 
 
 Cluster metrics::
@@ -156,7 +156,7 @@ Topic metrics::
 == Viewing metrics for a Kafka instance in {product-kafka}
 
 [role="_abstract"]
-After you produce and consume messages in your services using methods such as link:https://kafka.apache.org/downloads[Kafka] scripts, link:https://github.com/edenhill/kcat[Kafkacat], or a link:https://quarkus.io/[Quarkus] application, you can return to the Kafka instance in the web console and use the *Dashboard* page to view metrics for the instance and topics. The metrics help you understand the performance and data usage for your Kafka instance and topics.
+After you produce and consume messages in your services using methods such as link:https://kafka.apache.org/downloads[Kafka^] scripts, link:https://github.com/edenhill/kcat[Kcat^], or a link:https://quarkus.io/[Quarkus^] application, you can return to the Kafka instance in the web console and use the *Dashboard* page to view metrics for the instance and topics. The metrics help you understand the performance and data usage for your Kafka instance and topics.
 
 .Prerequisites
 * You have access to a Kafka instance in {product-kafka} that contains topics. For more information about access management in {product-kafka}, see {base-url}{access-mgmt-url-kafka}[Managing account access in {product-long-kafka}^].
@@ -174,15 +174,15 @@ NOTE: In some cases, after you start producing and consuming messages, you might
 == Configuring metrics monitoring for a Kafka instance in Prometheus
 
 [role="_abstract"]
-As an alternative to viewing metrics for a Kafka instance in the {product-kafka} web console, you can export your metrics to https://prometheus.io/docs/introduction/overview/[Prometheus] and integrate the metrics with your own metrics monitoring platform. {product-kafka} provides a `kafkas/{id}/metrics/federate` API endpoint that you can configure as a scrape target for Prometheus to use to collect and store metrics. You can then access the metrics in the https://prometheus.io/docs/visualization/browser/[Prometheus expression browser] or in a data-graphing tool such as https://prometheus.io/docs/visualization/grafana/[Grafana].
+As an alternative to viewing metrics for a Kafka instance in the {product-long-kafka} web console, you can export your metrics to https://prometheus.io/docs/introduction/overview/[Prometheus^] and integrate the metrics with your own metrics monitoring platform. {product-kafka} provides a `kafkas/{id}/metrics/federate` API endpoint that you can configure as a scrape target for Prometheus to use to collect and store metrics. You can then access the metrics in the https://prometheus.io/docs/visualization/browser/[Prometheus expression browser^] or in a data-graphing tool such as https://prometheus.io/docs/visualization/grafana/[Grafana^].
 
-This procedure follows the https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file[Configuration File] method defined by Prometheus for integrating third-party metrics. If you use the Prometheus Operator in your monitoring environment, you can also follow the https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md#additional-scrape-configuration[Additional Scrape Configuration] method.
+This procedure follows the https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file[Configuration File^] method defined by Prometheus for integrating third-party metrics. If you use the Prometheus Operator in your monitoring environment, you can also follow the https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md#additional-scrape-configuration[Additional Scrape Configuration^] method.
 
 .Prerequisites
 * You have access to a Kafka instance that contains topics in {product-kafka}. For more information about access management in {product-kafka}, see {base-url}{access-mgmt-url-kafka}[Managing account access in {product-long-kafka}^].
 * You have the ID and the SASL/OAUTHBEARER token endpoint for the Kafka instance. To relocate the Kafka instance ID and the token endpoint, select your Kafka instance in the {product-kafka} web console, select the options menu (three vertical dots), and click *Connection*.
 * You have the generated credentials for your service account that has access to the Kafka instance. To reset the credentials, use the {service-accounts-url}[Service Accounts^] page in the *Application Services* section of the Red Hat Hybrid Cloud Console.
-* You've installed a Prometheus instance in your monitoring environment. For installation instructions, see https://prometheus.io/docs/prometheus/latest/getting_started/[Getting Started] in the Prometheus documentation.
+* You've installed a Prometheus instance in your monitoring environment. For installation instructions, see https://prometheus.io/docs/prometheus/latest/getting_started/[Getting Started^] in the Prometheus documentation.
 
 .Procedure
 . In your Prometheus configuration file, add the following information. Replace the variable values with your own Kafka instance and service account information.
@@ -206,10 +206,10 @@ The `<kafka_instance_id>` is the ID of the Kafka instance. The `<client_id>` and
 
 The new scrape target becomes available after the configuration has reloaded.
 --
-. View your collected metrics in the Prometheus expression browser at `http://__<host>__:__<port>__/graph`, or integrate your Prometheus data source with a data-graphing tool such as Grafana. For information about Prometheus metrics in Grafana, see https://prometheus.io/docs/visualization/grafana/[Grafana Support for Prometheus] in the Grafana documentation.
+. View your collected metrics in the Prometheus expression browser at `http://__<host>__:__<port>__/graph`, or integrate your Prometheus data source with a data-graphing tool such as Grafana. For information about Prometheus metrics in Grafana, see https://prometheus.io/docs/visualization/grafana/[Grafana Support for Prometheus^] in the Grafana documentation.
 +
 --
-If you use Grafana with your Prometheus instance, you can import the predefined https://grafana.com/grafana/dashboards/15835[{product-long-kafka} Grafana dashboard] to set up your metrics display. For import instructions, see https://grafana.com/docs/grafana/v7.5/dashboards/export-import/#importing-a-dashboard[Importing a dashboard] in the Grafana documentation.
+If you use Grafana with your Prometheus instance, you can import the predefined https://grafana.com/grafana/dashboards/15835[{product-long-kafka} Grafana dashboard^] to set up your metrics display. For import instructions, see https://grafana.com/docs/grafana/v7.5/dashboards/export-import/#importing-a-dashboard[Importing a dashboard^] in the Grafana documentation.
 --
 
 When you create a Kafka instance and add new topics, the metrics are initially empty. After you start producing and consuming messages in your services, you can return to your monitoring tool to view related metrics. For example, to use Kafka scripts to produce and consume messages, see {base-url}{kafka-bin-scripts-url-kafka}[Configuring and connecting Kafka scripts with {product-long-kafka}^].
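The scrape job that this step refers to sits in file lines 189-205, which are not shown in this diff. For orientation only, a scrape job for the federate endpoint can look roughly like the following sketch. It assumes the endpoint is reached through the Kafka management API host `api.openshift.com`, the path shown below, and OAuth2 client-credentials authentication against the instance's token endpoint; the authoritative example is in the guide itself.

[source,yaml]
----
# Rough sketch only -- not the example from the guide. Job name, host, and path
# are assumptions; replace the angle-bracket values with your own Kafka
# instance ID, service account credentials, and token endpoint.
scrape_configs:
  - job_name: "rhosak-federate"        # illustrative job name
    metrics_path: "/api/kafka_mgmt/v1/kafkas/<kafka_instance_id>/metrics/federate"   # assumed path
    scheme: https
    oauth2:                            # client-credentials flow (supported in Prometheus 2.27+)
      client_id: "<client_id>"
      client_secret: "<client_secret>"
      token_url: "<token_endpoint>"
    static_configs:
      - targets: ["api.openshift.com"] # assumed API host
----

After Prometheus reloads its configuration, a job defined this way appears as the new scrape target mentioned in the step above.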
@@ -218,7 +218,7 @@ NOTE: In some cases, after you start producing and consuming messages, you might
 
 [NOTE]
 ====
-If you use the Prometheus Operator in your monitoring environment, you can alternatively create a `kafka-federate.yaml` file as an additional scrape configuration in your Prometheus custom resource as shown in the following example commands. For more information about this method, see https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md#additional-scrape-configuration[Additional Scrape Configuration] in the Prometheus documentation.
+If you use the Prometheus Operator in your monitoring environment, you can alternatively create a `kafka-federate.yaml` file as an additional scrape configuration in your Prometheus custom resource as shown in the following example commands. For more information about this method, see https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md#additional-scrape-configuration[Additional Scrape Configuration^] in the Prometheus documentation.
 
 .Example `kafka-federate.yaml` file
 [source,yaml,subs="+quotes"]
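The example commands and the body of `kafka-federate.yaml` are cut off at this point in the diff. As a hedged sketch of the additional-scrape-configuration method in general: the scrape job is stored in a Secret and referenced from the Prometheus custom resource, roughly as follows. Resource names and the scrape job contents here are illustrative, not taken from the guide.

[source,yaml]
----
# Hedged sketch of the Prometheus Operator additional scrape configuration
# method. Secret and Prometheus resource names are illustrative; the scrape
# job mirrors the assumptions noted in the previous sketch.
apiVersion: v1
kind: Secret
metadata:
  name: additional-scrape-configs
stringData:
  kafka-federate.yaml: |
    - job_name: "rhosak-federate"
      metrics_path: "/api/kafka_mgmt/v1/kafkas/<kafka_instance_id>/metrics/federate"
      scheme: https
      oauth2:
        client_id: "<client_id>"
        client_secret: "<client_secret>"
        token_url: "<token_endpoint>"
      static_configs:
        - targets: ["api.openshift.com"]
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  # Tells the operator to append the scrape job stored in the Secret above.
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: kafka-federate.yaml
----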
@@ -263,7 +263,7 @@ spec:
 .Prerequisites
 * You have successfully configured metrics monitoring for a Kafka instance in Prometheus.
 * You use the Prometheus Operator in your monitoring environment.
-* You can define https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/[alerting rules] in Prometheus and can deploy an https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/alerting.md/[Alertmanager cluster] in Prometheus Operator.
+* You can define https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/[alerting rules^] in Prometheus and can deploy an https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/alerting.md/[Alertmanager cluster^] in Prometheus Operator.
 
 
 .Procedure
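The alerting procedure itself falls largely outside this hunk. As a non-authoritative sketch of what these prerequisites lead to, a `PrometheusRule` picked up by the Prometheus Operator can look roughly like the following; the metric name, threshold, and labels are placeholders rather than values from the guide.

[source,yaml]
----
# Placeholder sketch of an alerting rule for federated Kafka metrics.
# The metric name and threshold are illustrative only.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: kafka-instance-alerts
  labels:
    role: alert-rules              # must match the ruleSelector of your Prometheus resource
spec:
  groups:
    - name: kafka-instance
      rules:
        - alert: KafkaMetricThresholdExceeded
          expr: example_kafka_metric > 100   # placeholder metric and threshold
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "A federated Kafka metric crossed its example threshold"
----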
@@ -296,9 +296,9 @@ spec:
 * {base-url}{getting-started-url-kafka}[Getting started with {product-long-kafka}^]
 * {base-url}{getting-started-rhoas-cli-url-kafka}[Getting started with the `rhoas` CLI for {product-long-kafka}^]
 * {base-url-cli}{command-ref-url-cli}[CLI command reference (rhoas)^]
-* https://prometheus.io/docs/prometheus/latest/getting_started/[Getting Started] in the Prometheus documentation
-* https://prometheus.io/docs/visualization/grafana/[Grafana Support for Prometheus]
-* https://grafana.com/docs/grafana/latest/datasources/prometheus/[Prometheus Data Source] in the Grafana documentation
+* https://prometheus.io/docs/prometheus/latest/getting_started/[Getting Started^] in the Prometheus documentation
+* https://prometheus.io/docs/visualization/grafana/[Grafana Support for Prometheus^]
+* https://grafana.com/docs/grafana/latest/datasources/prometheus/[Prometheus Data Source^] in the Grafana documentation
 
 ifdef::parent-context[:context: {parent-context}]
 ifndef::parent-context[:!context:]
