From 1aed35912d3f871a868dc5d05d5d0779fa17c7f8 Mon Sep 17 00:00:00 2001
From: thekofimensah
Date: Tue, 25 Mar 2025 16:46:50 +0900
Subject: [PATCH 01/11] updated doc

---
 troubleshoot/elasticsearch/high-cpu-usage.md | 51 +++++++++++++++++---
 1 file changed, 43 insertions(+), 8 deletions(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index 321bdcf06..182a96833 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -10,8 +10,6 @@ mapped_pages:
 If a thread pool is depleted, {{es}} will [reject requests](rejected-requests.md) related to the thread pool. For example, if the `search` thread pool is depleted, {{es}} will reject search requests until more threads are available.
-You might experience high CPU usage if a [data tier](../../manage-data/lifecycle/data-tiers.md), and therefore the nodes assigned to that tier, is experiencing more traffic than other tiers. This imbalance in resource utilization is also known as [hot spotting](hotspotting.md).
-
 ::::{tip}
 If you're using {{ech}}, you can use AutoOps to monitor your cluster. AutoOps significantly simplifies cluster management with performance recommendations, resource utilization visibility, and real-time issue detection with resolution paths. For more information, refer to [](/deploy-manage/monitor/autoops.md).
 ::::
@@ -70,17 +68,54 @@ This API returns a breakdown of any hot threads in plain text. High CPU usage fr
 The following tips outline the most common causes of high CPU usage and their solutions.
-**Scale your cluster**
+**Check JVM garbage collection**
+
+High CPU usage is often caused by excessive JVM garbage collection (GC) activity. This excessive GC typically arises from configuration problems or inefficient queries causing increased heap memory usage.
+
+For optimal JVM performance, garbage collection should meet these criteria:
+
+1. Young GC completes quickly (ideally within 50 ms).
+2. Young GC does not occur too frequently (approximately once every 10 seconds).
+3. Old GC completes quickly (ideally within 1 second).
+4. Old GC does not occur too frequently (once every 10 minutes or less frequently).
+
+Excessive JVM garbage collection usually indicates high heap memory usage. Common potential reasons for increased heap memory usage include:
+
+* Oversharding of indices
+* Very large aggregation queries
+* Excessively large bulk indexing requests
+* Inefficient or incorrect mapping definitions
+* Improper heap size configuration
+* Misconfiguration of JVM new generation ratio (-XX:NewRatio)
+
+**Hot spotting**
+
+You might experience high CPU usage on specific data nodes or an entire [data tier](/manage-data/lifecycle/data-tiers.md) if traffic isn’t evenly distributed—a scenario known as [hot spotting](hotspotting.md). This commonly occurs when read or write applications don’t properly balance requests across nodes, or when indices receiving heavy write activity (like hot-tier indices) have their shards concentrated on just one or a few nodes.
+
+For details on diagnosing and resolving these issues, see [hot spotting](hotspotting.md).
+
+**Oversharding**
+
+If your Elasticsearch cluster contains a large number of shards, you might be facing an oversharding issue.
+
+Oversharding occurs when there are too many shards, causing each shard to be smaller than optimal. While Elasticsearch doesn’t have a strict minimum shard size, an excessive number of small shards can negatively impact performance. Each shard consumes cluster resources since Elasticsearch must maintain metadata and manage shard states across all nodes.
+
+If you have too many small shards, you can address this by:
-Heavy indexing and search loads can deplete smaller thread pools. To better handle heavy workloads, add more nodes to your cluster or upgrade your existing nodes to increase capacity.
+* Removing empty or unused indices.
+* Deleting or closing indices containing outdated or unnecessary data.
+* Reindexing smaller shards into fewer, larger shards to optimize cluster performance.
-**Spread out bulk requests**
+See [Size your shards](/deploy-manage/production-guidance/optimize-performance/size-shards.md) for more information.
-While more efficient than individual requests, large [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk) or [multi-search](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-msearch) requests still require CPU resources. If possible, submit smaller requests and allow more time between them.
+### Additional recommendations
-**Cancel long-running searches**
+To further reduce CPU load or mitigate temporary spikes in resource usage, consider these steps:
-Long-running searches can block threads in the `search` thread pool. To check for these searches, use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks).
+* Scale your cluster: Heavy indexing and search loads can deplete smaller thread pools.Add nodes or upgrade existing ones to handle increased indexing and search loads more effectively.
+* Spread out bulk requests: Submit smaller [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk-1) or multi-search requests and space them out to avoid overwhelming thread pools.
+* Cancel long-running searches: Regularly use the task management API to identify and cancel searches that consume excessive CPU time. To check
+for these searches, use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-tasks-list).
 ```console
 GET _tasks?actions=*search&detailed

From 1cdee7820325294c5decb26497d12830ec40f722 Mon Sep 17 00:00:00 2001
From: Kofi B
Date: Sat, 12 Apr 2025 09:40:37 +0800
Subject: [PATCH 02/11] Update troubleshoot/elasticsearch/high-cpu-usage.md

Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com>
---
 troubleshoot/elasticsearch/high-cpu-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index 4ab3e3732..cc5514cc7 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -75,7 +75,7 @@ This API returns a breakdown of any hot threads in plain text. High CPU usage fr
 The following tips outline the most common causes of high CPU usage and their solutions.
-**Check JVM garbage collection**
+### Check JVM garbage collection
 High CPU usage is often caused by excessive JVM garbage collection (GC) activity. This excessive GC typically arises from configuration problems or inefficient queries causing increased heap memory usage.
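As an illustrative cross-check of the GC criteria above (not part of the patch series), the nodes stats API reports cumulative counts and times for the `young` and `old` collectors. Dividing `collection_time_in_millis` by `collection_count` approximates the average pause per collection, and sampling the counts a few minutes apart gives the frequency:

```console
GET _nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.gc.collectors
```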
From 1a5a96722b5331e8f9448e5b42f78e4f2ec0cbf1 Mon Sep 17 00:00:00 2001
From: Kofi B
Date: Sat, 12 Apr 2025 09:41:25 +0800
Subject: [PATCH 03/11] Update troubleshoot/elasticsearch/high-cpu-usage.md

Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com>
---
 troubleshoot/elasticsearch/high-cpu-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index cc5514cc7..8becf79ea 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -81,7 +81,7 @@ High CPU usage is often caused by excessive JVM garbage collection (GC) activity
 For optimal JVM performance, garbage collection should meet these criteria:
-1. Young GC completes quickly (ideally within 50 ms).
+* Young GC completes quickly, ideally within 50 milliseconds.
 2. Young GC does not occur too frequently (approximately once every 10 seconds).
 3. Old GC completes quickly (ideally within 1 second).
 4. Old GC does not occur too frequently (once every 10 minutes or less frequently).

From caa5dc15fb388e35a6f055973473f68a923c49d5 Mon Sep 17 00:00:00 2001
From: Kofi B
Date: Sat, 12 Apr 2025 09:41:33 +0800
Subject: [PATCH 04/11] Update troubleshoot/elasticsearch/high-cpu-usage.md

Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com>
---
 troubleshoot/elasticsearch/high-cpu-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index 8becf79ea..78e584127 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -93,7 +93,7 @@ Excessive JVM garbage collection usually indicates high heap memory usage. Commo
 * Excessively large bulk indexing requests
 * Inefficient or incorrect mapping definitions
 * Improper heap size configuration
-* Misconfiguration of JVM new generation ratio (-XX:NewRatio)
+* Misconfiguration of JVM new generation ratio (`-XX:NewRatio`)

From 2890ffb249d6d765705f90165c09e04c74b456a6 Mon Sep 17 00:00:00 2001
From: Kofi B
Date: Sat, 12 Apr 2025 09:42:04 +0800
Subject: [PATCH 05/11] Update troubleshoot/elasticsearch/high-cpu-usage.md

Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com>
---
 troubleshoot/elasticsearch/high-cpu-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index 78e584127..d1010491c 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -97,7 +97,7 @@ Excessive JVM garbage collection usually indicates high heap memory usage. Commo
 **Hot spotting**
-You might experience high CPU usage on specific data nodes or an entire [data tier](/manage-data/lifecycle/data-tiers.md) if traffic isn’t evenly distributed—a scenario known as [hot spotting](hotspotting.md). This commonly occurs when read or write applications don’t properly balance requests across nodes, or when indices receiving heavy write activity (like hot-tier indices) have their shards concentrated on just one or a few nodes.
+You might experience high CPU usage on specific data nodes or an entire [data tier](/manage-data/lifecycle/data-tiers.md) if traffic isn’t evenly distributed. This is known as [hot spotting](hotspotting.md). Hot spotting commonly occurs when read or write applications don’t properly balance requests across nodes, or when indices receiving heavy write activity, such as indices in the hot tier, have their shards concentrated on just one or a few nodes.
 For details on diagnosing and resolving these issues, see [hot spotting](hotspotting.md).

From 12703651a1a0bec1fb10809eadfabc6a4ef7be02 Mon Sep 17 00:00:00 2001
From: Kofi B
Date: Sat, 12 Apr 2025 09:42:12 +0800
Subject: [PATCH 06/11] Update troubleshoot/elasticsearch/high-cpu-usage.md

Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com>
---
 troubleshoot/elasticsearch/high-cpu-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index d1010491c..77d50e2d6 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -107,7 +107,7 @@ If your Elasticsearch cluster contains a large number of shards, you might be fa
 Oversharding occurs when there are too many shards, causing each shard to be smaller than optimal. While Elasticsearch doesn’t have a strict minimum shard size, an excessive number of small shards can negatively impact performance. Each shard consumes cluster resources since Elasticsearch must maintain metadata and manage shard states across all nodes.
-If you have too many small shards, you can address this by:
+If you have too many small shards, you can address this by doing the following:
 * Removing empty or unused indices.
 * Deleting or closing indices containing outdated or unnecessary data.
 * Reindexing smaller shards into fewer, larger shards to optimize cluster performance.

From 449bd66e5ae50e9657ba09ed2b368392d536c9ea Mon Sep 17 00:00:00 2001
From: Kofi B
Date: Sat, 12 Apr 2025 09:42:44 +0800
Subject: [PATCH 07/11] Update troubleshoot/elasticsearch/high-cpu-usage.md

Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com>
---
 troubleshoot/elasticsearch/high-cpu-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index 77d50e2d6..5b4d647fd 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -113,7 +113,7 @@ If you have too many small shards, you can address this by doing the following:
 * Deleting or closing indices containing outdated or unnecessary data.
 * Reindexing smaller shards into fewer, larger shards to optimize cluster performance.
-See [Size your shards](/deploy-manage/production-guidance/optimize-performance/size-shards.md) for more information.
+For more information, refer to [](/deploy-manage/production-guidance/optimize-performance/size-shards.md).
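As an illustrative way to confirm an oversharding problem before consolidating indices (not part of the patch series), the cat shards API can list shards smallest-first; many primary shards far below a gigabyte is a typical warning sign:

```console
GET _cat/shards?v=true&h=index,shard,prirep,store&s=store:asc
```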
 ### Additional recommendations

From 2634e958e63f35c8431786836f9ead29a3b077df Mon Sep 17 00:00:00 2001
From: Kofi B
Date: Sat, 12 Apr 2025 09:43:11 +0800
Subject: [PATCH 08/11] Update troubleshoot/elasticsearch/high-cpu-usage.md

Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com>
---
 troubleshoot/elasticsearch/high-cpu-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index 5b4d647fd..7c8036168 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -119,7 +119,7 @@ For more information, refer to [](/deploy-manage/production-guidance/optimize-pe
 To further reduce CPU load or mitigate temporary spikes in resource usage, consider these steps:
-* Scale your cluster: Heavy indexing and search loads can deplete smaller thread pools.Add nodes or upgrade existing ones to handle increased indexing and search loads more effectively.
+* **Scale your cluster**: Heavy indexing and search loads can deplete smaller thread pools. Add nodes or upgrade existing ones to handle increased indexing and search loads more effectively.
 * Spread out bulk requests: Submit smaller [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk-1) or multi-search requests and space them out to avoid overwhelming thread pools.
 * Cancel long-running searches: Regularly use the task management API to identify and cancel searches that consume excessive CPU time. To check
 for these searches, use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-tasks-list).

From 7002d08d04aa49aa42a2a2aefbaaa584a6948490 Mon Sep 17 00:00:00 2001
From: Kofi B
Date: Sat, 12 Apr 2025 09:43:20 +0800
Subject: [PATCH 09/11] Update troubleshoot/elasticsearch/high-cpu-usage.md

Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com>
---
 troubleshoot/elasticsearch/high-cpu-usage.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index 7c8036168..439bea9d6 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -99,7 +99,7 @@ Excessive JVM garbage collection usually indicates high heap memory usage. Commo
 You might experience high CPU usage on specific data nodes or an entire [data tier](/manage-data/lifecycle/data-tiers.md) if traffic isn’t evenly distributed. This is known as [hot spotting](hotspotting.md). Hot spotting commonly occurs when read or write applications don’t properly balance requests across nodes, or when indices receiving heavy write activity, such as indices in the hot tier, have their shards concentrated on just one or a few nodes.
-For details on diagnosing and resolving these issues, see [hot spotting](hotspotting.md).
+For details on diagnosing and resolving these issues, refer to [](hotspotting.md).
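As an illustrative check for hot spotting (not part of the patch series), comparing per-node thread pool activity can reveal uneven load; persistently higher `active`, `queue`, or `rejected` values on one or two nodes suggest requests or shards are concentrated there:

```console
GET _cat/thread_pool/search,write?v=true&h=node_name,name,active,queue,rejected
```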
 **Oversharding**

From 89e4274902e09245c2ef9d121af642cbc814d411 Mon Sep 17 00:00:00 2001
From: Kofi B
Date: Sat, 12 Apr 2025 09:43:27 +0800
Subject: [PATCH 10/11] Update troubleshoot/elasticsearch/high-cpu-usage.md

Co-authored-by: shainaraskas <58563081+shainaraskas@users.noreply.github.com>
---
 troubleshoot/elasticsearch/high-cpu-usage.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index 439bea9d6..8c761c5b4 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -120,9 +120,8 @@ For more information, refer to [](/deploy-manage/production-guidance/optimize-pe
 To further reduce CPU load or mitigate temporary spikes in resource usage, consider these steps:
 * **Scale your cluster**: Heavy indexing and search loads can deplete smaller thread pools. Add nodes or upgrade existing ones to handle increased indexing and search loads more effectively.
-* Spread out bulk requests: Submit smaller [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk-1) or multi-search requests and space them out to avoid overwhelming thread pools.
-* Cancel long-running searches: Regularly use the task management API to identify and cancel searches that consume excessive CPU time. To check
-for these searches, use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-tasks-list).
+* **Spread out bulk requests**: Submit smaller [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk-1) or multi-search requests, and space them out to avoid overwhelming thread pools.
+* **Cancel long-running searches**: Regularly use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-tasks-list) to identify and cancel searches that consume excessive CPU time.
 ```console
 GET _tasks?actions=*search&detailed

From 6a396b91a3eed6492eba355cac32eec36748dbe5 Mon Sep 17 00:00:00 2001
From: thekofimensah
Date: Mon, 14 Apr 2025 09:29:32 +0800
Subject: [PATCH 11/11] Incorporated suggestions on headings and subsections
 and rewording and clarifying section

---
 troubleshoot/elasticsearch/high-cpu-usage.md | 40 ++++++++++++--------
 1 file changed, 24 insertions(+), 16 deletions(-)

diff --git a/troubleshoot/elasticsearch/high-cpu-usage.md b/troubleshoot/elasticsearch/high-cpu-usage.md
index 8c761c5b4..b0320bd84 100644
--- a/troubleshoot/elasticsearch/high-cpu-usage.md
+++ b/troubleshoot/elasticsearch/high-cpu-usage.md
@@ -25,7 +25,7 @@ If you're using {{ech}}, you can use AutoOps to monitor your cluster. AutoOps si
 ## Diagnose high CPU usage [diagnose-high-cpu-usage]
-**Check CPU usage**
+### Check CPU usage [check-cpu-usage]
 You can check the CPU usage per node using the [cat nodes API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-nodes):
@@ -60,7 +60,7 @@ To track CPU usage over time, we recommend enabling monitoring:
 ::::::
 :::::::
-**Check hot threads**
+### Check hot threads [check-hot-threads]
 If a node has high CPU usage, use the [nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) to check for resource-intensive threads running on the node.
@@ -75,16 +75,16 @@ This API returns a breakdown of any hot threads in plain text. High CPU usage fr
 The following tips outline the most common causes of high CPU usage and their solutions.

-### Check JVM garbage collection +### Check JVM garbage collection [check-jvm-garbage-collection] High CPU usage is often caused by excessive JVM garbage collection (GC) activity. This excessive GC typically arises from configuration problems or inefficient queries causing increased heap memory usage. For optimal JVM performance, garbage collection should meet these criteria: -* Young GC completes quickly, ideally within 50 milliseconds. -2. Young GC does not occur too frequently (approximately once every 10 seconds). -3. Old GC completes quickly (ideally within 1 second). -4. Old GC does not occur too frequently (once every 10 minutes or less frequently). +| GC Type | Completion Time | Occurrence Frequency | +|---------|----------------|---------------------| +| Young GC | <50ms | ~once per 10 seconds | +| Old GC | <1s | ≤once per 10 minutes | Excessive JVM garbage collection usually indicates high heap memory usage. Common potential reasons for increased heap memory usage include: @@ -95,17 +95,15 @@ Excessive JVM garbage collection usually indicates high heap memory usage. Commo * Improper heap size configuration * Misconfiguration of JVM new generation ratio (`-XX:NewRatio`) -**Hot spotting** +### Hot spotting [high-cpu-usage-hot-spotting] -You might experience high CPU usage on specific data nodes or an entire [data tier](/manage-data/lifecycle/data-tiers.md) if traffic isn’t evenly distributed. This is known as [hot spotting](hotspotting.md). Hot spotting commonly occurs when read or write applications don’t properly balance requests across nodes, or when indices receiving heavy write activity, such as indices in the hot tier, have their shards concentrated on just one or a few nodes. +You might experience high CPU usage on specific data nodes or an entire [data tier](/manage-data/lifecycle/data-tiers.md) if traffic isn’t evenly distributed. This is known as [hot spotting](hotspotting.md). Hot spotting commonly occurs when read or write applications don’t evenly distribute requests across nodes, or when indices receiving heavy write activity, such as indices in the hot tier, have their shards concentrated on just one or a few nodes. For details on diagnosing and resolving these issues, refer to [](hotspotting.md). -**Oversharding** +### Oversharding [high-cpu-usage-oversharding] -If your Elasticsearch cluster contains a large number of shards, you might be facing an oversharding issue. - -Oversharding occurs when there are too many shards, causing each shard to be smaller than optimal. While Elasticsearch doesn’t have a strict minimum shard size, an excessive number of small shards can negatively impact performance. Each shard consumes cluster resources since Elasticsearch must maintain metadata and manage shard states across all nodes. +Oversharding occurs when a cluster has too many shards, often times caused by shards being smaller than optimal. While Elasticsearch doesn’t have a strict minimum shard size, an excessive number of small shards can negatively impact performance. Each shard consumes cluster resources since Elasticsearch must maintain metadata and manage shard states across all nodes. If you have too many small shards, you can address this by doing the following: @@ -113,15 +111,25 @@ If you have too many small shards, you can address this by doing the following: * Deleting or closing indices containing outdated or unnecessary data. * Reindexing smaller shards into fewer, larger shards to optimize cluster performance. 
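As a sketch of the reindex option, assuming hypothetical daily indices `my-index-2025.03.*` that should become a single monthly index (the index names are placeholders, and the small source indices still need to be deleted once the reindex completes):

```console
POST _reindex
{
  "source": {
    "index": "my-index-2025.03.*"
  },
  "dest": {
    "index": "my-index-2025.03"
  }
}
```

If the goal is only to reduce the primary shard count of a single large index, the shrink API is a lighter-weight alternative to a full reindex.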
If your shards are sized correctly but you are still experiencing oversharding, creating a more aggressive [index lifecycle management strategy](/manage-data/lifecycle/index-lifecycle-management.md) or deleting old indices can help reduce the number of shards.

For more information, refer to [](/deploy-manage/production-guidance/optimize-performance/size-shards.md).

### Additional recommendations

To further reduce CPU load or mitigate temporary spikes in resource usage, consider these steps:

#### Scale your cluster [scale-your-cluster]

Heavy indexing and search loads can deplete smaller thread pools. Add nodes or upgrade existing ones to handle increased indexing and search loads more effectively.

#### Spread out bulk requests [spread-out-bulk-requests]

Submit smaller [bulk indexing](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk-1) or multi-search requests, and space them out to avoid overwhelming thread pools.

#### Cancel long-running searches [cancel-long-running-searches]

Regularly use the [task management API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-tasks-list) to identify and cancel searches that consume excessive CPU time.

```console
GET _tasks?actions=*search&detailed
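# Illustrative follow-up, not part of the patch series: once the response above
# identifies a runaway search, cancel it by task ID. The ID below is a made-up
# example of the node_id:task_number format returned by the tasks API.
POST _tasks/oTUltX4IQMOUUVeiohTt8A:12345/_cancel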