
Commit 7042abb

Merge pull request #92536 from openshift-cherrypick-robot/cherry-pick-91068-to-enterprise-4.19
[enterprise-4.19] TELCODOCS-2108 PerformanceProfile operations in a hosted cluster
2 parents 7a9e1c7 + bda1d41

6 files changed: +460 -2 lines changed

_topic_maps/_topic_map.yml (+2)

@@ -3392,6 +3392,8 @@ Topics:
   File: cnf-understanding-low-latency
 - Name: Tuning nodes for low latency with the performance profile
   File: cnf-tuning-low-latency-nodes-with-perf-profile
+- Name: Tuning hosted control planes for low latency with the performance profile
+  File: cnf-tuning-low-latency-hosted-cp-nodes-with-perf-profile
 - Name: Provisioning real-time and low latency workloads
   File: cnf-provisioning-low-latency-workloads
 - Name: Debugging low latency tuning
@@ -0,0 +1,184 @@

// Module included in the following assemblies:
//
// * scalability_and_performance/cnf-tuning-low-latency-hosted-cp-nodes-with-perf-profile.adoc

:_mod-docs-content-type: PROCEDURE
[id="apply-performance-profile-hosted-cluster_{context}"]
= Configuring low-latency tuning in a hosted cluster

You can use the Node Tuning Operator to set low latency with the performance profile on the nodes in your hosted cluster. In {hcp}, you configure low-latency tuning by creating config maps that contain `Tuned` objects and referencing those config maps in your node pools. In this case, the `Tuned` object is a `PerformanceProfile` object that defines the performance profile that you want to apply to the nodes in a node pool.
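
For reference, the following is a minimal sketch of such a config map. The `tuning` data key, the `performance` name, and the CPU and NUMA values are illustrative assumptions rather than values taken from this procedure; the file name matches the `my-hosted-cp-performance-profile.yaml` file that is applied in the procedure below.

[source,yaml]
----
# my-hosted-cp-performance-profile.yaml -- illustrative sketch; key names and values are assumptions
apiVersion: v1
kind: ConfigMap
metadata:
  name: performance      # referenced from spec.tuningConfig in the NodePool object
  namespace: clusters    # the namespace where the NodePool objects are defined
data:
  tuning: |              # assumed data key that holds the embedded PerformanceProfile manifest
    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: performance
    spec:
      cpu:
        isolated: "1-3"  # example isolated CPU set
        reserved: "0"    # example reserved CPU set
      numa:
        topologyPolicy: single-numa-node  # matches the policy verified at the end of this procedure
----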

.Procedure

. Export the management cluster `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ export MGMT_KUBECONFIG=<path_to_mgmt_kubeconfig>
----

. Create the `ConfigMap` object in the management cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" apply -f my-hosted-cp-performance-profile.yaml
----

. Edit the `NodePool` object in the `clusters` namespace by running the following command, and add the `spec.tuningConfig` field that references the name of the performance profile config map:
+
[source,terminal]
----
$ oc edit np -n clusters
----
+
[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  annotations:
    hypershift.openshift.io/nodePoolCurrentConfig: 2f752a2c
    hypershift.openshift.io/nodePoolCurrentConfigVersion: 998aa3ce
    hypershift.openshift.io/nodePoolPlatformMachineTemplate: democluster-us-east-1a-3dff55ec
  creationTimestamp: "2025-04-09T09:41:55Z"
  finalizers:
  - hypershift.openshift.io/finalizer
  generation: 1
  labels:
    hypershift.openshift.io/auto-created-for-infra: democluster
  name: democluster-us-east-1a
  namespace: clusters
  ownerReferences:
  - apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    name: democluster
    uid: af77e390-c289-433c-9d29-3aee8e5dc76f
  resourceVersion: "53056"
  uid: 11efa47c-5a7b-476c-85cf-a274f748a868
spec:
  tuningConfig:
  - name: performance
  arch: amd64
  clusterName: democluster
  management:
----
+
[NOTE]
====
You can reference the same profile in multiple node pools. In {hcp}, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the `Tuned` custom resources to distinguish them. After you make the changes, the system detects that a configuration change is required and starts a rolling update of the nodes in that pool to apply the new configuration.
====
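+
To follow the rolling update from the management cluster, you can, for example, watch the node pool status until the update settles:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -n clusters -w
----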

.Verification

. List all node pools across all namespaces by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A
----
+
.Example output
[source,terminal]
----
NAMESPACE   NAME                     CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
clusters    democluster-us-east-1a   democluster   1               1               False         False        4.17.0    False             True
----
+
[NOTE]
====
The `UPDATINGCONFIG` field indicates whether the node pool is in the process of updating its configuration. During this update, the `UPDATINGCONFIG` field in the node pool's status becomes `True`. The new configuration is considered fully applied only when the `UPDATINGCONFIG` field returns to `False`.
====
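+
As a hypothetical convenience, assuming that the `NodePool` resource reports an `UpdatingConfig` status condition, you could block until the configuration rollout completes:
+
[source,terminal]
----
# Sketch only: the UpdatingConfig condition name is an assumption
$ oc --kubeconfig="$MGMT_KUBECONFIG" wait nodepool/democluster-us-east-1a -n clusters --for=condition=UpdatingConfig=False --timeout=30m
----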

. List all config maps in the `clusters-democluster` namespace by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get cm -n clusters-democluster
----
+
.Example output
[source,terminal]
----
NAME                                               DATA   AGE
aggregator-client-ca                               1      69m
auth-config                                        1      68m
aws-cloud-config                                   1      68m
aws-ebs-csi-driver-trusted-ca-bundle               1      66m
...                                                1      67m
kubelet-client-ca                                  1      69m
kubeletconfig-performance-democluster-us-east-1a   1      22m
...
ovnkube-identity-cm                                2      66m
performance-democluster-us-east-1a                 1      22m
...
tuned-performance-democluster-us-east-1a           1      22m
----
+
The output shows that a kubelet config `kubeletconfig-performance-democluster-us-east-1a` and a performance profile `performance-democluster-us-east-1a` have been created. The Node Tuning Operator syncs the `Tuned` objects into the hosted cluster. You can verify which `Tuned` objects are defined and which profiles are applied to each node, as shown at the end of this verification.

. List available secrets on the management cluster by running the following command:
+
[source,terminal]
----
$ oc get secrets -n clusters
----
+
.Example output
[source,terminal]
----
NAME                              TYPE                      DATA   AGE
builder-dockercfg-25qpp           kubernetes.io/dockercfg   1      128m
default-dockercfg-mkvlz           kubernetes.io/dockercfg   1      128m
democluster-admin-kubeconfig      Opaque                    1      127m
democluster-etcd-encryption-key   Opaque                    1      128m
democluster-kubeadmin-password    Opaque                    1      126m
democluster-pull-secret           Opaque                    1      128m
deployer-dockercfg-8lfpd          kubernetes.io/dockercfg   1      128m
----

. Extract the `kubeconfig` file for the hosted cluster by running the following command:
+
[source,terminal]
----
$ oc get secret <secret_name> -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
----
+
.Example
[source,terminal]
----
$ oc get secret democluster-admin-kubeconfig -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
----

. Export the hosted cluster `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ export HC_KUBECONFIG=<path_to_hosted-cluster-kubeconfig>
----

. Verify that the kubeletconfig is mirrored in the hosted cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get cm -n openshift-config-managed | grep kubelet
----
+
.Example output
[source,terminal]
----
kubelet-serving-ca                                 1      79m
kubeletconfig-performance-democluster-us-east-1a   1      15m
----

. Verify that the `single-numa-node` policy is set on the hosted cluster by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" get cm kubeletconfig-performance-democluster-us-east-1a -o yaml -n openshift-config-managed | grep single
----
+
.Example output
[source,terminal]
----
topologyManagerPolicy: single-numa-node
----
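+
Optionally, you can also check which `Tuned` objects were synced into the hosted cluster and which TuneD profile is applied to each node, by listing the standard Node Tuning Operator resource types in the hosted cluster:
+
[source,terminal]
----
# Lists the synced Tuned objects and the per-node applied profiles
$ oc --kubeconfig="$HC_KUBECONFIG" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator
$ oc --kubeconfig="$HC_KUBECONFIG" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator
----
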
@@ -0,0 +1,105 @@

// Module included in the following assemblies:
//
// * scalability_and_performance/low_latency_tuning/cnf-tuning-low-latency-nodes-with-perf-profile.adoc

:_mod-docs-content-type: PROCEDURE
[id="gathering-data-about-your-hosted-cluster-using-must-gather_{context}"]
= Gathering data about your hosted control planes cluster for the PPC

The Performance Profile Creator (PPC) tool requires `must-gather` data. As a cluster administrator, run the `must-gather` command to capture information about your cluster.

.Prerequisites

* You have `cluster-admin` role access to the management cluster.
* You installed the {oc-first}.

.Procedure

. Export the management cluster `kubeconfig` file by running the following command:
+
[source,terminal]
----
$ export MGMT_KUBECONFIG=<path_to_mgmt_kubeconfig>
----

. List all node pools across all namespaces by running the following command:
+
[source,terminal]
----
$ oc --kubeconfig="$MGMT_KUBECONFIG" get np -A
----
+
.Example output
[source,terminal]
----
NAMESPACE   NAME                     CLUSTER       DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
clusters    democluster-us-east-1a   democluster   1               1               False         False        4.17.0    False             True
----
+
The output shows the following information:
+
* The namespace `clusters` in the management cluster where the `NodePool` resource is defined.
* The name of the `NodePool` resource, for example, `democluster-us-east-1a`.
* The `HostedCluster` resource that the `NodePool` belongs to, for example, `democluster`.

. On the management cluster, run the following command to list available secrets:
+
[source,terminal]
----
$ oc get secrets -n clusters
----
+
.Example output
[source,terminal]
----
NAME                              TYPE                      DATA   AGE
builder-dockercfg-25qpp           kubernetes.io/dockercfg   1      128m
default-dockercfg-mkvlz           kubernetes.io/dockercfg   1      128m
democluster-admin-kubeconfig      Opaque                    1      127m
democluster-etcd-encryption-key   Opaque                    1      128m
democluster-kubeadmin-password    Opaque                    1      126m
democluster-pull-secret           Opaque                    1      128m
deployer-dockercfg-8lfpd          kubernetes.io/dockercfg   1      128m
----

. Extract the `kubeconfig` file for the hosted cluster by running the following command:
+
[source,terminal]
----
$ oc get secret <secret_name> -n <cluster_namespace> -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
----
+
.Example
[source,terminal]
----
$ oc get secret democluster-admin-kubeconfig -n clusters -o jsonpath='{.data.kubeconfig}' | base64 -d > hosted-cluster-kubeconfig
----

. To create a `must-gather` bundle for the hosted cluster, open a separate terminal window and run the following commands:

.. Export the hosted cluster `kubeconfig` file:
+
[source,terminal]
----
$ export HC_KUBECONFIG=<path_to_hosted_cluster_kubeconfig>
----
+
.Example
[source,terminal]
----
$ export HC_KUBECONFIG=~/hostedcpkube/hosted-cluster-kubeconfig
----

.. Navigate to the directory where you want to store the `must-gather` data.

.. Gather the troubleshooting data for your hosted cluster:
+
[source,terminal]
----
$ oc --kubeconfig="$HC_KUBECONFIG" adm must-gather
----

.. Create a compressed file from the `must-gather` directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command:
+
[source,terminal]
----
$ tar -czvf must-gather.tar.gz must-gather.local.1203869488012141147
----
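+
The compressed bundle is the input that the PPC consumes when you create the performance profile. The following is a rough sketch of a hosted-cluster PPC invocation; the container image and tag, the `--node-pool-name` flag, and the sizing values are assumptions for illustration only:
+
[source,terminal]
----
# Sketch only: image tag, node pool name, and sizing flags are assumptions
$ podman run --entrypoint performance-profile-creator \
    -v <path_to_must_gather>:/must-gather:z \
    registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.19 \
    --must-gather-dir-path /must-gather \
    --node-pool-name=democluster-us-east-1a \
    --reserved-cpu-count=2 \
    --rt-kernel=true > my-hosted-cp-performance-profile.yaml
----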
