Commit 26c7157

Merge pull request #193 from projectsyn/ocp-1003/harden-sudo
When impersonating the cluster admin, use system:admin instead of cluster-admin
2 parents: 7de582f + e8230f3
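The gist of the commit: every documented invocation swaps `--as=cluster-admin` for `--as=system:admin`. A minimal dry-run sketch of the new flag usage (the `kubectl_admin` helper and the `echo` wrapper are illustrative, not part of the repository; they only print the command that would run, so the snippet works without a cluster):

```shell
# Hypothetical helper showing the impersonation flag this commit standardizes on.
# `echo` makes this a dry run: it prints the kubectl command instead of running it.
kubectl_admin() {
  echo kubectl --as=system:admin "$@"
}

kubectl_admin -n cilium get pods
```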

File tree: 6 files changed (+21, -21 lines changed)


docs/modules/ROOT/pages/how-tos/migrate-to-self-service-namespace-egress-ips.adoc

Lines changed: 2 additions & 2 deletions

@@ -73,7 +73,7 @@ NOTE: This step assumes that the only `IsovalentEgressGatewayPolicy` resources o
 ----
 kubectl get isovalentegressgatewaypolicy -l argocd.argoproj.io/instance=cilium -oyaml | \
   yq '.items[] |
-    "kubectl --as=cluster-admin annotate namespace \(.metadata.name) cilium.syn.tools/egress-ip="
+    "kubectl --as=system:admin annotate namespace \(.metadata.name) cilium.syn.tools/egress-ip="
     + .metadata.annotations["cilium.syn.tools/egress-ip"]
   ' | \
   bash <1>
@@ -93,5 +93,5 @@ NOTE: This step assumes that the only `IsovalentEgressGatewayPolicy` resources o
 +
 [source,bash]
 ----
-kubectl --as=cluster-admin label isovalentegressgatewaypolicy -l argocd.argoproj.io/instance=cilium argocd.argoproj.io/instance-
+kubectl --as=system:admin label isovalentegressgatewaypolicy -l argocd.argoproj.io/instance=cilium argocd.argoproj.io/instance-
 ----
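The pipeline in the first hunk emits one `kubectl annotate` command per policy and pipes the result to `bash`. A hedged sketch of that command-generation step in plain shell (the namespace names and egress IPs below are invented sample data; the real how-to derives them from `IsovalentEgressGatewayPolicy` objects via `yq`):

```shell
# Invented sample data standing in for the yq extraction.
policies="team-a 192.0.2.10
team-b 192.0.2.11"

# Emit the same command shape the how-to generates, without executing it.
cmds=$(printf '%s\n' "$policies" | while read -r ns ip; do
  printf 'kubectl --as=system:admin annotate namespace %s cilium.syn.tools/egress-ip=%s\n' "$ns" "$ip"
done)
printf '%s\n' "$cmds"
```

Piping the generated commands to `bash`, as the how-to does, would then apply the annotations in one pass.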

docs/modules/ROOT/pages/runbooks/CiliumBpfOperationErrorRateHigh.adoc

Lines changed: 3 additions & 3 deletions

@@ -20,12 +20,12 @@ include::partial$runbooks/known_ebpf_maps.adoc[]
 NODE=<node name of affected node> <1>
 AGENT_POD=$(kubectl -n cilium get pods --field-selector=spec.nodeName=$NODE \
   -l app.kubernetes.io/name=cilium-agent -oname)
-kubectl -n cilium exec -it $AGENT_POD --as=cluster-admin -- cilium status <2>
-kubectl -n cilium exec -it $AGENT_POD --as=cluster-admin -- cilium status --verbose <3>
+kubectl -n cilium exec -it $AGENT_POD --as=system:admin -- cilium status <2>
+kubectl -n cilium exec -it $AGENT_POD --as=system:admin -- cilium status --verbose <3>
 kubectl -n cilium logs $AGENT_POD --tail=50 <4>
 ----
 <1> The node indicated in the alert
-<2> `--as=cluster-admin` is required on VSHN managed clusters
+<2> `--as=system:admin` is required on VSHN managed clusters
 <2> Show the agent status on the node
 <3> Show verbose agent status on the node.
 In this output, you may see details about eBPF sync jobs which have errors.
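The `AGENT_POD` lookup repeated across these runbooks pins the agent pod to a node via a field selector. A dry-run sketch of the pattern (the node name is a placeholder, and `echo` replaces the real `kubectl` call so the snippet runs without a cluster):

```shell
NODE=worker-0  # placeholder; in the runbook this is the affected node's name
# Dry run: echo prints the lookup command instead of querying a cluster.
lookup=$(echo kubectl -n cilium get pods \
  --field-selector=spec.nodeName="$NODE" \
  -l app.kubernetes.io/name=cilium-agent -oname)
printf '%s\n' "$lookup"
```

With a real cluster, dropping the `echo` stores the agent pod's name (e.g. `pod/cilium-xxxxx`) in `AGENT_POD` for the subsequent `exec` commands.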

docs/modules/ROOT/pages/runbooks/CiliumClustermeshRemoteClusterNotReady.adoc

Lines changed: 8 additions & 8 deletions

@@ -31,9 +31,9 @@ First, check the source cluster's overall cluster mesh status
 
 [source,bash]
 ----
-cilium -n cilium clustermesh status --as=cluster-admin <1>
+cilium -n cilium clustermesh status --as=system:admin <1>
 ----
-<1> `--as=cluster-admin` is required on VSHN Managed OpenShift, may need to be left out on other clusters.
+<1> `--as=system:admin` is required on VSHN Managed OpenShift, may need to be left out on other clusters.
 
 If the output indicates that all nodes are unable to connect to the remote cluster's clustermesh API, it's likely that the issue is either on the remote cluster, or in the network between the clusters.
 
@@ -52,25 +52,25 @@ NODE=<node name of affected node> <1>
 AGENT_POD=$(kubectl -n cilium get pods --field-selector=spec.nodeName=$NODE \
   -l app.kubernetes.io/name=cilium-agent -oname)
 
-kubectl -n cilium exec -it $AGENT_POD --as=cluster-admin -- cilium status <2>
-kubectl -n cilium exec -it $AGENT_POD --as=cluster-admin -- cilium troubleshoot clustermesh <3>
+kubectl -n cilium exec -it $AGENT_POD --as=system:admin -- cilium status <2>
+kubectl -n cilium exec -it $AGENT_POD --as=system:admin -- cilium troubleshoot clustermesh <3>
 ----
 <1> Set this to the name of an affected node's `Node` object
 <2> Show a summary of the Cilium agent status.
 You should see in the output of this command whether the agent can't reach one or more of the remote cluster's nodes.
 <3> This command will show connection details to the remote cluster's cluster mesh API server or the local cache in case you're using KVStoreMesh.
 
-TIP: `--as=cluster-admin` may need to be left out on some clusters.
+TIP: `--as=system:admin` may need to be left out on some clusters.
 
 If the output of `cilium troubleshoot clustermesh` refers to the local cluster's cluster mesh API server, it's likely that you're using KVStoreMesh.
 In that case you can check the KVStoreMesh connection to the remote cluster mesh API server in the `clustermesh-apiserver` deployment:
 
 [source,bash]
 ----
-kubectl -n cilium --as=cluster-admin exec -it deploy/clustermesh-apiserver -c kvstoremesh -- \
+kubectl -n cilium --as=system:admin exec -it deploy/clustermesh-apiserver -c kvstoremesh -- \
   clustermesh-apiserver kvstoremesh-dbg status <1>
 
-kubectl exec -it -n cilium --as=cluster-admin deploy/clustermesh-apiserver -c kvstoremesh -- \
+kubectl exec -it -n cilium --as=system:admin deploy/clustermesh-apiserver -c kvstoremesh -- \
   clustermesh-apiserver kvstoremesh-dbg troubleshoot <2>
 ----
 <1> Show a connection summary of the KVStoreMesh
@@ -80,7 +80,7 @@ You can also run `cilium-health status --probe` in the agent pod to actively pro
 
 [source,bash]
 ----
-kubectl -n cilium exec -it $AGENT_POD --as=cluster-admin -- cilium-health status --probe
+kubectl -n cilium exec -it $AGENT_POD --as=system:admin -- cilium-health status --probe
 ----
 
 include::partial$runbooks/check-node-routing-tables.adoc[]

docs/modules/ROOT/pages/runbooks/CiliumKVStoreMeshRemoteClusterNotReady.adoc

Lines changed: 4 additions & 4 deletions

@@ -28,9 +28,9 @@ First, check the source cluster's overall cluster mesh status
 
 [source,bash]
 ----
-cilium -n cilium clustermesh status --as=cluster-admin <1>
+cilium -n cilium clustermesh status --as=system:admin <1>
 ----
-<1> `--as=cluster-admin` is required on VSHN Managed OpenShift, may need to be left out on other clusters.
+<1> `--as=system:admin` is required on VSHN Managed OpenShift, may need to be left out on other clusters.
 
 include::partial$runbooks/investigating-clustermesh-api.adoc[]
 
@@ -40,10 +40,10 @@ You can check the KVStoreMesh connection to the remote cluster mesh API server i
 
 [source,bash]
 ----
-kubectl -n cilium --as=cluster-admin exec -it deploy/clustermesh-apiserver -c kvstoremesh -- \
+kubectl -n cilium --as=system:admin exec -it deploy/clustermesh-apiserver -c kvstoremesh -- \
   clustermesh-apiserver kvstoremesh-dbg status <1>
 
-kubectl exec -it -n cilium --as=cluster-admin deploy/clustermesh-apiserver -c kvstoremesh -- \
+kubectl exec -it -n cilium --as=system:admin deploy/clustermesh-apiserver -c kvstoremesh -- \
   clustermesh-apiserver kvstoremesh-dbg troubleshoot <2>
 ----
 <1> Show a connection summary of the KVStoreMesh

docs/modules/ROOT/partials/runbooks/check-node-routing-tables.adoc

Lines changed: 2 additions & 2 deletions

@@ -7,8 +7,8 @@ For setups which use static routes to make the nodes of the clusters participati
 ----
 NODE=<node name of affected node>
 REMOTE_NODE=<ip of a node in the remote cluster>
-oc -n syn-debug-nodes debug node/${NODE} --as=cluster-admin -- chroot /host ip r
-oc -n syn-debug-nodes debug node/${NODE} --as=cluster-admin -- chroot /host ping -c4 ${REMOTE_NODE}
+oc -n syn-debug-nodes debug node/${NODE} --as=system:admin -- chroot /host ip r
+oc -n syn-debug-nodes debug node/${NODE} --as=system:admin -- chroot /host ping -c4 ${REMOTE_NODE}
 ----
 
 .Other K8s

docs/modules/ROOT/partials/runbooks/debug_ebpf_map_pressure.adoc

Lines changed: 2 additions & 2 deletions

@@ -14,10 +14,10 @@ TIP: Add a section below if you're debugging a map for which there's no info yet
 NODE=<node name of affected node> <1>
 AGENT_POD=$(kubectl -n cilium get pods --field-selector=spec.nodeName=$NODE \
   -l app.kubernetes.io/name=cilium-agent -oname)
-kubectl -n cilium exec -it $AGENT_POD --as=cluster-admin -- cilium-dbg policy selectors <2>
+kubectl -n cilium exec -it $AGENT_POD --as=system:admin -- cilium-dbg policy selectors <2>
 ----
 <1> The node indicated in the alert
-<2> `--as=cluster-admin` is required on VSHN managed clusters
+<2> `--as=system:admin` is required on VSHN managed clusters
 <2> List the Cilium policy selectors (including matched endpoint IDs) that need to be deployed on the node.
 
 . Check output for any policies that match a large amount of endpoints and investigate if you can tune the associated network policy to reduce the amount of matched endpoints.
