
Commit 56dbcd6

Terminology: high-availability masters -> high-availability control plane (#28225)
* Change terminology: high availability masters -> high availability control plane
* Fix typo
* Add alias for old URI
* Rename file
1 parent 9f8abfa commit 56dbcd6

File tree

1 file changed (+54, -46 lines)

content/en/docs/tasks/administer-cluster/highly-available-master.md → content/en/docs/tasks/administer-cluster/highly-available-control-plane.md (renamed)

@@ -1,16 +1,17 @@
 ---
 reviewers:
 - jszczepkowski
-title: Set up High-Availability Kubernetes Masters
+title: Set up a High-Availability Control Plane
 content_type: task
+aliases: [ '/docs/tasks/administer-cluster/highly-available-master/' ]
 ---
 
 <!-- overview -->
 
 {{< feature-state for_k8s_version="v1.5" state="alpha" >}}
 
-You can replicate Kubernetes masters in `kube-up` or `kube-down` scripts for Google Compute Engine.
-This document describes how to use kube-up/down scripts to manage highly available (HA) masters and how HA masters are implemented for use with GCE.
+You can replicate Kubernetes control plane nodes in `kube-up` or `kube-down` scripts for Google Compute Engine.
+This document describes how to use kube-up/down scripts to manage a highly available (HA) control plane and how HA control planes are implemented for use with GCE.
 
 
 
@@ -28,68 +29,70 @@ This document describes how to use kube-up/down scripts to manage highly availab
 
 To create a new HA-compatible cluster, you must set the following flags in your `kube-up` script:
 
-* `MULTIZONE=true` - to prevent removal of master replicas kubelets from zones different than server's default zone.
-Required if you want to run master replicas in different zones, which is recommended.
+* `MULTIZONE=true` - to prevent removal of control plane kubelets from zones different than server's default zone.
+Required if you want to run control plane nodes in different zones, which is recommended.
 
 * `ENABLE_ETCD_QUORUM_READ=true` - to ensure that reads from all API servers will return most up-to-date data.
 If true, reads will be directed to leader etcd replica.
 Setting this value to true is optional: reads will be more reliable but will also be slower.
 
-Optionally, you can specify a GCE zone where the first master replica is to be created.
+Optionally, you can specify a GCE zone where the first control plane node is to be created.
 Set the following flag:
 
-* `KUBE_GCE_ZONE=zone` - zone where the first master replica will run.
+* `KUBE_GCE_ZONE=zone` - zone where the first control plane node will run.
 
 The following sample command sets up a HA-compatible cluster in the GCE zone europe-west1-b:
 
 ```shell
 MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ENABLE_ETCD_QUORUM_READS=true ./cluster/kube-up.sh
 ```
 
-Note that the commands above create a cluster with one master;
-however, you can add new master replicas to the cluster with subsequent commands.
+Note that the commands above create a cluster with one control plane node;
+however, you can add new control plane nodes to the cluster with subsequent commands.
 
-## Adding a new master replica
+## Adding a new control plane node
 
-After you have created an HA-compatible cluster, you can add master replicas to it.
-You add master replicas by using a `kube-up` script with the following flags:
+After you have created an HA-compatible cluster, you can add control plane nodes to it.
+You add control plane nodes by using a `kube-up` script with the following flags:
 
-* `KUBE_REPLICATE_EXISTING_MASTER=true` - to create a replica of an existing
-master.
+* `KUBE_REPLICATE_EXISTING_MASTER=true` - to create a replica of an existing control plane
+node.
 
-* `KUBE_GCE_ZONE=zone` - zone where the master replica will run.
-Must be in the same region as other replicas' zones.
+* `KUBE_GCE_ZONE=zone` - zone where the control plane node will run.
+Must be in the same region as other control plane nodes' zones.
 
 You don't need to set the `MULTIZONE` or `ENABLE_ETCD_QUORUM_READS` flags,
 as those are inherited from when you started your HA-compatible cluster.
 
-The following sample command replicates the master on an existing HA-compatible cluster:
+The following sample command replicates the control plane node on an existing
+HA-compatible cluster:
 
 ```shell
 KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
 ```
 
-## Removing a master replica
+## Removing a control plane node
 
-You can remove a master replica from an HA cluster by using a `kube-down` script with the following flags:
+You can remove a control plane node from an HA cluster by using a `kube-down` script with the following flags:
 
 * `KUBE_DELETE_NODES=false` - to restrain deletion of kubelets.
 
-* `KUBE_GCE_ZONE=zone` - the zone from where master replica will be removed.
+* `KUBE_GCE_ZONE=zone` - the zone from where the control plane node will be removed.
 
-* `KUBE_REPLICA_NAME=replica_name` - (optional) the name of master replica to remove.
-If empty: any replica from the given zone will be removed.
+* `KUBE_REPLICA_NAME=replica_name` - (optional) the name of control plane node to
+remove. If empty: any replica from the given zone will be removed.
 
-The following sample command removes a master replica from an existing HA cluster:
+The following sample command removes a control plane node from an existing HA cluster:
 
 ```shell
 KUBE_DELETE_NODES=false KUBE_GCE_ZONE=europe-west1-c ./cluster/kube-down.sh
 ```
 
-## Handling master replica failures
+## Handling control plane node failures
 
-If one of the master replicas in your HA cluster fails,
-the best practice is to remove the replica from your cluster and add a new replica in the same zone.
+If one of the control plane nodes in your HA cluster fails,
+the best practice is to remove the node from your cluster and add a new control plane
+node in the same zone.
 The following sample commands demonstrate this process:
 
 1. Remove the broken replica:
@@ -98,26 +101,31 @@ The following sample commands demonstrate this process:
 KUBE_DELETE_NODES=false KUBE_GCE_ZONE=replica_zone KUBE_REPLICA_NAME=replica_name ./cluster/kube-down.sh
 ```
 
-<ol start="2"><li>Add a new replica in place of the old one:</li></ol>
+<ol start="2"><li>Add a new node in place of the old one:</li></ol>
 
 ```shell
 KUBE_GCE_ZONE=replica-zone KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
 ```
 
-## Best practices for replicating masters for HA clusters
+## Best practices for replicating control plane nodes for HA clusters
 
-* Try to place master replicas in different zones. During a zone failure, all masters placed inside the zone will fail.
+* Try to place control plane nodes in different zones. During a zone failure, all
+control plane nodes placed inside the zone will fail.
 To survive zone failure, also place nodes in multiple zones
 (see [multiple-zones](/docs/setup/best-practices/multiple-zones/) for details).
 
-* Do not use a cluster with two master replicas. Consensus on a two-replica cluster requires both replicas running when changing persistent state.
-As a result, both replicas are needed and a failure of any replica turns cluster into majority failure state.
-A two-replica cluster is thus inferior, in terms of HA, to a single replica cluster.
+* Do not use a cluster with two control plane nodes. Consensus on a two-node
+control plane requires both nodes running when changing persistent state.
+As a result, both nodes are needed and a failure of any node turns the cluster
+into majority failure state.
+A two-node control plane is thus inferior, in terms of HA, to a cluster with
+one control plane node.
 
-* When you add a master replica, cluster state (etcd) is copied to a new instance.
+* When you add a control plane node, cluster state (etcd) is copied to a new instance.
 If the cluster is large, it may take a long time to duplicate its state.
-This operation may be sped up by migrating etcd data directory, as described [here](https://coreos.com/etcd/docs/latest/admin_guide.html#member-migration)
-(we are considering adding support for etcd data dir migration in future).
+This operation may be sped up by migrating the etcd data directory, as described in
+the [etcd administration guide](https://etcd.io/docs/v2.3/admin_guide/#member-migration)
+(we are considering adding support for etcd data dir migration in the future).
 
 
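The best practices above call for more than two control plane nodes, spread across zones of a single region. One way to get there is simply to chain the `kube-up` invocations documented earlier on this page; the sketch below is illustrative only, and the zone names are placeholders rather than part of this commit.

```shell
# Sketch only: build a three-zone control plane using the flags documented above.
# Zone names are placeholders; choose zones from a single GCE region.
MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ENABLE_ETCD_QUORUM_READS=true ./cluster/kube-up.sh

# MULTIZONE and ENABLE_ETCD_QUORUM_READS are inherited by the later runs.
KUBE_GCE_ZONE=europe-west1-c KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
KUBE_GCE_ZONE=europe-west1-d KUBE_REPLICATE_EXISTING_MASTER=true ./cluster/kube-up.sh
```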

@@ -129,7 +137,7 @@ This operation may be sped up by migrating etcd data directory, as described [he
 
 ### Overview
 
-Each of master replicas will run the following components in the following mode:
+Each of the control plane nodes will run the following components in the following mode:
 
 * etcd instance: all instances will be clustered together using consensus;
 
@@ -143,27 +151,27 @@ In addition, there will be a load balancer in front of API servers that will rou
 
 ### Load balancing
 
-When starting the second master replica, a load balancer containing the two replicas will be created
+When starting the second control plane node, a load balancer containing the two replicas will be created
 and the IP address of the first replica will be promoted to IP address of load balancer.
-Similarly, after removal of the penultimate master replica, the load balancer will be removed and its IP address will be assigned to the last remaining replica.
+Similarly, after removal of the penultimate control plane node, the load balancer will be removed and its IP address will be assigned to the last remaining replica.
 Please note that creation and removal of load balancer are complex operations and it may take some time (~20 minutes) for them to propagate.
 
 ### Master service & kubelets
 
 Instead of trying to keep an up-to-date list of Kubernetes apiserver in the Kubernetes service,
 the system directs all traffic to the external IP:
 
-* in one master cluster the IP points to the single master,
+* in case of a single node control plane, the IP points to the control plane node,
 
-* in multi-master cluster the IP points to the load balancer in-front of the masters.
+* in case of an HA control plane, the IP points to the load balancer in-front of the masters.
 
-Similarly, the external IP will be used by kubelets to communicate with master.
+Similarly, the external IP will be used by kubelets to communicate with the control plane.
 
-### Master certificates
+### Control plane node certificates
 
-Kubernetes generates Master TLS certificates for the external public IP and local IP for each replica.
-There are no certificates for the ephemeral public IP for replicas;
-to access a replica via its ephemeral public IP, you must skip TLS verification.
+Kubernetes generates TLS certificates for the external public IP and local IP for each control plane node.
+There are no certificates for the ephemeral public IP for control plane nodes;
+to access a control plane node via its ephemeral public IP, you must skip TLS verification.
 
 ### Clustering etcd
 
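The certificates note above means that any direct request to a replica's ephemeral public IP has to bypass certificate verification. A minimal sketch, assuming a placeholder IP (203.0.113.10) and client credentials already present in your kubeconfig:

```shell
# Sketch only: the ephemeral public IP is not included in the generated
# certificates, so TLS verification must be skipped when addressing it directly.
kubectl --server=https://203.0.113.10 --insecure-skip-tls-verify=true get nodes

# The same applies to a raw HTTPS probe against that replica.
curl -k https://203.0.113.10/healthz
```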

@@ -172,7 +180,7 @@ To make such deployment secure, communication between etcd instances is authoriz
 
 ### API server identity
 
-{{< feature-state state="alpha" for_k8s_version="v1.20" >}}
+{{< feature-state state="alpha" for_k8s_version="v1.20" >}}
 
 The API Server Identity feature is controlled by a
 [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
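Because the feature is alpha in v1.20, it is disabled by default and has to be switched on per API server. A rough sketch, assuming the `APIServerIdentity` gate name and the `kube-system` Lease location described in the linked feature-gate documentation:

```shell
# Sketch only: enable the gate on each kube-apiserver
# (a real invocation carries many more flags than shown here).
kube-apiserver --feature-gates=APIServerIdentity=true

# Each API server should then publish a Lease object that can be inspected with:
kubectl -n kube-system get lease
```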
