You can replicate Kubernetes control plane nodes in `kube-up` or `kube-down` scripts for Google Compute Engine.

This document describes how to use kube-up/down scripts to manage a highly available (HA) control plane and how HA control planes are implemented for use with GCE.
To create a new HA-compatible cluster, you must set the following flags in your `kube-up` script:

* `MULTIZONE=true` - to prevent removal of control plane kubelets from zones other than the server's default zone.
Required if you want to run control plane nodes in different zones, which is recommended.

* `ENABLE_ETCD_QUORUM_READ=true` - to ensure that reads from all API servers return the most up-to-date data.
If true, reads will be directed to the leader etcd replica.
Setting this value to true is optional: reads will be more reliable but will also be slower.

Optionally, you can specify a GCE zone where the first control plane node is to be created.
Set the following flag:

* `KUBE_GCE_ZONE=zone` - zone where the first control plane node will run.

The following sample command sets up an HA-compatible cluster in the GCE zone europe-west1-b:
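
```shell
# Illustrative invocation combining the flags described above; the exact
# flag set may vary with the version of the cluster scripts.
MULTIZONE=true KUBE_GCE_ZONE=europe-west1-b ENABLE_ETCD_QUORUM_READ=true ./cluster/kube-up.sh
```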

## Best practices for replicating control plane nodes for HA clusters

* Try to place control plane nodes in different zones. During a zone failure, all
control plane nodes placed inside the zone will fail.
To survive zone failure, also place nodes in multiple zones
(see [multiple-zones](/docs/setup/best-practices/multiple-zones/) for details).

* Do not use a cluster with two control plane nodes. Consensus on a two-node
control plane requires both nodes running when changing persistent state.
As a result, both nodes are needed and a failure of any node turns the cluster
into a majority failure state.
A two-node control plane is thus inferior, in terms of HA, to a cluster with
one control plane node.

* When you add a control plane node, cluster state (etcd) is copied to a new instance.
If the cluster is large, it may take a long time to duplicate its state.
This operation may be sped up by migrating the etcd data directory, as described in
the [etcd administration guide](https://etcd.io/docs/v2.3/admin_guide/#member-migration)
and sketched after this list
(we are considering adding support for etcd data dir migration in the future).
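
As an illustration of that member-migration approach, here is a minimal sketch. It assumes a systemd-managed etcd with its data under `/var/lib/etcd` and a reachable new instance named `new-control-plane-node`; all of these names and paths are assumptions for illustration, not part of the kube-up scripts.

```shell
# Hypothetical migration of the etcd data directory to a new instance.
# Service name, paths, and host name below are illustrative assumptions.
sudo systemctl stop etcd        # stop etcd so the data directory is consistent
sudo tar czf /tmp/etcd-data.tar.gz -C /var/lib/etcd .
scp /tmp/etcd-data.tar.gz new-control-plane-node:/tmp/
ssh new-control-plane-node \
  'sudo mkdir -p /var/lib/etcd && sudo tar xzf /tmp/etcd-data.tar.gz -C /var/lib/etcd'
# Start etcd on the new instance with the migrated data directory.
ssh new-control-plane-node 'sudo systemctl start etcd'
```
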
### Overview

Each of the control plane nodes will run the following components in the following mode:

* etcd instance: all instances will be clustered together using consensus;

In addition, there will be a load balancer in front of API servers that will route external and internal traffic to them.

### Load balancing

When starting the second control plane node, a load balancer containing the two replicas will be created
and the IP address of the first replica will be promoted to the IP address of the load balancer.
Similarly, after removal of the penultimate control plane node, the load balancer will be removed and its IP address will be assigned to the last remaining replica.
Please note that creation and removal of a load balancer are complex operations and it may take some time (~20 minutes) for them to propagate.
### Master service & kubelets

Instead of trying to keep an up-to-date list of Kubernetes API servers in the Kubernetes service,
the system directs all traffic to the external IP:

* in case of a single node control plane, the IP points to the control plane node,

* in case of an HA control plane, the IP points to the load balancer in front of the control plane nodes.

Similarly, the external IP will be used by kubelets to communicate with the control plane.
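
As a quick check (a sketch; the output shape varies by cluster), the IP in use is visible on the default `kubernetes` service and its endpoints:

```shell
# Show the default kubernetes service and the endpoints behind it,
# which reflect the external IP described above.
kubectl get service kubernetes -n default
kubectl get endpoints kubernetes -n default
```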

### Control plane node certificates

Kubernetes generates TLS certificates for the external public IP and local IP for each control plane node.
There are no certificates for the ephemeral public IP for control plane nodes;
to access a control plane node via its ephemeral public IP, you must skip TLS verification.
### Clustering etcd

To make such deployment secure, communication between etcd instances is authorized using SSL.