Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster within a single Resource Group.
For this guide I am creating a resource group in the Canada Central Azure region with Terraform.
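A minimal sketch of that resource group definition (the resource and group names below are illustrative placeholders, not necessarily the ones used in the rest of this repo):
# Sketch: resource group for the cluster (names are illustrative)
resource "azurerm_resource_group" "kubernetes_rg" {
  name     = "kubernetes-rg"
  location = "canadacentral"
}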
The Kubernetes networking model assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired, network policies can limit how groups of containers are allowed to communicate with each other and with external network endpoints.
Setting up network policies is out of scope for this tutorial. It is on my wish list of things to do.
In this section a dedicated Virtual Network (VNet) will be set up to host the Kubernetes cluster.
- VNet Address Space: 10.0.0.0/16
- Total Hosts: 65534
For this guide I am creating a Virtual Network with Terraform.
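A sketch of that VNet resource, using the address space above and the RESOURCE_GROUP_NAME variable referenced later in this guide (the resource and VNet names are illustrative):
# Sketch: dedicated VNet for the cluster (names are illustrative)
resource "azurerm_virtual_network" "kubernetes_vnet" {
  name                = "kubernetes-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = "canadacentral"
  resource_group_name = var.RESOURCE_GROUP_NAME
}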
A subnet must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
- Subnet Address Prefix: 10.0.2.0/24
- Total Hosts: 254
For this guide I am creating the subnet with Terraform.
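A sketch of that subnet, placed inside the VNet sketched above (the resource and subnet names are illustrative):
# Sketch: node subnet inside the VNet above (names are illustrative)
resource "azurerm_subnet" "kubernetes_subnet" {
  name                 = "kubernetes-subnet"
  resource_group_name  = var.RESOURCE_GROUP_NAME
  virtual_network_name = azurerm_virtual_network.kubernetes_vnet.name
  address_prefixes     = ["10.0.2.0/24"]
}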
Create network security rules within a Network Security Group that allow inbound access to the nodes on specific protocols and ports (a sketch of the Network Security Group itself follows the port lists below):
Network Security Group Rules are required for:
- SSH (22)
- HTTP (80)
- HTTPS (443)
The following ports are also used for cluster communication, but a rule is not required since the traffic stays internal to the VNet:
- ETCD[Control Plane] (2379 - 2380)
- Kubelet[Control Plane/Data Plane] (10250)
- Scheduler[Control Plane] (10251)
- Controller Manager[Control Plane] (10252)
- Node Port services[Control Plane] (30000-32767)
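The rules below attach to a Network Security Group that is assumed to exist elsewhere in the configuration; a minimal sketch of that group, using the kubernetes_nsg resource name the rules reference (the group name itself is illustrative):
# Sketch: the Network Security Group the rules below attach to
resource "azurerm_network_security_group" "kubernetes_nsg" {
  name                = "kubernetes-nsg"
  location            = "canadacentral"
  resource_group_name = var.RESOURCE_GROUP_NAME
}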
# Network Security Group - Rule (SSH)
resource "azurerm_network_security_rule" "enable_ssh" {
  name                        = "SSH"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = var.RESOURCE_GROUP_NAME
  network_security_group_name = azurerm_network_security_group.kubernetes_nsg.name

  depends_on = [
    azurerm_network_security_group.kubernetes_nsg
  ]
}
# Network Security Group - Rule (HTTP)
resource "azurerm_network_security_rule" "http" {
  name                        = "http"
  priority                    = 101
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "80"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = var.RESOURCE_GROUP_NAME
  network_security_group_name = azurerm_network_security_group.kubernetes_nsg.name

  depends_on = [
    azurerm_network_security_group.kubernetes_nsg
  ]
}
# Network Security Group - Rule (Control Plane - Kubernetes API Server)
resource "azurerm_network_security_rule" "kubernetes_api_server" {
  name                        = "kubernetes-api-server"
  priority                    = 150
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "6443"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = var.RESOURCE_GROUP_NAME
  network_security_group_name = azurerm_network_security_group.kubernetes_nsg.name

  depends_on = [
    azurerm_network_security_group.kubernetes_nsg
  ]
}
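An HTTPS (443) rule is listed as required above but is not shown in the original snippets; a sketch following the same pattern (the priority value is my own choice):
# Network Security Group - Rule (HTTPS) - sketch following the pattern above
resource "azurerm_network_security_rule" "https" {
  name                        = "https"
  priority                    = 102
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "443"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = var.RESOURCE_GROUP_NAME
  network_security_group_name = azurerm_network_security_group.kubernetes_nsg.name

  depends_on = [
    azurerm_network_security_group.kubernetes_nsg
  ]
}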
Allocate a static public IP address that will be attached to the external load balancer fronting the Kubernetes API servers. For this guide I am creating the public IP with Terraform.
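A sketch of that public IP resource, assuming a Standard SKU so it can front a Standard load balancer (the resource and IP names are illustrative):
# Sketch: static public IP for the API server load balancer (names are illustrative)
resource "azurerm_public_ip" "kubernetes_api_pip" {
  name                = "kubernetes-the-hard-way-pip"
  location            = "canadacentral"
  resource_group_name = var.RESOURCE_GROUP_NAME
  allocation_method   = "Static"
  sku                 = "Standard"
}
After applying, the address can be verified in the resource group:
az network public-ip list -g kubernetes-rg -o table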
The compute instances in this lab will be provisioned using Ubuntu Server 20.04, which has good support for the containerd container runtime. Each compute instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
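The fixed private IPs can be expressed on the network interfaces. A sketch, assuming the subnet defined earlier and an illustrative NODES map variable keyed by node name (the addresses and VM sizes below are placeholders, not taken from the original repo):
# Sketch: per-node configuration map (illustrative placeholder values)
variable "NODES" {
  default = {
    masternode  = { private_ip = "10.0.2.10", vm_size = "Standard_B2s" }
    slavenode01 = { private_ip = "10.0.2.20", vm_size = "Standard_B2s" }
    slavenode02 = { private_ip = "10.0.2.21", vm_size = "Standard_B2s" }
  }
}

# Sketch: one NIC per node with a fixed (static) private IP address
resource "azurerm_network_interface" "node" {
  for_each            = var.NODES
  name                = "${each.key}-nic"
  location            = "canadacentral"
  resource_group_name = var.RESOURCE_GROUP_NAME

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.kubernetes_subnet.id
    private_ip_address_allocation = "Static"
    private_ip_address            = each.value.private_ip
  }
}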
Create six compute instances to host the Kubernetes cluster: three instances for the control plane and three instances for the data plane. Provisioning of these nodes is done via a Terraform map value that holds the configuration for each node, looping through it with each.value in the azurerm_linux_virtual_machine resource stanza, as sketched below:
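A sketch of that looping pattern over the NODES map sketched above, trimmed to the required arguments of azurerm_linux_virtual_machine; the admin username, SSH key path, and the Ubuntu Server 20.04 image reference are my own assumptions, not values from the original repo:
# Sketch: one Linux VM per entry in var.NODES, using each.value for per-node settings
resource "azurerm_linux_virtual_machine" "node" {
  for_each            = var.NODES
  name                = each.key
  location            = "canadacentral"
  resource_group_name = var.RESOURCE_GROUP_NAME
  size                = each.value.vm_size
  admin_username      = "kubeadmin" # placeholder admin user

  network_interface_ids = [
    azurerm_network_interface.node[each.key].id,
  ]

  admin_ssh_key {
    username   = "kubeadmin"
    public_key = file(pathexpand("~/.ssh/id_rsa.pub")) # placeholder key path
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  # Commonly used Ubuntu Server 20.04 LTS marketplace image reference
  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-focal"
    sku       = "20_04-lts-gen2"
    version   = "latest"
  }
}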
List the compute instances in the resource group:
az vm list -d -g kubernetes-rg -o table
output
Name         ResourceGroup    PowerState    PublicIps      Fqdns    Location       Zones
-----------  ---------------  ------------  -------------  -------  -------------  -------
masternode   kubernetes-rg    VM running    20.63.84.175            canadacentral
slavenode01  kubernetes-rg    VM running    20.63.85.239            canadacentral
slavenode02  kubernetes-rg    VM running    20.63.86.1              canadacentral