The kubernetes/contrib repo contains an Ansible playbook and a set of roles that can be used to deploy a Kubernetes cluster using {product-title}. The hosts can be bare metal, virtual machines, or instances in a public cloud: anything you can reach over SSH. You can also use Vagrant to create the hosts on which to run this playbook; for more information, see the Vagrant Deployer section below.
- Record the IP address/hostname of the machine you want to be your master (only a single master is supported).
- Record the IP address/hostname of the machine you want to be your etcd server (often the same as the master; there can be more than one).
- Record the IP addresses/hostnames of the machines you want to be your nodes (the master can also be a node).
- Make sure the machine you run Ansible from has Ansible 1.9 and python-netaddr installed; a quick check is sketched below.
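If you are unsure whether the machine you run Ansible from meets these requirements, a quick check along these lines (assuming a pip-based install) confirms both:
$ ansible --version
$ python -c "import netaddr"
If the second command fails, install the module with pip install netaddr or your distribution's python-netaddr package.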
Add the system information gathered above into a file called inventory, or create a new one for the cluster. Place the inventory file into the ./inventory directory. For example:
[masters]
kube-master-test.example.com
[etcd:children]
masters
[nodes]
kube-minion-test-[1:2].example.com
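Since etcd can run on more than one machine, you can also list dedicated etcd hosts instead of pointing the etcd group at the masters. A sketch with hypothetical hostnames, assuming you keep the default file name and location described above:
# Sketch only: writes an inventory with a dedicated etcd group.
cat > ./inventory/inventory <<'EOF'
[masters]
kube-master-test.example.com

[etcd]
kube-etcd-test-1.example.com
kube-etcd-test-2.example.com

[nodes]
kube-minion-test-[1:2].example.com
EOF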
Look through all of the options in inventory/group_vars/all.yml and set the variables to reflect your needs. The options are described there in full detail.
If you wish to use TLS certificates for your etcd cluster, you have to specify the TLS keypairs and set etcd_url_scheme/etcd_peer_url_scheme to https. This enables encrypted communication, but does not check the validity of client certificates. To prevent unauthorized access to your etcd cluster, also set etcd_client_cert_auth/etcd_peer_client_cert_auth to true.
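A minimal sketch of those settings in inventory/group_vars/all.yml (the variables holding the TLS keypair paths are not shown here; they are described in all.yml itself):
# Shown as an append for brevity; equivalently, edit the existing entries in all.yml in place.
cat >> inventory/group_vars/all.yml <<'EOF'
etcd_url_scheme: https
etcd_peer_url_scheme: https
etcd_client_cert_auth: true
etcd_peer_client_cert_auth: true
EOF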
After going through the setup, run the deploy-cluster.sh script from within the scripts directory:
$ cd scripts/ && ./deploy-cluster.sh
You may override the inventory file by running:
INVENTORY=myinventory ./deploy-cluster.sh
The directory containing the myinventory file must also contain the default inventory/group_vars directory (or its equivalent); otherwise, variables defined in group_vars/all.yml will not be set.
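For example, a custom inventory kept outside the repository could be laid out as follows (the path is purely illustrative), with group_vars copied next to it:
$ ls /path/to/my-cluster/
group_vars  myinventory
$ cd scripts/ && INVENTORY=/path/to/my-cluster/myinventory ./deploy-cluster.sh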
In general this will work on very recent Fedora releases, Rawhide, or F21. Future work to support RHEL 7, CentOS, and possibly other distros is forthcoming.
You can also set up only certain parts of the cluster instead of deploying everything.
By default, the Ansible playbook deploys Kubernetes addons as well. Addons consist of:
- DNS (kubedns)
- cluster monitoring (Grafana, Heapster, InfluxDB)
- cluster logging (Kibana, ElasticSearch)
- Kubernetes dashboard
In order to skip addons deployment, run
$ ./deploy-cluster.sh --skip-tags=addons
In order to run the addons deployment only (requires a Kubernetes master that is already deployed), run
$ ./deploy-cluster.sh --tags=addons
or
$ ./deploy-addons.sh
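To confirm that the addons actually came up, one option (assuming kubectl is configured to talk to your master, as in the verification steps later in this document) is to list the pods in the kube-system namespace:
$ kubectl --namespace=kube-system get pods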
Each component can be installed from various sources. For instance:
- distribution packages
- GitHub release
- Kubernetes built from local source code
By default, every component (etcd, docker, kubernetes, etc.) is installed via distribution package manager. Currently, the following component types are supported:
- etcd_source_type: for the etcd role
- flannel_source_type: for the flannel role
- kube_source_type: for the master and node roles
- source_type: for other roles (and components)
To see a full list of available types, see the corresponding role's default variables. Available types (see kube_source_type under roles/kubernetes/defaults/main.yml):
- packageManager
- localBuild
- github-release
- distribution-rpm
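As a sketch, the source type for each component can be pinned in inventory/group_vars/all.yml. The kube_source_type value below is one of the types listed above; the values for the etcd and flannel roles are assumptions that should be checked against each role's defaults:
# Shown as an append for brevity; equivalently, edit the existing entries in all.yml in place.
cat >> inventory/group_vars/all.yml <<'EOF'
kube_source_type: github-release
etcd_source_type: packageManager      # assumption: confirm against the etcd role defaults
flannel_source_type: packageManager   # assumption: confirm against the flannel role defaults
EOF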
In the case of the package manager, the kube-apiserver binary is shipped with the cap_net_bind_service=ep capability set. This capability allows the apiserver to listen on port 443. In the case of localBuild and github-release, the capability is not set. In order for the apiserver to listen on a secure port, change the port (see kube_master_api_port under roles/kubernetes/defaults/main.yml), for instance to 6443.
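A minimal sketch of that change, assuming the role default can be overridden from inventory/group_vars/all.yml like the other variables:
cat >> inventory/group_vars/all.yml <<'EOF'
kube_master_api_port: 6443
EOF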
In order to apply the distribution-rpm type, the location of the rpm must be specified. See the kube_rpm_url_base and kube_rpm_url_sufix variables under roles/kubernetes/defaults/main.yml.
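A hedged sketch of what that might look like; the URL and suffix below are placeholders only, and the exact format each variable expects is documented with the role defaults:
cat >> inventory/group_vars/all.yml <<'EOF'
kube_source_type: distribution-rpm
kube_rpm_url_base: https://rpm.example.com/kubernetes   # placeholder location
kube_rpm_url_sufix: .el7.x86_64.rpm                     # placeholder suffix
EOF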
Install Vagrant if it’s not currently installed on your system.
You will need a functioning vagrant provider. Currently supported providers are openstack, libvirt, virtualbox, and aws. Vagrant comes with VirtualBox support by default. Regardless of which provider you choose, you need to install the OpenStack and AWS Vagrant plugins, or comment them out in the Vagrantfile:
vagrant plugin install vagrant-openstack-provider --plugin-version ">= 0.6.1"
vagrant plugin install vagrant-aws --plugin-version ">= 0.7.2"
Vagrant uses Ansible to automate the Kubernetes deployment. Install Ansible (Mac OSX example):
sudo easy_install pip
sudo pip install ansible==2.0.0.2
Refer to the Ansible installation documentation for additional installation instructions.
The DNS kubernetes-addon requires python-netaddr. Install netaddr (Mac OSX example):
sudo pip install netaddr
Refer to the python-netaddr documentation for additional installation instructions.
Vagrant 1.7.x does not properly select a provider, so you will need to specify the provider manually. Refer to the Provider Specific Information section for the proper vagrant up command. Also note that Vagrant prior to version 1.8.0 does not write group variables into the Ansible inventory file, which is required for using CoreOS images.
You can change some aspects of the configuration using environment variables. Note that these variables should be set for all vagrant command invocations: vagrant up, vagrant provision, vagrant destroy, and so on.
If you export an environment variable such as
export NUM_NODES=4
the system will create that number of nodes. The default is 2.
You can specify which OS image to use on hosts:
export OS_IMAGE=centosatomic
For Fedora Atomic, use export OS_IMAGE=fedoraatomic
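Because the same values must be visible to every vagrant invocation, one simple approach is to export them once in your shell before running vagrant, for example:
export NUM_NODES=4
export OS_IMAGE=fedoraatomic
vagrant up
vagrant provision   # sees the same NUM_NODES and OS_IMAGE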
If you are running Vagrant 1.8.0 or newer, change to the vagrant directory and run vagrant up:
vagrant up
Vagrant up should complete with a successful Ansible playbook run:
....
PLAY RECAP *********************************************************************
kube-master-1 : ok=266 changed=78 unreachable=0 failed=0
kube-node-1 : ok=129 changed=39 unreachable=0 failed=0
kube-node-2 : ok=128 changed=39 unreachable=0 failed=0
Log in to the Kubernetes master:
vagrant ssh kube-master-1
Verify the Kubernetes cluster is up:
[vagrant@kube-master-1 ~]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
[vagrant@kube-master-1 ~]$ kubectl get nodes
NAME LABELS STATUS AGE
kube-node-1 kubernetes.io/hostname=kube-node-1 Ready 34m
kube-node-2 kubernetes.io/hostname=kube-node-2 Ready 34m
Make sure the STATUS shows Ready for each node. You are now ready to deploy Kubernetes resources. Try one of the examples from the Kubernetes project repo.
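As a quick smoke test (the nginx image here is purely an example), you can start a simple workload from the master and watch its pod come up:
[vagrant@kube-master-1 ~]$ kubectl run nginx --image=nginx
[vagrant@kube-master-1 ~]$ kubectl get pods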
Vagrant tries to be intelligent and pick the first provider supported by your installation. If you want to specify a provider, you can do so by running vagrant as follows:
# virtualbox provider
vagrant up --provider=virtualbox
# openstack provider
vagrant up --provider=openstack
# libvirt provider
vagrant up --provider=libvirt
Make sure you installed the openstack provider for vagrant.
vagrant plugin install vagrant-openstack-provider --plugin-version ">= 0.6.1"
NOTE: This is a more up-to-date provider than the similar vagrant-openstack-plugin. Also note that the current (required) versions of vagrant-openstack-provider are not compatible with Ruby 2.2 (see ggiamarchi/vagrant-openstack-provider#237), so make sure you get at least version 0.6.1.
To use the vagrant openstack provider, you will need to:
- Copy openstack_config.yml.example to openstack_config.yml
- Edit openstack_config.yml to include your relevant details.
The libvirt vagrant provider is non-deterministic when launching VMs. This is a problem because Ansible must only run after all of the VMs are running. To solve this when using libvirt, you must do the following:
vagrant up --no-provision
vagrant provision
If you just want to update the binaries on your systems (for either the packageManager or localBuild source types), you can do so using the ansible binary-update tag. To do so with vagrant provision, you would run:
ANSIBLE_TAGS="binary-update" vagrant provision
After provisioning a cluster with Vagrant, you can run ansible in this directory for any additional provisioning; ansible.cfg provides the configuration that allows Ansible to connect to the managed hosts.
For example:
$ ansible -m setup kube-master-1
kube-master-1 | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"172.28.128.21",
"10.0.2.15"
],
...
File an issue here if the Vagrant Deployer does not work for you or if you find a documentation bug. Pull Requests are always welcome. Please review the contributing guidelines if you have not contributed in the past and feel free to ask questions on the kubernetes-users Slack channel.