[[kubernetes-ansible]]
= {product-title} Kubernetes Ansible Cluster Setup
{product-author}
{product-version}
:data-uri:
:icons:

The link:https://github.com/kubernetes/contrib/tree/master/ansible[kubernetes/contrib] repo contains an Ansible playbook and a set of roles that can be used to deploy a Kubernetes cluster using {product-title}. The hosts can be running on real hardware, VMs, or instances in a public cloud: anything you can connect to via SSH. You can also use Vagrant to deploy the hosts on which to run this playbook; for more information, see the Vagrant Deployer section below.

== Before starting

* Record the IP address/hostname of the machine you want to be your master (only a single master is supported).
* Record the IP address/hostname of the machine you want to be your etcd server (often the same as the master; there can be more than one).
* Record the IP addresses/hostnames of the machines you want to be your nodes (the master can also be a node).
* Make sure the machine you run Ansible from has Ansible 1.9 and python-netaddr installed.

== Setup

=== Configure inventory

Add the system information gathered above into a file called `inventory`,
or create a new one for the cluster.
Place the `inventory` file into the `./inventory` directory.

For example:

```ini
[masters]
kube-master-test.example.com

[etcd:children]
masters

[nodes]
kube-minion-test-[1:2].example.com
```

=== Configure cluster options

Look through all of the options in `inventory/group_vars/all.yml` and
set the variables to reflect your needs. The options are described there
in full detail.
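
For illustration, here is a minimal sketch of the kinds of settings you will find there. The variable names below follow the conventions used elsewhere in this guide, but treat `all.yml` itself as the authoritative list of names and defaults:

```yaml
# Hedged example of inventory/group_vars/all.yml settings; confirm each
# name and default against the shipped file before relying on it.
source_type: packageManager           # install components from distribution packages
cluster_name: cluster.local           # internal DNS domain of the cluster
kube_service_addresses: 10.254.0.0/16 # virtual IP range used for services
networking: flannel                   # network service (see Network Service below)
```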

=== Securing etcd

If you wish to use TLS certificates for your etcd cluster, you have to specify the TLS keypairs and set `etcd_url_scheme`/`etcd_peer_url_scheme` to `https`. This enables encrypted communication, but does not validate client certificates. To prevent unauthorized access to your etcd cluster, also set `etcd_client_cert_auth`/`etcd_peer_client_cert_auth` to `true`.
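
A minimal sketch of the relevant `all.yml` settings follows; the four variable names are exactly those referenced above, and any certificate path variables you may also need are defined in the etcd role defaults:

```yaml
# Encrypt etcd client and peer traffic, and require valid client
# certificates so unauthorized clients cannot reach the cluster.
etcd_url_scheme: https
etcd_peer_url_scheme: https
etcd_client_cert_auth: true
etcd_peer_client_cert_auth: true
```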

== Running the playbook

After going through the setup, run the `deploy-cluster.sh` script from within the `scripts` directory:

`$ cd scripts/ && ./deploy-cluster.sh`

You may override the inventory file by running:

`INVENTORY=myinventory ./deploy-cluster.sh`

The directory containing the `myinventory` file must also contain the default `inventory/group_vars` directory (or its equivalent).
Otherwise, variables defined in `group_vars/all.yml` will not be set.

In general this will work on very recent Fedora, Rawhide, or F21. Future work to
support RHEL 7, CentOS, and possibly other distros should be forthcoming.

=== Targeted runs

You can set up individual parts of the cluster instead of deploying everything at once.

==== Etcd

`$ ./deploy-cluster.sh --tags=etcd`

==== Kubernetes master

`$ ./deploy-cluster.sh --tags=masters`

==== Kubernetes nodes

`$ ./deploy-cluster.sh --tags=nodes`

=== Addons

By default, the Ansible playbook deploys Kubernetes addons as well. Addons consist of:

* DNS (kubedns)
* cluster monitoring (Grafana, Heapster, InfluxDB)
* cluster logging (Kibana, ElasticSearch)
* Kubernetes dashboard
* Kubernetes dash (kubedash)

To skip the addons deployment, run:

`$ ./deploy-cluster.sh --skip-tags=addons`

To run only the addons deployment (this requires the Kubernetes master to be deployed already), run:

`$ ./deploy-cluster.sh --tags=addons` or `$ ./deploy-addons.sh`

=== Component sources

Each component can be installed from various sources. For instance:

* distribution packages
* GitHub release
* Kubernetes built from local source

By default, every component (etcd, docker, kubernetes, etc.) is installed via the distribution package manager.
Currently, the following component types are supported:

* `etcd_source_type`: for `etcd` role
* `flannel_source_type`: for `flannel` role
* `kube_source_type`: for `master` and `node` roles
* `source_type`: for other roles (and components)

To see a full list of available types, see the corresponding role's default variables.
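
As a hedged example, a configuration that mixes sources might look like the following; verify each accepted value against the corresponding role's `defaults/main.yml` before use:

```yaml
# Install etcd and flannel from distribution packages, but pull the
# Kubernetes binaries from a GitHub release. The accepted values for
# each variable are listed in the role's defaults/main.yml.
etcd_source_type: packageManager
flannel_source_type: packageManager
kube_source_type: github-release
source_type: packageManager
```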

==== Kubernetes source type

Available types (see `kube_source_type` under `roles/kubernetes/defaults/main.yml`):

* `packageManager`
* `localBuild`
* `github-release`
* `distribution-rpm`

When installed via the package manager, the `kube-apiserver` binary ships with the `cap_net_bind_service=ep` capability set.
The capability allows the apiserver to listen on port `443`.
With `localBuild` and `github-release`, the capability is not set, so for the apiserver to listen on a secure port you must change the port (see `kube_master_api_port` under `roles/kubernetes/defaults/main.yml`), for instance to `6443`.
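
For example, a minimal sketch of the two variables involved when installing from a GitHub release (both names appear in `roles/kubernetes/defaults/main.yml` as noted above):

```yaml
# Without the cap_net_bind_service capability the apiserver cannot bind
# the privileged port 443, so move the secure port to an unprivileged one.
kube_source_type: github-release
kube_master_api_port: 6443
```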

To apply the `distribution-rpm` type, the location of an RPM must be specified.
See the `kube_rpm_url_base` and `kube_rpm_url_sufix` variables under `roles/kubernetes/defaults/main.yml`.
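
A hedged sketch of what that might look like; the URL and suffix below are illustrative placeholders, not real repository values, and how the role composes the final RPM URL is defined in the role itself:

```yaml
# Both values are placeholders; substitute the repository hosting your RPMs.
kube_source_type: distribution-rpm
kube_rpm_url_base: https://example.com/kubernetes/rpms
kube_rpm_url_sufix: .el7.x86_64.rpm
```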

=== Network Service

By changing the `networking` variable in the `inventory/group_vars/all.yml` file, you can choose which network service to use. The default is `flannel`.
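
For example (a minimal sketch; any other supported values are listed alongside the variable in `all.yml`):

```yaml
# Select the network service deployed across the cluster.
networking: flannel
```

To deploy only the network service, run: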

`$ ./deploy-cluster.sh --tags=network-service-install`

= Vagrant Deployer

== Before You Start

link:https://www.vagrantup.com/downloads.html[Install Vagrant] if it's not currently installed on your system.

You will need a functioning link:https://www.vagrantup.com/docs/providers/[vagrant provider]. Currently supported providers are OpenStack, libvirt, VirtualBox, and AWS. Vagrant comes with VirtualBox support by default. No matter which provider you choose, you need to install the OpenStack and AWS Vagrant plugins, or comment them out in the Vagrantfile:

```
vagrant plugin install vagrant-openstack-provider --plugin-version ">= 0.6.1"
vagrant plugin install vagrant-aws --plugin-version ">= 0.7.2"
```

Vagrant uses Ansible to automate the Kubernetes deployment. Install Ansible (Mac OS X example):
```
sudo easy_install pip
sudo pip install ansible==2.0.0.2
```

Reference link:http://docs.ansible.com/ansible/intro_installation.html[Ansible installation] for additional installation instructions.

The DNS Kubernetes addon requires python-netaddr. Install netaddr (Mac OS X example):

```
sudo pip install netaddr
```

Reference the link:https://pythonhosted.org/netaddr/installation.html[python-netaddr documentation] for additional installation instructions.

== Caveats

Vagrant (1.7.x) does not properly select a provider, so you will need to specify the provider manually. Refer to the Provider Specific Information section for the proper `vagrant up` command.

Vagrant versions prior to 1.8.0 do not write group variables into the Ansible inventory file, which is required for using CoreOS images.

== Usage

You can change some aspects of the configuration using environment variables.
Note that these variables must be set for all vagrant command invocations:
`vagrant up`, `vagrant provision`, `vagrant destroy`, and so on.

=== Configure number of nodes

If you export an environment variable such as
```
export NUM_NODES=4
```

the system will create that number of nodes. The default is 2.

=== Configure OS to use

You can specify which OS image to use on hosts:

```
export OS_IMAGE=centosatomic
```

For Fedora Atomic, use `export OS_IMAGE=fedoraatomic`.

=== Start your cluster

Unless you are running Vagrant 1.7.x or older (see Caveats above), simply change to the vagrant directory and run `vagrant up`:

```
vagrant up
```


`vagrant up` should complete with a successful Ansible playbook run:
```
....

PLAY RECAP *********************************************************************
kube-master-1   : ok=266  changed=78  unreachable=0  failed=0
kube-node-1     : ok=129  changed=39  unreachable=0  failed=0
kube-node-2     : ok=128  changed=39  unreachable=0  failed=0
```

Log in to the Kubernetes master:
```
vagrant ssh kube-master-1
```

Verify the Kubernetes cluster is up:
```
[vagrant@kube-master-1 ~]$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Elasticsearch is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
Grafana is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb

[vagrant@kube-master-1 ~]$ kubectl get nodes
NAME          LABELS                               STATUS    AGE
kube-node-1   kubernetes.io/hostname=kube-node-1   Ready     34m
kube-node-2   kubernetes.io/hostname=kube-node-2   Ready     34m
```

Make sure the STATUS column shows `Ready` for each node. You are now ready to deploy Kubernetes resources. Try one of the link:https://github.com/kubernetes/kubernetes/tree/master/examples[examples] from the Kubernetes project repo.

== Provider Specific Information
Vagrant tries to be intelligent and picks the first provider supported by your installation. If you want to specify a provider, you can do so by running vagrant like so:
```
# virtualbox provider
vagrant up --provider=virtualbox

# openstack provider
vagrant up --provider=openstack

# libvirt provider
vagrant up --provider=libvirt
```

=== OpenStack
Make sure you have installed the OpenStack provider for Vagrant.
```
vagrant plugin install vagrant-openstack-provider --plugin-version ">= 0.6.1"
```
NOTE: This is a more up-to-date provider than the similar `vagrant-openstack-plugin`.

Also note that versions of `vagrant-openstack-provider` prior to 0.6.1 are not compatible with Ruby 2.2 (see link:https://github.com/ggiamarchi/vagrant-openstack-provider/pull/237[vagrant-openstack-provider#237]), so make sure you get at least version 0.6.1.

To use the Vagrant OpenStack provider you will need to:

* Copy `openstack_config.yml.example` to `openstack_config.yml`.
* Edit `openstack_config.yml` to include your relevant details.
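
As a loose illustration only, such a file typically carries your OpenStack credentials and instance settings. The key names below are hypothetical assumptions, so use the keys actually present in the shipped `openstack_config.yml.example`:

```yaml
# Hypothetical sketch; the key names are assumptions, not the real schema.
openstack_auth_url: https://openstack.example.com:5000/v2.0
openstack_username: myuser
openstack_password: mypassword
openstack_tenant: mytenant
openstack_flavor: m1.large
openstack_image: centos-7-cloud
```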

=== Libvirt

The libvirt Vagrant provider is non-deterministic when launching VMs. This is a problem because Ansible must only run after all of the VMs are running. To work around this when using libvirt, do the following:
```
vagrant up --no-provision
vagrant provision
```

=== VirtualBox
Nothing special should be required for the VirtualBox provider. `vagrant up --provider virtualbox` should just work.


== Additional Information
If you just want to update the binaries on your systems (from either `packageManager` or `localBuild`), you can do so using the Ansible `binary-update` tag. To do so with vagrant provision, run:
```
ANSIBLE_TAGS="binary-update" vagrant provision
```

=== Running Ansible

After provisioning a cluster with Vagrant, you can run Ansible in this directory for any additional provisioning;
`ansible.cfg` provides configuration that allows Ansible to connect to the managed hosts.

For example:

```
$ ansible -m setup kube-master-1
kube-master-1 | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"172.28.128.21",
"10.0.2.15"
],
...
```

=== Issues
File an issue link:https://github.com/kubernetes/contrib/issues[here] if the Vagrant Deployer does not work for you or if you find a documentation bug. link:https://github.com/kubernetes/contrib/pulls[Pull Requests] are always welcome. Please review the link:https://github.com/kubernetes/kubernetes/blob/master/CONTRIBUTING.md[contributing guidelines] if you have not contributed in the past and feel free to ask questions on the link:http://slack.kubernetes.io[kubernetes-users Slack] channel.