
Commit 1a06c66

Merge pull request #26 from gruntwork-io/yori-tf-tls-mgmt

Pure Terraform TLS Management

2 parents eb5af3b + 78a072c · commit 1a06c66

25 files changed, +2266 −253 lines

.circleci/config.yml (+41)

```diff
@@ -118,6 +118,37 @@ jobs:
       - store_test_results:
           path: /tmp/logs
 
+  integration_tests_without_kubergrunt:
+    <<: *defaults
+    steps:
+      - attach_workspace:
+          at: /home/circleci
+
+      # The weird way you have to set PATH in Circle 2.0
+      - run: echo 'export PATH=$HOME/terraform:$HOME/packer:$PATH' >> $BASH_ENV
+
+      - run:
+          <<: *install_gruntwork_utils
+
+      - run:
+          command: setup-minikube
+
+      # Execute main terratests
+      - run:
+          name: run integration tests
+          command: |
+            mkdir -p /tmp/logs
+            run-go-tests --path test --timeout 60m --packages "-run TestK8STillerNoKubergrunt$ ." | tee /tmp/logs/all.log
+          no_output_timeout: 3600s
+
+      - run:
+          command: terratest_log_parser --testlog /tmp/logs/all.log --outputdir /tmp/logs
+          when: always
+      - store_artifacts:
+          path: /tmp/logs
+      - store_test_results:
+          path: /tmp/logs
+
 workflows:
   version: 2
   test-and-deploy:
@@ -134,6 +165,13 @@ workflows:
             tags:
               only: /^v.*/
 
+      - integration_tests_without_kubergrunt:
+          requires:
+            - setup
+          filters:
+            tags:
+              only: /^v.*/
+
   nightly:
     triggers:
       - schedule:
@@ -146,3 +184,6 @@ workflows:
       - integration_tests:
           requires:
             - setup
+      - integration_tests_without_kubergrunt:
+          requires:
+            - setup
```

README.md (+12 −1)

```diff
@@ -26,7 +26,7 @@ The general idea is to:
 1. Setup a `kubectl` config context that is configured to authenticate to the deployed cluster.
 1. Install the necessary prerequisites tools:
     - [`helm` client](https://docs.helm.sh/using_helm/#install-helm)
-    - [`kubergrunt`](https://github.com/gruntwork-io/kubergrunt#installation)
+    - (Optional) [`kubergrunt`](https://github.com/gruntwork-io/kubergrunt#installation)
 1. Provision a [`Namespace`](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) and
    [`ServiceAccount`](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) to house the
    Tiller instance.
@@ -65,6 +65,17 @@ This repo provides a Gruntwork IaC Package and has the following folder structur
   Provision a default set of RBAC roles to use in a `Namespace`.
 * [k8s-service-account](https://github.com/gruntwork-io/terraform-kubernetes-helm/tree/master/modules/k8s-service-account):
   Provision a Kubernetes `ServiceAccount`.
+* [k8s-tiller-tls-certs](https://github.com/gruntwork-io/terraform-kubernetes-helm/tree/master/modules/k8s-tiller-tls-certs):
+  Generate a TLS Certificate Authority (CA) and using that, generate signed TLS certificate key pairs that can be
+  used for TLS verification of Tiller. The certs are managed on the cluster using Kubernetes `Secrets`. **NOTE**:
+  This module uses the `tls` provider, which means the generated certificate key pairs are stored in plain text in
+  the Terraform state file. If you are sensitive to secrets in Terraform state, consider using `kubergrunt` for TLS
+  management.
+* [k8s-helm-client-tls-certs](https://github.com/gruntwork-io/terraform-kubernetes-helm/tree/master/modules/k8s-helm-client-tls-certs):
+  Generate a signed TLS certificate key pair from a previously generated CA certificate key pair. This TLS key pair
+  can be used to authenticate a helm client to access a deployed Tiller instance. **NOTE**: This module uses the
+  `tls` provider, which means the generated certificate key pairs are stored in plain text in the Terraform state
+  file. If you are sensitive to secrets in Terraform state, consider using `kubergrunt` for TLS management.
 
 * [examples](https://github.com/gruntwork-io/terraform-kubernetes-helm/tree/master/examples): This folder contains
   examples of how to use the Submodules.
```
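For readers who want a rough feel for how these two new modules might be wired together, here is a minimal, hypothetical sketch. The module source paths are the real ones linked above, but every input and output name in this snippet is an illustrative placeholder, not the modules' actual interface; check each module's `variables.tf` and `outputs.tf` for the real names.

```hcl
# Hypothetical sketch only: the module sources exist in this repo, but the
# input/output names below are illustrative placeholders.

module "tiller_tls_certs" {
  source = "github.com/gruntwork-io/terraform-kubernetes-helm//modules/k8s-tiller-tls-certs"

  # Placeholder inputs: subject of the generated CA and where the resulting
  # certificate key pairs should be stored as Kubernetes Secrets.
  ca_common_name         = "tiller-ca"
  secrets_namespace      = "tiller-namespace"
  tiller_tls_secret_name = "tiller-certs"
}

module "helm_client_tls_certs" {
  source = "github.com/gruntwork-io/terraform-kubernetes-helm//modules/k8s-helm-client-tls-certs"

  # Placeholder inputs: sign a client certificate key pair with the CA above so
  # a helm client can authenticate to the deployed Tiller.
  ca_tls_secret_name = "${module.tiller_tls_certs.ca_tls_secret_name}"
  tls_common_name    = "helm-client"
  secrets_namespace  = "tiller-namespace"
}
```

Because both modules use the `tls` provider under the hood, the generated key pairs are also written to the Terraform state, which is the trade-off called out in the notes above.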
New file (+205)

# Kubernetes Tiller Deployment With Kubergrunt On Minikube

This folder shows an example of how to use Terraform to call out to our `kubergrunt` utility for TLS management when
deploying Tiller (the server component of Helm) onto a Kubernetes cluster. Here we will walk through a detailed guide on
how you can set up `minikube` and use the modules in this repo to deploy Tiller onto it.


## Background

We strongly recommend reading [our guide on Helm](https://github.com/gruntwork-io/kubergrunt/blob/master/HELM_GUIDE.md)
before continuing with this guide for background on Helm, Tiller, and the security model backing it.


## Overview

In this guide we will walk through the steps necessary to get up and running with deploying Tiller using this module,
using `minikube` to deploy our target Kubernetes cluster. Here are the steps:

1. [Install and set up `minikube`](#setting-up-your-kubernetes-cluster-minikube)
1. [Install the necessary tools](#installing-necessary-tools)
1. [Apply the terraform code](#apply-the-terraform-code)
1. [Verify the deployment](#verify-tiller-deployment)
1. [Granting access to additional users](#granting-access-to-additional-users)
1. [Upgrading the deployed Tiller instance](#upgrading-deployed-tiller)


## Setting up your Kubernetes cluster: Minikube

In this guide, we will use `minikube` as our Kubernetes cluster to deploy Tiller to.
[Minikube](https://kubernetes.io/docs/setup/minikube/) is an official tool maintained by the Kubernetes community to
provision and run Kubernetes locally on your machine. By having a local environment you can have fast iteration
cycles while you develop and play with Kubernetes before deploying to production.

To set up `minikube`:

1. [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
1. [Install the minikube utility](https://kubernetes.io/docs/tasks/tools/install-minikube/)
1. Run `minikube start` to provision a new `minikube` instance on your local machine.
1. Verify setup with `kubectl`: `kubectl cluster-info`

**Note**: This module has been tested to work against GKE and EKS as well. You can check out the examples in the
respective repositories for how to deploy Tiller on those platforms. <!-- TODO: link to examples -->


## Installing necessary tools

In addition to `terraform`, this guide uses `kubergrunt` to manage TLS certificates for the deployment of Tiller. You
can read more about the decision behind this approach in [the Appendix](#appendix-a-why-kubergrunt) of this guide.

This means that your system needs to be configured to be able to find the `terraform`, `kubergrunt`, and `helm` client
utilities on the system `PATH`. Here are the installation guides for each:

1. [`terraform`](https://learn.hashicorp.com/terraform/getting-started/install.html)
1. [`helm` client](https://docs.helm.sh/using_helm/#installing-helm)
1. [`kubergrunt`](https://github.com/gruntwork-io/kubergrunt#installation), minimum version: v0.3.6

Make sure the binaries are discoverable in your `PATH` variable. See [this stackoverflow
post](https://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux-unix) for instructions on
setting up your `PATH` on Unix, and [this
post](https://stackoverflow.com/questions/1618280/where-can-i-set-path-to-make-exe-on-windows) for instructions on
Windows.


## Apply the Terraform Code

Now that we have a working Kubernetes cluster and all the prerequisite tools are installed, we are ready to deploy
Tiller! To deploy Tiller, we will use the example Terraform code in this folder:

1. If you haven't already, clone this repo:
    - `git clone https://github.com/gruntwork-io/terraform-kubernetes-helm.git`
1. Make sure you are in the example folder:
    - `cd terraform-kubernetes-helm/examples/k8s-tiller-kubergrunt-minikube`
1. Initialize terraform:
    - `terraform init`
1. Apply the terraform code:
    - `terraform apply`
    - Fill in the required variables based on your needs. <!-- TODO: show example inputs here -->

The Terraform code creates a few resources before deploying Tiller:

- A Kubernetes `Namespace` (the `tiller-namespace`) to house the Tiller instance. This namespace is where all the
  Kubernetes resources that Tiller needs to function will live. In production, you will want to lock down access to this
  namespace, as being able to access these resources can compromise all the protections built into Helm.
- A Kubernetes `Namespace` (the `resource-namespace`) to house the resources deployed by Tiller. This namespace is where
  all the Helm chart resources will be deployed into. This is the namespace that your devs and users will have access
  to.
- A Kubernetes `ServiceAccount` (`tiller-service-account`) that Tiller will use to apply the resources in Helm charts.
  Our Terraform code grants the `ServiceAccount` enough permissions to have full access to both the
  `tiller-namespace` and the `resource-namespace`, so that it can:
    - Manage its own resources in the `tiller-namespace`, where the Tiller metadata (e.g. release tracking information) will live.
    - Manage the resources deployed by helm charts in the `resource-namespace`.
- Using `kubergrunt`, generate a TLS CA certificate key pair and a set of signed certificate key pairs for the server
  and the client. These will then be uploaded as `Secrets` on the Kubernetes cluster.

These resources are then passed into the `k8s-tiller` module, where the Tiller `Deployment` resources will be created.
Once the resources are applied to the cluster, the code waits for the Tiller `Deployment` to roll out its `Pods` using
`kubergrunt helm wait-for-tiller`.
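To make that flow concrete, here is a minimal sketch of how the pieces could be wired together in Terraform. The module source paths point at the real `modules` folder in this repo, but the input names and values are illustrative assumptions, not the example's actual `main.tf`.

```hcl
# Hypothetical sketch only: the module sources exist in this repo, but the input
# names are placeholders; see the example's main.tf and each module's variables.tf.

module "tiller_namespace" {
  source = "github.com/gruntwork-io/terraform-kubernetes-helm//modules/k8s-namespace"

  # Placeholder input: the namespace that houses the Tiller instance and its metadata.
  name = "tiller-namespace"
}

module "resource_namespace" {
  source = "github.com/gruntwork-io/terraform-kubernetes-helm//modules/k8s-namespace"

  # Placeholder input: the namespace where Tiller will deploy Helm chart resources.
  name = "resource-namespace"
}

module "tiller_service_account" {
  source = "github.com/gruntwork-io/terraform-kubernetes-helm//modules/k8s-service-account"

  # Placeholder inputs: the ServiceAccount Tiller runs as, bound to roles that give
  # it full access to both namespaces.
  name      = "tiller-service-account"
  namespace = "${module.tiller_namespace.name}"
}

module "tiller" {
  source = "github.com/gruntwork-io/terraform-kubernetes-helm//modules/k8s-tiller"

  # Placeholder inputs: where Tiller runs, which ServiceAccount it uses, and the
  # name of the Secret (populated out of band by kubergrunt) holding its TLS certs.
  namespace              = "${module.tiller_namespace.name}"
  service_account        = "${module.tiller_service_account.name}"
  tiller_tls_secret_name = "tiller-certs"
}
```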
Finally, to allow you to use `helm` right away, this code also sets up the local `helm` client. This involves:

- Using the CA TLS certificate key pair, create a signed TLS certificate key pair to use to identify the client.
- Upload the certificate key pair to the `tiller-namespace`.
- Grant the RBAC entity access to:
    - Get the client certificate `Secret` (`kubergrunt helm configure` uses this to install the client certificate
      key pair locally)
    - Get and List pods in `tiller-namespace` (the `helm` client uses this to find the Tiller pod)
    - Create a port forward to the Tiller pod (the `helm` client uses this to make requests to the Tiller pod)
- Install the client certificate key pair to the helm home directory so the client can use it.

At the end of the `apply`, you should now have a working Tiller deployment with your `helm` client configured to access
it. So let's verify that in the next step!


## Verify Tiller Deployment

To start using `helm` with the configured credentials, you need to specify the following things:

- enable TLS verification
- use TLS credentials to authenticate
- the namespace where Tiller is deployed

These are specified through command line arguments. If everything is configured correctly, you should be able to access
the Tiller that was deployed with the following args:

```
helm version --tls --tls-verify --tiller-namespace NAMESPACE_OF_TILLER
```

If you have access to Tiller, this should return both the client version and the server version of Helm.

Note that you need to pass the above CLI arguments every time you want to use `helm`. This can be cumbersome, so
`kubergrunt` installs an environment file into your helm home directory that you can dot source to set environment
variables that guide `helm` to use those options:

```
. ~/.helm/env
helm version
```

<!-- TODO: Mention windows -->


## Granting Access to Additional Users

Now that you have deployed Tiller and set up access for your local machine, you are ready to start using `helm`! However,
you might be wondering how to share that access with your team. To do so, you can rely on `kubergrunt helm grant`.

In order to allow other users access to the deployed Tiller instance, you need to explicitly grant their RBAC entities
permission to access it. This involves:

- Granting enough permissions to access the Tiller pod
- Generating and sharing TLS certificate key pairs to identify the client

`kubergrunt` automates this process in the `grant` and `configure` commands. For example, suppose you wanted to grant
access to the deployed Tiller to a group of users grouped under the RBAC group `dev`. You can grant them access using
the following command:

```
kubergrunt helm grant --tiller-namespace NAMESPACE_OF_TILLER --rbac-group dev --tls-common-name dev --tls-org YOUR_ORG
```

This will generate a new certificate key pair for the client and upload it as a `Secret`. Then, it will bind new RBAC
roles to the `dev` RBAC group that grant it permission to access the Tiller pod and the uploaded `Secret`.

This in turn allows your users to configure their local client using `kubergrunt`:

```
kubergrunt helm configure --tiller-namespace NAMESPACE_OF_TILLER --rbac-group dev
```

At the end of this, your users should have the same helm client setup as above.


## Appendix A: Why kubergrunt?

This Terraform example is not idiomatic Terraform code in that it relies on an external binary, `kubergrunt`, as opposed
to implementing the functionality using pure Terraform providers. This approach has some noticeable drawbacks:

- You have to install extra tools, so it is not a minimal `terraform init && terraform apply`.
- There are portability concerns, as there is no guarantee the tools work cross platform. We make every effort to test
  across the major operating systems (Linux, Mac OSX, and Windows), but we can't possibly test every combination, so
  there are bound to be portability issues.
- You don't have the declarative Terraform features that you have come to love, such as `plan`, updates through `apply`, and
  `destroy`.

That said, we decided to use this approach because of limitations in the existing providers that prevent implementing
this functionality in pure Terraform code.

`kubergrunt` fulfills the role of generating and managing TLS certificate key pairs, using Kubernetes `Secrets` as a
database. This allows us to deploy Tiller with TLS verification enabled. We could instead use the `tls` and `kubernetes`
providers in Terraform, but this has a few drawbacks:

- The [TLS provider](https://www.terraform.io/docs/providers/tls/index.html) stores the certificate key pairs in plain
  text in the Terraform state.
- The Kubernetes Secret resource in the provider [also stores the value in plain text in the Terraform
  state](https://www.terraform.io/docs/providers/kubernetes/r/secret.html).
- The grant and configure workflows are better suited to CLI tools than to Terraform.
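To illustrate the first two points, here is a rough sketch (not code from this repo) of generating a CA with the `tls` provider and storing it in a Kubernetes `Secret`. The resource arguments follow the provider versions current at the time of writing; the names and values are placeholders. Both the private key attribute and the `Secret` data end up, by value, in the Terraform state file.

```hcl
# Sketch only: generate a CA key pair with the tls provider and store it in a
# Kubernetes Secret. Both resources record the PEM material in Terraform state.

resource "tls_private_key" "ca" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "tls_self_signed_cert" "ca" {
  key_algorithm     = "${tls_private_key.ca.algorithm}"
  private_key_pem   = "${tls_private_key.ca.private_key_pem}"
  is_ca_certificate = true

  subject {
    common_name  = "tiller-ca"
    organization = "YOUR_ORG"
  }

  validity_period_hours = 8760
  allowed_uses          = ["cert_signing", "crl_signing", "digital_signature"]
}

resource "kubernetes_secret" "ca_certs" {
  metadata {
    name      = "tiller-ca-certs"
    namespace = "tiller-namespace"
  }

  data = {
    # These values are written verbatim into the Terraform state file.
    "ca.crt" = "${tls_self_signed_cert.ca.cert_pem}"
    "ca.key" = "${tls_private_key.ca.private_key_pem}"
  }
}
```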
`kubergrunt` works around this by generating the TLS certs and storing them in Kubernetes `Secrets` directly. In this
way, the generated TLS certs never leak into the Terraform state, as they are referenced by name when deploying Tiller
rather than by value.

Note that we intend to implement a pure Terraform version of this functionality, but we plan to continue to maintain the
`kubergrunt` approach for folks who are wary of leaking secrets into Terraform state.
