diff --git a/README.md b/README.md
index f05a2af..2f403b8 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,33 @@
-# Honestbee TF Workshop
+# Jakarta TF Workshop
 
 Working files for TF Workshop
 
 Request your assigned workstation and password from the trainer
+
+Trainee access:
+
+- [x] ability to launch EC2 instances only if:
+  - Owner tag == trainee name
+  - region = ap-southeast-1
+  - instance type = t2.micro
+
+- [ ] ability to create IAM policy for s3 bucket?
+- [ ] ability to create s3 bucket?
+
+Exercises:
+
+1. [x] launch instance
+1. [x] launch multiple instances (using count)
+1. [ ] modules exercise (which one?)
+   - Could create S3 bucket + bucket policy (using module)
+   - Could ask them to use an existing policy and only give them s3 rights...
+
+   Problem with the s3 exercise: it needs IAM permissions.
+   Problem with the RDS exercise: instance type and cost must be controlled... it might be the better option though.
+
+1. [x] state manipulation (using consul)
+
+Training requirements:
+- [ ] tf-modules push to s3 bucket
+- [x] tf-modules server over s3
+- [x] DNS for tf-modules server
diff --git a/kops/README.md b/kops/README.md
deleted file mode 100644
index 51ade8a..0000000
--- a/kops/README.md
+++ /dev/null
@@ -1,247 +0,0 @@
-# Kops Workshop
-
-This workshop covers usage of [kubernetes/kops](https://github.com/kubernetes/kops) and bundled utilities such as `channels`.
-
-## Workshop Introduction
-
-The `kops` tool aims to manage Kubernetes clusters the same way Kubernetes itself manages resources: through desired-state manifests.
-
-Kubernetes uses etcd for state storage; similarly, kops uses a **state store**, which can be either a Google Cloud Storage or an S3 bucket.
-
-An S3 bucket for state storage is created as part of this workshop setup.
-
-Additionally, `kops` bundles a utility to deploy Kubernetes add-ons called `channels`, which we will cover in this workshop as well.
-
-## Kops cluster maintenance
-
-### Load env vars
-
-On your workstation, an `.env` file has been created with all the configuration kops needs for the following exercises.
-
-Verify the contents of the `.env` file, then load these variables into your shell environment:
-
-```bash
-export $(cat .env | xargs)
-```
-
-### Create cluster spec
-
-Similar to `kubectl`, `kops` provides imperative commands to generate cluster definitions:
-
-```bash
-kops create cluster \
-  --node-count 3 \
-  --zones ap-southeast-1a \
-  --master-zones ap-southeast-1a \
-  --node-size t2.medium \
-  --master-size t2.medium \
-  --ssh-public-key ~/.ssh/kops_key.pub \
-  ${CLUSTER_NAME}
-```
-
-Now verify that the cluster definition was created in the kops state store:
-
-```bash
-kops get cluster
-
-# Also list instance groups related to the cluster
-kops get --name $CLUSTER_NAME instancegroups
-```
-
-### Get/Set cluster definitions
-
-Ideally we keep these cluster definitions as manifests under source control (Infrastructure as Code).
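-
-A cluster manifest is a plain Kubernetes-style object. As an abridged, illustrative sketch only (field names follow the kops `v1alpha2` API; exact fields vary by kops version, so treat the `kops get` output below as authoritative):
-
-```yaml
-apiVersion: kops.k8s.io/v1alpha2
-kind: Cluster
-metadata:
-  name: bee02-cluster.training.honestbee.com
-spec:
-  cloudProvider: aws
-  kubernetesVersion: 1.7.10
-  networkCIDR: 172.20.0.0/16
-  subnets:
-  - name: ap-southeast-1a
-    type: Public
-    zone: ap-southeast-1a
-```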
-
-To download these manifests, similarly to Kubernetes, use the `get` subcommand with `--output yaml`:
-
-Get:
-
-```bash
-kops get cluster ${CLUSTER_NAME} -o yaml > ${CLUSTER_NAME}-cluster.yaml
-kops get --name ${CLUSTER_NAME} instancegroups -o yaml > ${CLUSTER_NAME}-ig.yaml
-```
-
-**Note** use the `--full` flag to see all defaults.
-
-Review / edit the cluster and instance group manifests:
-
-```bash
-vim ${CLUSTER_NAME}-cluster.yaml
-```
-
-```bash
-vim ${CLUSTER_NAME}-ig.yaml
-```
-
-Read more about these manifests:
-
-- [Cluster Spec](https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md)
-- [Instance Groups](https://github.com/kubernetes/kops/blob/master/docs/instance_groups.md)
-
-During cluster bootstrap, manifests are read from the state store by the bootstrapping components.
-Thus, we need to ensure the updated manifests are pushed back into the state store.
-
-Set:
-
-```bash
-kops replace -f ${CLUSTER_NAME}-cluster.yaml
-kops replace -f ${CLUSTER_NAME}-ig.yaml
-```
-
-### Generate Terraform config
-
-```bash
-kops update cluster --name ${CLUSTER_NAME} \
-  --target=terraform \
-  --out=modules/clusters/${CLUSTER_NAME}
-```
-
-**Note** At this stage, `kops` will automatically configure your `kubeconfig` as well.
-We can also fetch the `kubeconfig` manually:
-
-```bash
-kops export --name ${CLUSTER_NAME} kubecfg
-```
-
-#### Build cluster
-
-As we heavily use Terraform modules and manage infrastructure outside of Kubernetes with Terraform, we import the kops-generated module into our `main.tf` file:
-
-```hcl
-module "cluster-bee02" {
-  source = "./modules/clusters/bee02-cluster.training.honestbee.com"
-}
-```
-
-Initialise, plan and apply the Terraform configuration:
-
-```bash
-terraform init
-terraform plan
-terraform apply
-```
-
-Wait for the cluster to be ready...
-
-```bash
-until kubectl cluster-info; do (( i++ )); echo "Cluster not available yet, waiting for 5 seconds ($i)"; sleep 5; done
-```
-
-**Troubleshooting**
-
-- Get the public IP from the master and ssh into it
-
-  ```
-  ssh -i ~/.ssh/kops_key admin@54.254.203.127
-  ```
-
-- Check the status of the systemd units (kubelet / docker)
-
-  ```
-  sudo systemctl status kubelet
-  sudo systemctl status docker
-  ```
-
-- Follow the `kubelet` journal logs and look for errors
-
-  ```
-  sudo journalctl -u kubelet
-  ```
-
-- Follow the `api-server` logs and look for errors
-
-  ```
-  sudo tail -f /var/log/kube-apiserver.log
-  ```
-
-### Rolling Updates
-
-...To be completed (note about the danger of unbalanced clusters)
-
-## Kops addon channels
-
-Kubernetes addons are bundles of resources that provide specific functionality (such as dashboards, auto scaling, ...). Multiple addons can be versioned together and managed through the concept of
-addon channels. The `channels` tool bundled with `kops` aims to simplify the management of addons. It is similar to Helm, but without the need for a server-side component - though it cannot provide the templating and release management Helm offers.
-
-Addon channels are defined as a list of addons stored in an `addons.yaml` file. This list keeps track of all addon versions applicable for a particular channel. Each addon may have multiple
-Kubernetes resource manifests combined into a single YAML file. The `channels` tool keeps track of which addon version is deployed in a cluster and automates
-the creation of all addons in the channel.
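-
-As a minimal sketch of that format (`my-addon` is a hypothetical name; field names follow the `beekeeper` channel shipped later in this workshop, the `manifest` path is resolved relative to `addons.yaml`, and bumping `version` is what triggers a re-apply):
-
-```yaml
-kind: Addons
-metadata:
-  name: example
-spec:
-  addons:
-  - name: my-addon.addons.k8s.io
-    manifest: my-addon.addons.k8s.io/v1.0.0.yaml
-    kubernetesVersion: '>=1.7.0'
-    selector:
-      k8s-addon: my-addon.addons.k8s.io
-    version: 1.0.0
-```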
-
-### Deploy upstream channels
-
-There are several upstream channels, such as dashboard and heapster; we may install these as follows:
-
-```bash
-channels apply channel monitoring-standalone --yes
-channels apply channel kubernetes-dashboard --yes
-```
-
-> Currently `channels` resolves simple channel names such as `kubernetes-dashboard` by looking for the `addons.yaml` list under `master` in [kubernetes/kops/addons](https://github.com/kubernetes/kops/tree/master/addons/kubernetes-dashboard).
-> See the [channels/pkg/cmd/apply_channel.go](https://sourcegraph.com/github.com/kubernetes/kops@1.7.1/-/blob/channels/pkg/cmd/apply_channel.go#L90) source
-
-At this stage, we can review all addons that were deployed by `channels` (notice several addons were already deployed as part of the kops cluster bootstrap):
-
-```bash
-channels get addons
-```
-
-> Good to know: behind the scenes, `channels` uses annotations on the `kube-system` namespace to keep track of deployed addon versions.
-
-We can get similar output using `jq`:
-
-```bash
-kubectl get ns kube-system -o json | jq '.metadata.annotations | with_entries(select(.value | contains("addons"))) | map_values(fromjson | .version)'
-```
-
-Now that the dashboard is deployed - notice that since we did not make our cluster private, we can access the dashboard from anywhere (requires basic-auth):
-
-https://api.bee02-cluster.training.honestbee.com/ui
-
-Once we accept the untrusted root cluster certificate, we can get the basic-auth credentials from our `kubeconfig`:
-
-```bash
-kubectl config view -o json | jq '[.users[] | select(.name | contains("basic-auth")) | {(.name): {(.user.username): .user.password}}]'
-```
-
-### Deploy custom Honestbee `beekeeper` channel
-
-As Honestbee depends on Helm for all of its deployments, we created our own addons channel called `beekeeper` to bootstrap Helm and
-other core Kubernetes addons (namespaces, service accounts, registry secrets, rbac, ...). Sample addons are available on your workstation for
-practice purposes.
-
-```
-beekeeper/
-├── addons.yaml
-├── kube-state-metrics.addons.k8s.io
-│   ├── README.md
-│   ├── v1.0.1.yaml
-│   └── v1.1.0-rc.0.yaml
-├── namespaces.honestbee.io
-│   └── k8s-1.7.yaml
-└── tiller.addons.k8s.io
-    └── k8s-1.7.yaml
-```
-
-To apply this channel to the cluster, run the following command:
-
-```
-channels apply channel -f beekeeper/addons.yaml --yes
-```
-
-## Cleaning up
-
-### Delete cluster
-
-As the cloud resources are managed through Terraform, the only thing we need to do here is delete the manifest:
-
-```bash
-kops delete cluster --name ${CLUSTER_NAME} --unregister --yes
-```
-
-## Todo
-
-- Add section about rolling updates
-- Add section about `kops toolbox template`
-- Add section on how to clean up clusters
diff --git a/kops/beekeeper/addons.yaml b/kops/beekeeper/addons.yaml
deleted file mode 100644
index 18b35dd..0000000
--- a/kops/beekeeper/addons.yaml
+++ /dev/null
@@ -1,24 +0,0 @@
-kind: Addons
-metadata:
-  name: beekeeper
-spec:
-  addons:
-  - name: tiller.addons.k8s.io
-    manifest: tiller.addons.k8s.io/k8s-1.7.yaml
-    kubernetesVersion: '>=1.7.0'
-    id: k8s-1.7
-    selector:
-      k8s-addon: tiller.addons.k8s.io
-    version: 2.7.2 # helm version
-  - name: namespaces.honestbee.io
-    manifest: namespaces.honestbee.io/k8s-1.7.yaml
-    kubernetesVersion: '>=1.7.0'
-    selector:
-      k8s-addon: namespaces.honestbee.io
-    version: 1.1.2
-  - name: kube-state-metrics.addons.k8s.io
-    manifest: kube-state-metrics.addons.k8s.io/v1.1.0-rc.0.yaml
-    kubernetesVersion: '>=1.7.0'
-    selector:
-      k8s-addon: kube-state-metrics.addons.k8s.io
-    version: v1.1.0-rc.0
diff --git a/kops/beekeeper/kube-state-metrics.addons.k8s.io/README.md b/kops/beekeeper/kube-state-metrics.addons.k8s.io/README.md
deleted file mode 100644
index 412bb6a..0000000
--- a/kops/beekeeper/kube-state-metrics.addons.k8s.io/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-## Usage
-channels apply channel kube-state-metrics --yes
diff --git a/kops/beekeeper/kube-state-metrics.addons.k8s.io/v1.0.1.yaml b/kops/beekeeper/kube-state-metrics.addons.k8s.io/v1.0.1.yaml
deleted file mode 100644
index ee5a283..0000000
--- a/kops/beekeeper/kube-state-metrics.addons.k8s.io/v1.0.1.yaml
+++ /dev/null
@@ -1,158 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
-metadata:
-  name: kube-state-metrics
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: kube-state-metrics
-subjects:
-- kind: ServiceAccount
-  name: kube-state-metrics
-  namespace: kube-system
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRole
-metadata:
-  name: kube-state-metrics
-rules:
-- apiGroups: [""]
-  resources:
-  - nodes
-  - pods
-  - services
-  - resourcequotas
-  - replicationcontrollers
-  - limitranges
-  - persistentvolumeclaims
-  - namespaces
-  verbs: ["list", "watch"]
-- apiGroups: ["extensions"]
-  resources:
-  - daemonsets
-  - deployments
-  - replicasets
-  verbs: ["list", "watch"]
-- apiGroups: ["apps"]
-  resources:
-  - statefulsets
-  verbs: ["list", "watch"]
-- apiGroups: ["batch"]
-  resources:
-  - cronjobs
-  - jobs
-  verbs: ["list", "watch"]
----
-apiVersion: extensions/v1beta1
-kind: Deployment
-metadata:
-  name: kube-state-metrics
-  namespace: kube-system
-spec:
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        k8s-app: kube-state-metrics
-    spec:
-      serviceAccountName: kube-state-metrics
-      containers:
-      - name: kube-state-metrics
-        image: quay.io/coreos/kube-state-metrics:v1.0.1
-        ports:
-        - name: http-metrics
-          containerPort: 8080
-        readinessProbe:
-          httpGet:
-            path: /healthz
-            port: 8080
-          initialDelaySeconds: 5
-          timeoutSeconds: 5
-        resources:
-          requests:
-            memory: 100Mi
-            cpu: 100m
-          limits:
-            memory: 500Mi
-            cpu: 200m
-      - name: addon-resizer
-        image: gcr.io/google_containers/addon-resizer:1.0
-        resources:
-          limits:
-            cpu: 100m
-            memory: 30Mi
-          requests:
-            cpu: 100m
-            memory: 30Mi
-        env:
-        - name: MY_POD_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.name
-        - name: MY_POD_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        command:
-        - /pod_nanny
-        - --container=kube-state-metrics
-        - --cpu=100m
-        - --extra-cpu=1m
-        - --memory=100Mi
-        - --extra-memory=2Mi
-        - --threshold=5
-        - --deployment=kube-state-metrics
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: RoleBinding
-metadata:
-  name: kube-state-metrics
-  namespace: kube-system
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: Role
-  name: kube-state-metrics-resizer
-subjects:
-- kind: ServiceAccount
-  name: kube-state-metrics
-  namespace: kube-system
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: Role
-metadata:
-  namespace: kube-system
-  name: kube-state-metrics-resizer
-rules:
-- apiGroups: [""]
-  resources:
-  - pods
-  verbs: ["get"]
-- apiGroups: ["extensions"]
-  resources:
-  - deployments
-  resourceNames: ["kube-state-metrics"]
-  verbs: ["get", "update"]
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: kube-state-metrics
-  namespace: kube-system
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: kube-state-metrics
-  namespace: kube-system
-  labels:
-    k8s-app: kube-state-metrics
-  annotations:
-    prometheus.io/scrape: 'true'
-spec:
-  ports:
-  - name: http-metrics
-    port: 8080
-    targetPort: http-metrics
-    protocol: TCP
-  selector:
-    k8s-app: kube-state-metrics
diff --git a/kops/beekeeper/kube-state-metrics.addons.k8s.io/v1.1.0-rc.0.yaml b/kops/beekeeper/kube-state-metrics.addons.k8s.io/v1.1.0-rc.0.yaml
deleted file mode 100644
index 1bd0cde..0000000
--- a/kops/beekeeper/kube-state-metrics.addons.k8s.io/v1.1.0-rc.0.yaml
+++ /dev/null
@@ -1,158 +0,0 @@
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
-metadata:
-  name: kube-state-metrics
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: kube-state-metrics
-subjects:
-- kind: ServiceAccount
-  name: kube-state-metrics
-  namespace: kube-system
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRole
-metadata:
-  name: kube-state-metrics
-rules:
-- apiGroups: [""]
-  resources:
-  - nodes
-  - pods
-  - services
-  - resourcequotas
-  - replicationcontrollers
-  - limitranges
-  - persistentvolumeclaims
-  - namespaces
-  verbs: ["list", "watch"]
-- apiGroups: ["extensions"]
-  resources:
-  - daemonsets
-  - deployments
-  - replicasets
-  verbs: ["list", "watch"]
-- apiGroups: ["apps"]
-  resources:
-  - statefulsets
-  verbs: ["list", "watch"]
-- apiGroups: ["batch"]
-  resources:
-  - cronjobs
-  - jobs
-  verbs: ["list", "watch"]
----
-apiVersion: extensions/v1beta1
-kind: Deployment
-metadata:
-  name: kube-state-metrics
-  namespace: kube-system
-spec:
-  replicas: 1
-  template:
-    metadata:
-      labels:
-        k8s-app: kube-state-metrics
-    spec:
-      serviceAccountName: kube-state-metrics
-      containers:
-      - name: kube-state-metrics
-        image: quay.io/coreos/kube-state-metrics:v1.1.0-rc.0
-        ports:
-        - name: http-metrics
-          containerPort: 8080
-        readinessProbe:
-          httpGet:
-            path: /healthz
-            port: 8080
-          initialDelaySeconds: 5
-          timeoutSeconds: 5
-        resources:
-          requests:
-            memory: 100Mi
-            cpu: 100m
-          limits:
-            memory: 500Mi
-            cpu: 200m
-      - name: addon-resizer
-        image: gcr.io/google_containers/addon-resizer:1.0
-        resources:
-          limits:
-            cpu: 100m
-            memory: 30Mi
-          requests:
-            cpu: 100m
-            memory: 30Mi
-        env:
-        - name: MY_POD_NAME
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.name
-        - name: MY_POD_NAMESPACE
-          valueFrom:
-            fieldRef:
-              fieldPath: metadata.namespace
-        command:
-        - /pod_nanny
-        - --container=kube-state-metrics
-        - --cpu=100m
-        - --extra-cpu=1m
-        - --memory=100Mi
-        - --extra-memory=2Mi
-        - --threshold=5
-        - --deployment=kube-state-metrics
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: RoleBinding
-metadata:
-  name: kube-state-metrics
-  namespace: kube-system
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: Role
-  name: kube-state-metrics-resizer
-subjects:
-- kind: ServiceAccount
-  name: kube-state-metrics
-  namespace: kube-system
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: Role
-metadata:
-  namespace: kube-system
-  name: kube-state-metrics-resizer
-rules:
-- apiGroups: [""]
-  resources:
-  - pods
-  verbs: ["get"]
-- apiGroups: ["extensions"]
-  resources:
-  - deployments
-  resourceNames: ["kube-state-metrics"]
-  verbs: ["get", "update"]
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: kube-state-metrics
-  namespace: kube-system
----
-apiVersion: v1
-kind: Service
-metadata:
-  name: kube-state-metrics
-  namespace: kube-system
-  labels:
-    k8s-app: kube-state-metrics
-  annotations:
-    prometheus.io/scrape: 'true'
-spec:
-  ports:
-  - name: http-metrics
-    port: 8080
-    targetPort: http-metrics
-    protocol: TCP
-  selector:
-    k8s-app: kube-state-metrics
diff --git a/kops/beekeeper/namespaces.honestbee.io/k8s-1.7.yaml b/kops/beekeeper/namespaces.honestbee.io/k8s-1.7.yaml
deleted file mode 100644
index 37f45ca..0000000
--- a/kops/beekeeper/namespaces.honestbee.io/k8s-1.7.yaml
+++ /dev/null
@@ -1,79 +0,0 @@
-apiVersion: v1
-kind: Namespace
-metadata:
-  labels:
-    k8s-addon: namespaces.honestbee.io
-  name: "frontend"
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: Role
-metadata:
-  labels:
-    k8s-addon: namespaces.honestbee.io
-  namespace: "frontend"
-  name: "frontend-manager"
-rules:
-- apiGroups: [""]
-  resources: ["*"]
-  verbs: ["*"]
-- apiGroups: [""]
-  resources:
-  - pods/portforward
-  verbs:
-  - create
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: RoleBinding
-metadata:
-  labels:
-    k8s-addon: namespaces.honestbee.io
-  namespace: "frontend"
-  name: "frontend-manager"
-roleRef:
-  apiGroup: ""
-  kind: Role
-  name: "frontend-manager"
-subjects:
-- kind: Group
-  name: honestbee:frontend-staging
-  apiGroup: rbac.authorization.k8s.io
----
-apiVersion: v1
-kind: Namespace
-metadata:
-  labels:
-    k8s-addon: namespaces.honestbee.io
-  name: "backend"
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: Role
-metadata:
-  labels:
-    k8s-addon: namespaces.honestbee.io
-  namespace: "backend"
-  name: "backend-manager"
-rules:
-- apiGroups: [""]
-  resources: ["*"]
-  verbs: ["*"]
-- apiGroups: [""]
-  resources:
-  - pods/portforward
-  verbs:
-  - create
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: RoleBinding
-metadata:
-  labels:
-    k8s-addon: namespaces.honestbee.io
-  namespace: "backend"
-  name: "backend-manager"
-roleRef:
-  apiGroup: ""
-  kind: Role
-  name: "backend-manager"
-subjects:
-- kind: Group
-  name: honestbee:backend-staging
-  apiGroup: rbac.authorization.k8s.io
diff --git a/kops/beekeeper/tiller.addons.k8s.io/k8s-1.7.yaml b/kops/beekeeper/tiller.addons.k8s.io/k8s-1.7.yaml
deleted file mode 100644
index b2d2c05..0000000
--- a/kops/beekeeper/tiller.addons.k8s.io/k8s-1.7.yaml
+++ /dev/null
@@ -1,86 +0,0 @@
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  labels:
-    k8s-addon: tiller.addons.k8s.io
-  name: tiller
-  namespace: kube-system
----
-apiVersion: rbac.authorization.k8s.io/v1beta1
-kind: ClusterRoleBinding
-metadata:
-  labels:
-    k8s-addon: tiller.addons.k8s.io
-  name: tiller
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: cluster-admin
-subjects:
-- kind: ServiceAccount
-  name: tiller
-  namespace: kube-system
----
-apiVersion: extensions/v1beta1
-kind: Deployment
-metadata:
-  labels:
-    k8s-addon: tiller.addons.k8s.io
-    app: helm
-    name: tiller
-  name: tiller-deploy
-  namespace: kube-system
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: helm
-      name: tiller
-  strategy:
-    rollingUpdate:
-      maxSurge: 1
-      maxUnavailable: 1
-    type: RollingUpdate
-  template:
-    metadata:
-      creationTimestamp: null
-      labels:
-        app: helm
-        name: tiller
-    spec:
-      containers:
-      - env:
-        - name: TILLER_NAMESPACE
-          value: kube-system
-        - name: TILLER_HISTORY_MAX
-          value: "10"
-        image: gcr.io/kubernetes-helm/tiller:v2.7.2
-        imagePullPolicy: IfNotPresent
-        livenessProbe:
-          failureThreshold: 3
-          httpGet:
-            path: /liveness
-            port: 44135
-            scheme: HTTP
-          initialDelaySeconds: 1
-          periodSeconds: 10
-          successThreshold: 1
-          timeoutSeconds: 1
-        name: tiller
-        ports:
-        - containerPort: 44134
-          name: tiller
-          protocol: TCP
-        readinessProbe:
-          failureThreshold: 3
-          httpGet:
-            path: /readiness
-            port: 44135
-            scheme: HTTP
-          initialDelaySeconds: 1
-          periodSeconds: 10
-          successThreshold: 1
-          timeoutSeconds: 1
-      dnsPolicy: ClusterFirst
-      serviceAccount: tiller
-      serviceAccountName: tiller
diff --git a/kops/env.tpl b/kops/env.tpl
deleted file mode 100644
index 0bef876..0000000
--- a/kops/env.tpl
+++ /dev/null
@@ -1,4 +0,0 @@
-AWS_ACCESS_KEY="$aws_key"
-AWS_SECRET_KEY="$aws_secret"
-KOPS_STATE_STORE="s3://$state_bucket_name"
-CLUSTER_NAME="$cluster_name"
diff --git a/setup/iam_custom_policies.tf b/setup/iam_custom_policies.tf
new file mode 100644
index 0000000..98d5697
--- /dev/null
+++ b/setup/iam_custom_policies.tf
@@ -0,0 +1,213 @@
+resource "aws_iam_policy" "trainee_ec2" {
+  count  = "${length(var.users)}"
+  name   = "${var.users[count.index]}_ec2_policy"
+  path   = "/"
+  policy = "${element(data.aws_iam_policy_document.trainee_ec2.*.json,count.index)}"
+}
+
+resource "aws_iam_policy" "trainee_rds" {
+  count  = "${length(var.users)}"
+  name   = "${var.users[count.index]}_rds_policy"
+  path   = "/"
+  policy = "${element(data.aws_iam_policy_document.trainee_rds.*.json,count.index)}"
+}
+
+data "aws_iam_policy_document" "trainee_ec2" {
+  # https://aws.amazon.com/blogs/security/demystifying-ec2-resource-level-permissions/
+  count = "${length(var.users)}"
+
+  statement {
+    sid = "AllowDescribeForAllResources"
+
+    actions = [
+      "ec2:Describe*",
+    ]
+
+    resources = [
+      "*",
+    ]
+  }
+
+  statement {
+    sid = "OnlyAllowCertainInstanceTypesToBeCreated"
+
+    actions = [
+      "ec2:RunInstances",
+    ]
+
+    resources = [
+      "arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:instance/*",
+    ]
+
+    condition {
+      test     = "StringEquals"
+      variable = "ec2:InstanceType"
+
+      values = [
+        "t2.micro",
+      ]
+    }
+  }
+
+  statement {
+    sid = "AllowUserToTagInstances"
+
+    actions = [
+      "ec2:CreateTags",
+    ]
+
+    resources = [
+      "arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:instance/*",
+      "arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:security-group/*",
+    ]
+
+    # allow any tag, but if the tag is Owner, force it to the username
+    condition {
+      test     = "StringEquals"
+      variable = "aws:RequestTag/Owner"
+
+      values = [
+        "${var.users[count.index]}",
+      ]
+    }
+  }
+
+  statement {
+    sid = "AllowAdditionalResourcesToSupportLaunchingEC2Instances"
+
+    actions = [
+      "ec2:RunInstances",
+    ]
+
+    resources = [
+      "arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:key-pair/*",
+      "arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:security-group/*",
+      "arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:volume/*",
+      "arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:network-interface/*",
+      "arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:subnet/*",
+      "arn:aws:ec2:${var.aws_region}::image/ami-*",
+    ]
+  }
+
+  statement {
+    sid = "AllowUserToStopStartDeleteUntagHisInstances"
+
+    actions = [
+      "ec2:TerminateInstances",
+      "ec2:StopInstances",
+      "ec2:StartInstances",
+      "ec2:DeleteTags",
+    ]
+
+    resources = [
+      "arn:aws:ec2:${var.aws_region}:${data.aws_caller_identity.current.account_id}:instance/*",
+    ]
+
+    condition {
+      test     = "StringEquals"
+      variable = "ec2:ResourceTag/Owner"
+
+      values = [
+        "${var.users[count.index]}",
+      ]
+    }
+  }
+}
+
+data "aws_iam_policy_document" "trainee_rds" {
+  # INCOMPLETE - these IAM permissions for RDS are currently not finalised
+  # https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAM.AccessControl.IdentityBased.html
+  count = "${length(var.users)}"
+
+  statement {
+    sid = "OnlyAllowCertainPostgresInstanceCreate"
+
+    actions = [
+      "rds:CreateDBInstance",
+    ]
+
+    resources = [
+      "arn:aws:rds:${var.aws_region}:${data.aws_caller_identity.current.account_id}:db:*",
+    ]
+
+    condition {
+      test     = "StringEquals"
+      variable = "rds:DatabaseEngine"
+
+      values = [
+        "postgres",
+      ]
+    }
+
+    condition {
+      test     = "Bool"
+      variable = "rds:MultiAz"
+
+      values = [
+        false,
+      ]
+    }
+
+    condition {
+      test     = "StringEquals"
+      variable = "rds:DatabaseClass"
+
+      values = [
+        "db.t2.micro",
+        "db.t2.medium",
+      ]
+    }
+  }
+
+  # statement {
+  #   sid = "AllowUserToCreateSG"
+  #
+  #   actions = [
+  #     "ec2:*SecurityGroup*",
+  #   ]
+  #
+  #   resources = [
+  #     "*",
+  #   ]
+  # }
+
+  statement {
+    sid = "AllowMisc"
+
+    actions = [
+      "rds:CreateDBSecurityGroup",
+      "rds:CreateDBSnapshot",
+      "rds:CreateDBSubnetGroup",
+      "rds:StartDBInstance",
+      "rds:StopDBInstance",
+      "rds:Delete*",
+    ]
+
+    resources = [
+      "*",
+    ]
+  }
+
+  statement {
+    sid    = "DenyPIOPSCreate"
+    effect = "Deny"
+
+    actions = [
+      "rds:CreateDBInstance",
+    ]
+
+    resources = [
+      "*",
+    ]
+
+    condition {
+      test     = "NumericNotEquals"
+      variable = "rds:Piops"
+
+      values = [
+        "0",
+      ]
+    }
+  }
+}
diff --git a/setup/kops.tf b/setup/kops.tf
deleted file mode 100644
index 9f7a2ce..0000000
--- a/setup/kops.tf
+++ /dev/null
@@ -1,43 +0,0 @@
-## Added for kops workshop
-
-resource "aws_iam_user_policy_attachment" "aws_users_s3" {
-  count      = "${length(var.users)}"
-  user       = "${element(aws_iam_user.aws_users.*.name,count.index)}"
-  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
-}
-
-resource "aws_iam_user_policy_attachment" "aws_users_iam" {
-  count      = "${length(var.users)}"
-  user       = "${element(aws_iam_user.aws_users.*.name,count.index)}"
-  policy_arn = "arn:aws:iam::aws:policy/IAMFullAccess"
-}
-
-resource "aws_iam_user_policy_attachment" "aws_users_vpc" {
-  count      = "${length(var.users)}"
-  user       = "${element(aws_iam_user.aws_users.*.name,count.index)}"
-  policy_arn = "arn:aws:iam::aws:policy/AmazonVPCFullAccess"
-}
-
-data "aws_caller_identity" "current" {}
-
-resource "aws_s3_bucket" "state_store" {
-  count  = "${length(var.users)}"
-  bucket = "${data.aws_caller_identity.current.account_id}-${element(aws_iam_user.aws_users.*.name,count.index)}-kops-state-store"
-  region = "${var.aws_region}"
-  acl    = "private"
-
-  force_destroy = "true"
-
-  tags {
-    builtWith = "terraform"
-    system    = "kops"
-  }
-
-  versioning {
-    enabled = true
-  }
-
-  lifecycle {
-    # prevent_destroy = true
-  }
-}
diff --git a/setup/main.tf b/setup/main.tf
index 856aa0e..73b9a90 100644
--- a/setup/main.tf
+++ b/setup/main.tf
@@ -32,9 +32,6 @@ variable "users" {
     "rider01",
   ]
 
-  # "rider02",
-  # "rider03",
-  # "rider04",
   # "rider02",
   # "rider03",
   # "rider04",
@@ -44,6 +41,12 @@ variable "users" {
   # "rider08",
   # "rider09",
   # "rider10",
+  # "rider11",
+  # "rider12",
+  # "rider13",
+  # "rider14",
+  # "rider15",
+  # "rider16",
 }
 
 provider "aws" {
@@ -120,6 +123,8 @@ data "aws_subnet" "default_b" {
   availability_zone = "${var.aws_region}b"
 }
 
+data "aws_caller_identity" "current" {}
+
 resource "aws_security_group" "workshop" {
   name        = "allow_all"
   description = "Workshop - Allow all inbound traffic"
@@ -148,22 +153,22 @@ resource "aws_iam_user" "aws_users" {
   force_destroy = true
 }
 
-resource "aws_iam_user_policy_attachment" "aws_users" {
+resource "aws_iam_user_policy_attachment" "aws_users_ec2" {
   count      = "${length(var.users)}"
   user       = "${element(aws_iam_user.aws_users.*.name,count.index)}"
-  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
+  policy_arn = "${element(aws_iam_policy.trainee_ec2.*.arn,count.index)}"
 }
 
-resource "aws_iam_user_policy_attachment" "aws_users_rds" {
+resource "aws_iam_user_policy_attachment" "aws_users_rds_ro" {
   count      = "${length(var.users)}"
   user       = "${element(aws_iam_user.aws_users.*.name,count.index)}"
-  policy_arn = "arn:aws:iam::aws:policy/AmazonRDSFullAccess"
+  policy_arn = "arn:aws:iam::aws:policy/AmazonRDSReadOnlyAccess"
 }
 
-resource "aws_iam_user_policy_attachment" "aws_users_r53" {
+resource "aws_iam_user_policy_attachment" "aws_users_rds" {
   count      = "${length(var.users)}"
   user       = "${element(aws_iam_user.aws_users.*.name,count.index)}"
-  policy_arn = "arn:aws:iam::aws:policy/AmazonRoute53FullAccess"
+  policy_arn = "${element(aws_iam_policy.trainee_rds.*.arn,count.index)}"
 }
 
 resource "aws_iam_access_key" "aws_keys" {
@@ -200,7 +205,7 @@ data "template_file" "cloudconfig" {
     sigil_version   = "0.4.0"
     kubectl_version = "v1.9.3"
     helm_version    = "v2.8.2"
-    docker_version  = "17.09.0~ce-0~ubuntu"
+    docker_version  = "18.06.0~ce~3-0~ubuntu"
     usql_version    = "0.5.0"
     consul_version  = "1.0.0"
     kops_version    = "1.9.0"
@@ -216,8 +221,6 @@ data "template_file" "cloudconfig" {
     subnet_a          = "${data.aws_subnet.default_a.id}"
     subnet_b          = "${data.aws_subnet.default_b.id}"
    ami               = "${data.aws_ami.ubuntu.id}"
-    state_bucket_name = "${element(aws_s3_bucket.state_store.*.id,count.index)}"
-    cluster_name      = "${var.users[count.index]}-cluster.${var.subdomain}.${var.domain}.${var.tld}"
   }
 }
 
@@ -248,77 +251,48 @@ resource "aws_instance" "workstations" {
   key_name = "${var.users[count.index]}"
 
   tags {
-    Name = "${var.users[count.index]}"
+    Name = "${var.users[count.index]}-workstation"
   }
 
   lifecycle {
     ignore_changes = ["user_data", "ami"]
   }
+}
-
-  connection {
-    type        = "ssh"
-    user        = "ubuntu"
-    private_key = "${element(tls_private_key.user-ssh-keys.*.private_key_pem, count.index)}"
-  }
-
-  provisioner "file" {
-    content     = "${tls_private_key.deploy-key.private_key_pem}"
-    destination = "/home/ubuntu/.ssh/deploy_key"
-  }
-
-  provisioner "file" {
"${element(tls_private_key.user-ssh-keys.*.private_key_pem, count.index)}" - destination = "/home/ubuntu/.ssh/kops_key" - } - - provisioner "file" { - content = "${element(tls_private_key.user-ssh-keys.*.public_key_openssh, count.index)}" - destination = "/home/ubuntu/.ssh/kops_key.pub" - } - - provisioner "file" { - content = < rds/main.tf sigil -p -f rds/terraform.tfvars.tpl aws_key=${aws_key} aws_secret=${aws_secret} aws_region=${aws_region} sg_group=${sg_group} subnet_a=${subnet_a} subnet_b=${subnet_b}> rds/terraform.tfvars # sigil -p -f dns/terraform.tfvars.tpl aws_key=${aws_key} aws_secret=${aws_secret} > dns/terraform.tfvars - sigil -p -f kops/env.tpl aws_key=${aws_key} aws_secret=${aws_secret} state_bucket_name=${state_bucket_name} cluster_name=${cluster_name} > kops/.env rm *.tpl rm rds/*.tpl # rm dns/*.tpl - rm kops/*.tpl + # rm kops/*.tpl cd .. chown -R training:training ${ws_dir}/ # re-use for Kubernetes / Helm training diff --git a/setup/templates/tf-modules-cloud-config.tpl b/setup/templates/tf-modules-cloud-config.tpl new file mode 100644 index 0000000..8e18431 --- /dev/null +++ b/setup/templates/tf-modules-cloud-config.tpl @@ -0,0 +1,56 @@ +#cloud-config +# Order of cloud-init execution - https://stackoverflow.com/a/37190866/138469 +hostname: modules +repo_update: true +repo_upgrade: all +packages: + - zip + - jq + # docker requirements + - apt-transport-https + - ca-certificates + - software-properties-common + +# one time setup +runcmd: + - /usr/local/sbin/install_docker.sh + - /usr/local/sbin/install_sigil.sh + - /usr/local/sbin/setup.sh + - docker run -d -p 80:8080 -v /tmp:/tmp --env-file ~ubuntu/modules-env quay.io/honestbee/s3server:latest -bucket=s3://${modules_bucket} -s3region=${aws_region} + +output: + all: '| tee -a /var/log/cloud-init-output.log' + +groups: + - training +# see http://cloudinit.readthedocs.io/en/latest/topics/modules.html#users-and-groups +users: + - default + +write_files: + - path: /usr/local/sbin/install_sigil.sh + permissions: '0755' + content: | + #!/bin/bash + VERSION=${sigil_version} + ARCH=$(uname -sm|tr \ _) + curl -L https://github.com/gliderlabs/sigil/releases/download/v$${VERSION}/sigil_$${VERSION}_$${ARCH}.tgz | tar -zxC /usr/local/bin + - path: /usr/local/sbin/setup.sh + permissions: '0755' + content: | + #!/bin/bash + echo "AWS_ACCESS_KEY_ID=${aws_key}" >> ~ubuntu/modules-env + echo "AWS_SECRET_ACCESS_KEY=${aws_secret}" >> ~ubuntu/modules-env + - path: /usr/local/sbin/install_docker.sh + permissions: '0755' + content: | + #!/bin/bash + VERSION=${docker_version} + curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - + sudo add-apt-repository \ + "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ + $(lsb_release -cs) \ + stable" + sudo apt-get update + sudo apt-get install docker-ce=$${VERSION} -y + sudo usermod -aG docker ubuntu