# PvC Base and ECS on AWS IaaS

> Constructs CDP Private Cloud Base and ECS clusters running on AWS IaaS.

A summary of the infrastructure and cluster configuration is given below.

| Item | Value |
| ------------------------------ | ------ |
| _**PvC Base Version**_ | 7.1.9 |
| _**Cloudera Manager Version**_ | 7.11.3 |
| _**ECS Version**_ | 1.5.3 |
| _**DNS & Directory Service**_ | FreeIPA server deployed as part of automation |
| _**Infrastructure Platform**_ | AWS IaaS |
| _**Num Nodes Created**_ | 8 |
| _FreeIPA Server Nodes_ | 1 |
| _Base Master Nodes_ | 1 |
| _Base Worker Nodes_ | 2 |
| _ECS Master Nodes_ | 1 |
| _ECS Worker Nodes_ | 3 |

## Known Issues

| Issue | Description | Workaround |
|-------|-------------|------------|
| Cluster instances unavailable after the `external_setup.yml` playbook | The cluster EC2 instances become unavailable after the `external_setup.yml` playbook runs. During subsequent playbooks the hosts become unreachable, and in the EC2 console the VM instances fail the reachability health check. | Restart the EC2 instances via the console. |

## Requirements

To run, you need:

* Docker (or a Docker alternative)
* `ansible-navigator`
* AWS credentials
* CDP Private Cloud Base license file
* SSH key(s) for bastion/jump host and cluster

### Configuration Variables

Configuration is passed via environment variables and a user-managed configuration file.

#### Environment Variables

* Set up the following definition environment variables:

  | Variable | Description | Status |
  |----------|-------------|--------|
  | `SSH_PUBLIC_KEY_FILE` | File path to the SSH public key that will be uploaded to the cloud provider (using the `name_prefix` variable as the key label). E.g. `/Users/example/.ssh/demo_ops.pub` | Mandatory |
  | `SSH_PRIVATE_KEY_FILE` | File path to the SSH private key. E.g. `/Users/example/.ssh/demo_ops` | Mandatory |
  | `CDP_LICENSE_FILE` | File path to a CDP Private Cloud Base license. E.g. `/Users/example/Documents/example_cloudera_license.txt` | Mandatory |
  | `AWS_PROFILE` | The profile label for your AWS credentials. Otherwise, use the associated `AWS_*` parameters. | Mandatory |
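
As a convenience, the mandatory variables can be exported in your shell before running the playbooks. The paths and profile below are placeholders; substitute your own keys, license file, and AWS profile:

```bash
# Placeholder paths - substitute your own SSH keys, license file, and AWS profile.
export SSH_PUBLIC_KEY_FILE="$HOME/.ssh/demo_ops.pub"
export SSH_PRIVATE_KEY_FILE="$HOME/.ssh/demo_ops"
export CDP_LICENSE_FILE="$HOME/Documents/example_cloudera_license.txt"
export AWS_PROFILE="default"
```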
#### Configuration file variables

Copy `config-template.yml` to `config.yml` and edit this user-facing configuration file to match your particular deployment.

> [!IMPORTANT]
> `name_prefix` should be 4-7 characters and is the "primary key" for the deployment.

```yaml
name_prefix: "{{ mandatory }}" # Unique identifier for the deployment
infra_region: "us-east-2"
domain: "{{ name_prefix }}.cldr.example" # The deployment subdomain
realm: "CLDR.DEPLOYMENT" # The Kerberos realm
common_password: "Example776" # For external services
admin_password: "Example776" # For Cloudera-related services
deployment_tags:
  deployment: "{{ name_prefix }}"
  deploy-tool: cloudera-deploy
```

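The 4-7 character rule for `name_prefix` can be verified before a run with a quick shell sketch; the `demo1` value below is just an example, not part of the automation:

```bash
# Hypothetical pre-flight check for the 4-7 character name_prefix rule.
name_prefix="demo1"
len="${#name_prefix}"
if [ "$len" -ge 4 ] && [ "$len" -le 7 ]; then
  echo "name_prefix '${name_prefix}' is valid"
else
  echo "name_prefix must be 4-7 characters, got ${len}" >&2
  exit 1
fi
```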
## Execution

### All-in-One

You can run all of the following steps at once, if you wish:

```bash
ansible-navigator run \
  pre_setup.yml \
  external_setup.yml \
  internal_setup.yml \
  base_setup.yml \
  summary.yml \
  -e @definition.yml \
  -e @config.yml
```

### Pre-setup Playbook

This definition-specific playbook includes tasks such as:

* Infrastructure provisioning
* FreeIPA DNS and KRB services provisioning

Run the following command:

```bash
ansible-navigator run pre_setup.yml \
  -e @definition.yml \
  -e @config.yml
```

Once the pre-setup playbook completes, confirm that:

* You can connect to each node via the inventory - see [Confirm SSH Connectivity](#confirm-ssh-connectivity) for help. You can also run `ansible-navigator run validate_dns_lookups.yml` to check connectivity and DNS.
* You can connect to the FreeIPA UI and log in with the `IPA_USER` and `IPA_PASSWORD` credentials in the configuration file. See [Cluster Access](#cluster-access) for details.

### Platform Playbooks

These playbooks configure and deploy PvC Base, using the infrastructure provisioned by the pre-setup playbook.

Tasks include:

* System/host configuration
* Cloudera Manager server and agent installation and configuration
* Cluster template imports

Run the following:

```bash
# Run the 'external' system configuration
ansible-navigator run external_setup.yml \
  -e @definition.yml \
  -e @config.yml
```

```bash
# Run the 'internal' Cloudera installations and configurations
ansible-navigator run internal_setup.yml \
  -e @definition.yml \
  -e @config.yml
```

```bash
# Run the Cloudera cluster configuration and imports
ansible-navigator run base_setup.yml \
  -e @definition.yml \
  -e @config.yml
```

```bash
# Produce a deployment summary and retrieve the FreeIPA CA certificate
ansible-navigator run summary.yml \
  -e @definition.yml \
  -e @config.yml
```

## Cluster Access

Once the cluster is up, you can access all of the UIs within, including the FreeIPA sidecar, via an SSH tunnel:

```bash
ssh -D 8157 -q -C -N ec2-user@<IP address of jump host>
```

Use a SOCKS5 proxy switcher in your browser (for example, the SwitchyOmega browser extension).

In the SOCKS5 proxy configuration, set _Protocol_ to `SOCKS5`, _Server_ to `localhost`, and _Port_ to `8157`. Ensure the SOCKS5 proxy is active when opening the CDP UI that you wish to access.

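If you use the tunnel often, the same settings can be kept in `~/.ssh/config`; the `cdp-jump` alias below is an arbitrary example, and the jump host IP placeholder stays as in the command above:

```
Host cdp-jump
    HostName <IP address of jump host>
    User ec2-user
    DynamicForward 8157
    Compression yes
```

With this entry in place, `ssh -N -q cdp-jump` opens the same SOCKS5 tunnel.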
> [!CAUTION]
> You will get an SSL warning for the self-signed certificate; this is expected for this particular definition, as the local FreeIPA server has no upstream certificates. However, you can install the FreeIPA CA certificate to remove this warning.

In addition, you can log into the jump host via SSH and reach any of the servers within the cluster. Remember to forward your SSH key!

```bash
ssh -A -C -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ec2-user@<IP address of jump host>
```

> [!NOTE]
> The commands above assume you are using the default AMI image set in the Terraform configuration. If not, adjust the SSH user appropriately.

## Teardown

Run the following:

```bash
ansible-navigator run pre_teardown.yml \
  -e @definition.yml \
  -e @config.yml
```

You can also run the direct Terraform command:

```bash
ansible-navigator exec -- terraform -chdir=tf_proxied_cluster destroy -auto-approve
```

## Troubleshooting

### Confirm SSH Connectivity

Run the following:

```bash
ansible-navigator exec -- ansible -m ansible.builtin.ping -i inventory.yml all
```

This checks that the inventory file is well constructed and that the hosts are reachable via SSH.