IMPORTANT NOTE: This site is not official Red Hat documentation and is provided for informational purposes only. These guides and the scripts provided may be experimental, proof of concept, or early adoption. Please test and verify before deploying anything to production. Officially supported documentation is available at docs.openshift.com and access.redhat.com.
The tasks below are one-time tasks performed on your AWS account to grant the permissions required to link your Red Hat account to AWS.
Prerequisites
- rosa CLI: refer to the link to install the rosa CLI.
- jq: refer to the link to install jq.
- Terraform: refer to the link to install Terraform.
- aws CLI: refer to the link to install the AWS CLI.
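A quick way to confirm the tools are installed and on your PATH (output format varies by tool and version):
```bash
rosa version
jq --version
terraform version
aws --version
```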
ROSA offline token
The token can be obtained from the link. Log in using your Red Hat account.
Test the token using the commands below:
```bash
export ROSA_OFFLINE_TOKEN=<token value>
rosa login --token=$ROSA_OFFLINE_TOKEN
```
If successful, you should see the following output:
```
I: Logged in as '<Red Hat user>' on 'https://api.openshift.com'
```
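As an additional check, the rosa CLI can print the Red Hat and AWS account details it is currently using:
```bash
rosa whoami
```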
Create the OCM role
The OpenShift Cluster Manager (OCM) role grants the permissions required to install ROSA clusters through OpenShift Cluster Manager and links your Red Hat account to your AWS account.
Run the command below to create the ocm-role:
```bash
rosa create ocm-role --mode auto
```
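To confirm the role was created and linked, recent versions of the rosa CLI can list the OCM roles for your Red Hat organization (availability of this subcommand depends on your rosa CLI version):
```bash
rosa list ocm-roles
```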
Clone the repo
```bash
git clone https://github.com/Manoj2087/rosa-terraform.git
```
Configure AWS credentials
There are several ways to authenticate the Terraform AWS provider against AWS; for more information, refer to the link.
Running `aws configure` is the simplest way for testing. Make sure the user has the IAM policies required to perform the deployment in your AWS account.
Note: for testing you can assign the user the `AdministratorAccess` policy.
Run the command below and provide the AWS Access Key ID, AWS Secret Access Key, and default region name:
```bash
aws configure
```
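To confirm Terraform will pick up working credentials, check which identity the AWS CLI resolves to:
```bash
# Prints the account ID, ARN and user ID of the configured credentials
aws sts get-caller-identity
```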
A ROSA cluster can be deployed in different modes to support high availability, and in different AWS network architectures to meet your network security requirements.
- [Option 1] Single AZ/Multi AZ Public Cluster
To deploy a Single AZ public cluster:
```bash
export ROSA_OFFLINE_TOKEN="<update rosa token value>"
export TRANSIT_GATEWAY_ID=""
cd 02-rosa-cluster
sed -e "s/@@rosa-token@@/$ROSA_OFFLINE_TOKEN/" \
    -e "s/@@multiaz@@/false/" \
    -e "s/@@private-cluster@@/false/" \
    -e "s/@@transitgw-used@@/false/" \
    -e "s/@@transitgw-id@@/$TRANSIT_GATEWAY_ID/" \
    -e "s/@@deploy-workstation@@/false/" \
    variable.auto.tfvars.sample \
    > variable.auto.tfvars
terraform init
terraform plan
terraform apply -auto-approve
cd ..
```
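The sed command generates `variable.auto.tfvars` by filling the `@@...@@` placeholders in `variable.auto.tfvars.sample`. If Terraform complains about variable values, a quick check is to confirm that every placeholder was actually substituted:
```bash
# Any output from grep means a placeholder was NOT substituted and needs fixing
grep -n "@@" variable.auto.tfvars || echo "all placeholders replaced"
```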
To deploy a Multi AZ public cluster:
```bash
export ROSA_OFFLINE_TOKEN="<update rosa token value>"
export TRANSIT_GATEWAY_ID=""
cd 02-rosa-cluster
sed -e "s/@@rosa-token@@/$ROSA_OFFLINE_TOKEN/" \
    -e "s/@@multiaz@@/true/" \
    -e "s/@@private-cluster@@/false/" \
    -e "s/@@transitgw-used@@/false/" \
    -e "s/@@transitgw-id@@/$TRANSIT_GATEWAY_ID/" \
    -e "s/@@deploy-workstation@@/false/" \
    variable.auto.tfvars.sample \
    > variable.auto.tfvars
terraform init
terraform plan
terraform apply -auto-approve
cd ..
```
To access the console of a Single AZ/Multi AZ public cluster:
Get the console URL from the Terraform output:
```bash
cd 02-rosa-cluster
terraform output -json | jq .rosa_console_url.value.url -r
cd ..
```
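This guide does not cover creating cluster login credentials; one quick option for testing, using the standard rosa CLI rather than anything in this repo, is to create a temporary cluster-admin user (replace `<cluster name>` with the cluster name defined in your `variable.auto.tfvars`):
```bash
# Prints a generated username/password and a ready-to-use 'oc login' command
rosa create admin --cluster=<cluster name>
```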
To delete a Single AZ/Multi AZ public cluster:
```bash
cd 02-rosa-cluster
terraform plan -destroy
terraform apply -destroy -auto-approve
cd ..
```
- [Option 2] Single AZ/Multi AZ Private Cluster (Private link)
To deploy a Single AZ private cluster (Private link):
```bash
export ROSA_OFFLINE_TOKEN="<update rosa token value>"
export TRANSIT_GATEWAY_ID=""
cd 02-rosa-cluster
sed -e "s/@@rosa-token@@/$ROSA_OFFLINE_TOKEN/" \
    -e "s/@@multiaz@@/false/" \
    -e "s/@@private-cluster@@/true/" \
    -e "s/@@transitgw-used@@/false/" \
    -e "s/@@transitgw-id@@/$TRANSIT_GATEWAY_ID/" \
    -e "s/@@deploy-workstation@@/true/" \
    variable.auto.tfvars.sample \
    > variable.auto.tfvars
terraform init
terraform plan
terraform apply -auto-approve
cd ..
```
To deploy a Multi AZ private cluster (Private link):
```bash
export ROSA_OFFLINE_TOKEN="<update rosa token value>"
export TRANSIT_GATEWAY_ID=""
cd 02-rosa-cluster
sed -e "s/@@rosa-token@@/$ROSA_OFFLINE_TOKEN/" \
    -e "s/@@multiaz@@/true/" \
    -e "s/@@private-cluster@@/true/" \
    -e "s/@@transitgw-used@@/false/" \
    -e "s/@@transitgw-id@@/$TRANSIT_GATEWAY_ID/" \
    -e "s/@@deploy-workstation@@/true/" \
    variable.auto.tfvars.sample \
    > variable.auto.tfvars
terraform init
terraform plan
terraform apply -auto-approve
cd ..
```
To access the console of a Single AZ/Multi AZ private cluster (Private link):
Get the console URL from the Terraform output:
```bash
cd 02-rosa-cluster
terraform output -json | jq .rosa_console_url.value.url -r
cd ..
```
Note: The ROSA API and console are only accessible internally. By setting the `DEPLOY_WORKSTATION` variable to true in the `variable.auto.tfvars` file, the deployment also creates a Linux workstation (to use the `oc` CLI) and a Windows workstation (to access the console) in the ROSA VPC private subnet.
To delete a Single AZ/Multi AZ private cluster (Private link):
```bash
cd 02-rosa-cluster
terraform plan -destroy
terraform apply -destroy -auto-approve
cd ..
```
- [Option 3] Single AZ/Multi AZ Private Cluster (Private link) with Egress VPC (with AWS Transit Gateway and AWS Network Firewall)
To deploy a Single AZ private cluster (Private link) with Egress VPC (with AWS Transit Gateway and AWS Network Firewall):
Deploy the Egress VPC
Note: Skip this Egress VPC deployment step and continue with "Deploy the cluster" if your environment already has an Egress VPC with a Transit Gateway set up.
```bash
cd 01-ingress-network
terraform init
terraform plan
terraform apply -auto-approve
terraform output -json | jq .transit_gateway_id.value -r
cd ..
```
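The last command prints the Transit Gateway ID needed by the cluster deployment below; if you prefer, it can be captured straight into the environment variable used in the next step:
```bash
cd 01-ingress-network
export TRANSIT_GATEWAY_ID=$(terraform output -json | jq -r .transit_gateway_id.value)
echo "$TRANSIT_GATEWAY_ID"
cd ..
```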
Deploy the cluster
```bash
export ROSA_OFFLINE_TOKEN="<update rosa token value>"
export TRANSIT_GATEWAY_ID="<update Transit GW value>"
cd 02-rosa-cluster
sed -e "s/@@rosa-token@@/$ROSA_OFFLINE_TOKEN/" \
    -e "s/@@multiaz@@/false/" \
    -e "s/@@private-cluster@@/true/" \
    -e "s/@@transitgw-used@@/true/" \
    -e "s/@@transitgw-id@@/$TRANSIT_GATEWAY_ID/" \
    -e "s/@@deploy-workstation@@/true/" \
    variable.auto.tfvars.sample \
    > variable.auto.tfvars
terraform init
terraform plan
terraform apply -auto-approve
cd ..
```
To deploy a Multi AZ private cluster (Private link) with Egress VPC (with AWS Transit Gateway and AWS Network Firewall):
Deploy the Egress VPC
Note: Skip this Egress VPC deployment step and continue with "Deploy the cluster" if your environment already has an Egress VPC with a Transit Gateway set up.
```bash
cd 01-ingress-network
terraform init
terraform plan
terraform apply -auto-approve
terraform output -json | jq .transit_gateway_id.value -r
cd ..
```
Deploy the cluster
```bash
export ROSA_OFFLINE_TOKEN="<update rosa token value>"
export TRANSIT_GATEWAY_ID="<update Transit GW value>"
cd 02-rosa-cluster
sed -e "s/@@rosa-token@@/$ROSA_OFFLINE_TOKEN/" \
    -e "s/@@multiaz@@/true/" \
    -e "s/@@private-cluster@@/true/" \
    -e "s/@@transitgw-used@@/true/" \
    -e "s/@@transitgw-id@@/$TRANSIT_GATEWAY_ID/" \
    -e "s/@@deploy-workstation@@/true/" \
    variable.auto.tfvars.sample \
    > variable.auto.tfvars
terraform init
terraform plan
terraform apply -auto-approve
cd ..
```
To access the console of a Single AZ/Multi AZ private cluster (Private link) with Egress VPC (with AWS Transit Gateway and AWS Network Firewall):
Get the console URL from the Terraform output:
```bash
cd 02-rosa-cluster
terraform output -json | jq .rosa_console_url.value.url -r
cd ..
```
Note: The ROSA API and console are only accessible internally. By setting the `DEPLOY_WORKSTATION` variable to true in the `variable.auto.tfvars` file, the deployment also creates a Linux workstation (to use the `oc` CLI) and a Windows workstation (to access the console) in the ROSA VPC private subnet.
To delete a Single AZ/Multi AZ private cluster (Private link) with Egress VPC (with AWS Transit Gateway and AWS Network Firewall):
Delete the cluster
```bash
cd 02-rosa-cluster
terraform plan -destroy
terraform apply -destroy -auto-approve
cd ..
```
Delete the Egress VPC
Note: Skip this Egress VPC deletion step if your environment already had an Egress VPC with a Transit Gateway set up (i.e. it was not created by this deployment).
```bash
cd 01-ingress-network
terraform plan -destroy
terraform apply -destroy -auto-approve
cd ..
```
Windows workstation (console access for private clusters)
If you deploy the cluster as a private cluster, you need a workstation with a browser inside your private network in order to access the ROSA console.
To facilitate this, if `DEPLOY_WORKSTATION` is set to true in the `02-rosa-cluster/variable.auto.tfvars` file, the Terraform deployment also deploys a Windows workstation.
The private Windows workstation is configured with an RDP-enabled user, `rdp-user`. The password for this user is stored in AWS Secrets Manager under the name `<cluster-prefix>-<env>-<region-short>-workstation-windows-rdp-user-<random-number>`.
You can then use AWS Systems Manager Fleet Manager - Remote Desktop with the `User credentials` authentication method and the user name and password retrieved above. For more information, refer to the link.
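As a sketch of retrieving that password with the AWS CLI (the full secret name is generated at deploy time, so look it up first; depending on how the repo stores it, the SecretString may be the raw password or a small JSON document):
```bash
# Find the secret whose name matches the pattern described above
aws secretsmanager list-secrets \
  --query "SecretList[?contains(Name, 'workstation-windows-rdp-user')].Name" --output text

# Retrieve the password, substituting the full secret name found above
aws secretsmanager get-secret-value \
  --secret-id <full secret name> \
  --query SecretString --output text
```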
Linux workstation (API and oc CLI access for private clusters)
If you deploy the cluster as a private cluster, you need a Linux workstation inside your private network in order to access the ROSA API or use the `oc` CLI.
To facilitate this, if `DEPLOY_WORKSTATION` is set to true in the `02-rosa-cluster/variable.auto.tfvars` file, the Terraform deployment also deploys a Linux workstation.
You can use AWS Systems Manager Fleet Manager to start a terminal connection.
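If you prefer the CLI over the Fleet Manager console, the same connection can be opened with Session Manager (assumes the Session Manager plugin is installed locally and that the workstation instance's Name tag contains "workstation", which may differ in this repo):
```bash
# Find the workstation instance ID
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=*workstation*" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId,Tags[?Key=='Name']|[0].Value]" --output text

# Open an interactive shell on the Linux workstation
aws ssm start-session --target <instance-id>
```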
Cluster create/delete logs
The error logs for the creation and deletion of the ROSA cluster are written to the locations below:
```
$HOME/.terraform-rosa/logs/create-rosa-cluster
$HOME/.terraform-rosa/logs/delete-rosa-cluster
```
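Depending on how the scripts write them, these paths may be single files or directories of per-run files; either way, a quick way to inspect progress during an apply is:
```bash
# See what has been written so far
ls -lR $HOME/.terraform-rosa/logs/

# Print the tail of the create log, whether the path is a file or a directory of files
tail -n 50 $HOME/.terraform-rosa/logs/create-rosa-cluster 2>/dev/null \
  || tail -n 50 $HOME/.terraform-rosa/logs/create-rosa-cluster/*
```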
Skipping the AWS Network Firewall
If you do not require the AWS Network Firewall (for example, because you want to configure your own firewall device), you can skip its deployment by setting the value below to false in `01-ingress-network/variables.tf`:
```hcl
variable "DEPLOY_FIREWALL" {
  type    = bool
  default = false
}
```
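Alternatively, rather than editing `variables.tf`, the same variable can be overridden per run on the command line (standard Terraform behaviour, not specific to this repo):
```bash
cd 01-ingress-network
terraform apply -auto-approve -var="DEPLOY_FIREWALL=false"
cd ..
```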
Debugging cluster create/delete failures
If there is an issue with the creation or deletion, set `debug = true` to get detailed errors.
Update `main.tf`:
resource "shell_script" "rosa_cluster" {
lifecycle_commands {
create = templatefile("${path.module}/script-templates/create-cluster.tftpl",
{
..
..
debug = true
..
..
}
)
read = templatefile("${path.module}/script-templates/read-cluster.tftpl",
{
..
..
debug = true
..
..
}
)
# update = file("${path.module}/scripts/update.sh")
delete = templatefile("${path.module}/script-templates/delete-cluster.tftpl",
{
..
..
debug = true
..
..
}
)
}
environment = {}
sensitive_environment = {
ROSA_OFFLINE_ACCESS_TOKEN = var.ROSA_TOKEN
}
interpreter = ["/bin/bash", "-c"]
}
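After setting `debug = true`, re-run the failing `terraform apply` (or destroy) from the cluster module directory (assumed here to be `02-rosa-cluster`). Independently of the repo's debug flag, Terraform's own provider logging can also be raised with the standard `TF_LOG` environment variable:
```bash
cd 02-rosa-cluster
TF_LOG=DEBUG terraform apply -auto-approve 2> terraform-debug.log
cd ..
```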
- Egress VPC AWS Network Firewall support