Merge pull request #891 from SUSE/develop
 Develop -> Main for 9.0.0 release
yeoldegrove authored Aug 24, 2022
2 parents 32d834e + d7a9284 commit 4ba887b
Showing 123 changed files with 4,192 additions and 1,183 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -121,11 +121,11 @@ The following features are implemented:

| Feature | AWS | Azure | GCP | OpenStack | Libvirt |
| :------ | :---: | :---: | :---: | :-------: | :-----: |
| **SUSE saptune / SAP sapnotes** <br> SUSE's saptune is applied with the correct solution template to configure the systems based on SAP sapnotes recommendations. <br> For additional information see [Tuning Systems with saptune🔗](https://documentation.suse.com/sles-sap/15-SP3/html/SLES-SAP-guide/cha-tune.html). ||||||
| **SUSE saptune / SAP sapnotes** <br> SUSE's saptune is applied with the correct solution template to configure the systems based on SAP sapnotes recommendations. <br> For additional information see [Tuning Systems with saptune🔗](https://documentation.suse.com/sles-sap/15-SP4/html/SLES-SAP-guide/cha-tune.html). ||||||
| **HANA single node** <br> Deployment of HANA on a single node. <br> For additional information see [SAP Hardware Directory for AWS🔗](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=ve:23) ||||||
| **HANA Scale-Up - performance optimized** <br> Deployment of HANA with system replication in a performance optimized setup. <br> For addition information see [SAP HANA System Replication Scale-Up - Performance Optimized Scenario🔗](https://documentation.suse.com/sbp/all/single-html/SLES4SAP-hana-sr-guide-PerfOpt-15/). ||||||
| **HANA Scale-Up - cost optimized** <br> Deployment of HANA with system replication in a cost optimized (additional tenant DB) setup. <br> For additional information see [SAP HANA System Replication Scale-Up - Cost Optimized Scenario🔗](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-sr-guide-costopt-15/). ||||||
| **HANA Scale-Out - performance optimized** <br> Deployment of HANA Scale-Out (multi node) with system replication in a performance optimized setup. <br> For additional information see [SAP HANA System Replication Scale-Out - Performance Optimized Scenario🔗](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-sr-guide-costopt-15/) and [SAP HANA System Replication Scale-Out High Availability in Amazon Web Services🔗](https://documentation.suse.com/sbp/all/html/SLES-SAP-hana-scaleOut-PerfOpt-12-AWS/). | || || |
| **HANA Scale-Out - performance optimized** <br> Deployment of HANA Scale-Out (multi node) with system replication in a performance optimized setup. <br> For additional information see [SAP HANA System Replication Scale-Out - Performance Optimized Scenario🔗](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-sr-guide-costopt-15/) and [SAP HANA System Replication Scale-Out High Availability in Amazon Web Services🔗](https://documentation.suse.com/sbp/all/html/SLES-SAP-hana-scaleOut-PerfOpt-12-AWS/). | || || |
| **HANA Scale-Out - with standby nodes (HANA Host-Auto-Failover)** <br> Deployment of HANA Scale-Out (multi node) with system replication and Host-Auto-Failover via standby nodes. <br> For additional information see [Setting Up Host Auto-Failover🔗](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/879d9dc46bb64ccda028872c86c70afc.html?version=2.0.05) and [Azure: Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server🔗](https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse). | 🚫 || 🚫 |||
| **SAP S/4HANA ENSA 1** <br> Deployment of a SAP S/4HANA in Enqueue Replication (ENSA) 1 scenario. <br> For additional information see [SAP NetWeaver Enqueue Replication 1 High Availability Cluster - Setup Guide for SAP NetWeaver 7.40 and 7.50 🔗](https://documentation.suse.com/sbp/all/html/SAP-nw740-sle15-setupguide/). ||||||
| **SAP S/4HANA ENSA 2** <br> Deployment of a S/4HANA in Enqueue Replication (ENSA) 2 scenario. <br> For additional information see [SAP S/4HANA - Enqueue Replication 2 High Availability Cluster - Setup Guide 🔗](https://documentation.suse.com/sbp/all/html/SAP-S4HA10-setupguide-sle15/index.html). ||||||
97 changes: 94 additions & 3 deletions aws/README.md
@@ -6,8 +6,10 @@
* [Customization](#customization)
* [QA deployment](#qa-deployment)
* [Pillar files configuration](#pillar-files-configuration)
* [Delete secrets and sensitive information after deployment](#delete-secrets-and-sensitive-information-after-deployment)
* [Use already existing network resources](#use-already-existing-network-resources)
* [Autogenerated network addresses](#autogenerated-network-addresses)
* [HANA configuration](#hana-configuration)
* [Advanced Customization](#advanced-customization)
* [Terraform Parallelism](#terraform-parallelism)
* [Remote State](#remote-state)
@@ -147,7 +149,38 @@ For detailed information and deployment options have a look at `terraform.tfvars
## Bastion
A bastion host is not implemented for AWS.
By default, the bastion machine is enabled in AWS (it can be disabled), which will have the unique public IP address of the deployment. Connect using ssh and the selected admin user with:
```
ssh -i $(terraform output -raw ssh_bastion_private_key) -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $(terraform output -raw ssh_user)@$(terraform output -raw bastion_public_ip)
```
To log in to the hana and other instances, use:
```
SSH_USER=$(terraform output -raw ssh_user)
BASTION=$(terraform output -raw bastion_public_ip)
SSH_BASTION_PRIVATE_KEY=$(terraform output -raw ssh_bastion_private_key)
SSH_PRIVATE_KEY=$(terraform output -raw ssh_private_key)
SSH_OPTIONS="-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
IP=$(terraform output -json hana_ip | jq -r '.[0]') # change to match the host you want to connect to
ssh -o ProxyCommand="ssh -W %h:%p ${SSH_USER}@${BASTION} -i ${SSH_BASTION_PRIVATE_KEY} ${SSH_OPTIONS}" -i ${SSH_PRIVATE_KEY} ${SSH_OPTIONS} ${SSH_USER}@${IP}

# OR in one single command

ssh -o ProxyCommand="ssh -W %h:%p $(terraform output -raw ssh_user)@$(terraform output -raw bastion_public_ip) -i $(terraform output -raw ssh_bastion_private_key) -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" -i $(terraform output -raw ssh_private_key) -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no $(terraform output -raw ssh_user)@$(terraform output -json hana_ip | jq -r '.[0]')
```
To disable the bastion use:
```
bastion_enabled = false
```
Destroy the created infrastructure with:
```
terraform destroy
```
# Highlevel description
@@ -182,6 +215,10 @@ The project has been created in order to provide the option to run the deploymen
Besides the `terraform.tfvars` file usage to configure the deployment, a more advanced configuration is available through pillar files customization. Find more information [here](../pillar_examples/README.md).
## Delete secrets and sensitive information after deployment
To delete e.g. `/etc/salt/grains` and other sensitive information from the hosts after a successful deployment, you can set `cleanup_secrets = true` in `terraform.tfvars`. This is disabled by default.
## Use already existing network resources
Already existing network resources (VPC and security groups) can be used by configuring the `terraform.tfvars` file and adjusting some variables. An example of how to use them is available at [terraform.tfvars.example](terraform.tfvars.example).
@@ -212,6 +249,7 @@ Example based on `10.0.0.0/16` address range (VPC address range) and `192.168.1.
| Service | Variable | Addresses | Comments |
| ---- | -------- | --------- | -------- |
| Bastion | - | `10.0.254.254` | |
| iSCSI server | `iscsi_srv_ip` | `10.0.0.4` | |
| Monitoring | `monitoring_srv_ip` | `10.0.0.5` | |
| HANA ips | `hana_ips` | `10.0.1.10`, `10.0.2.11` | |
@@ -222,6 +260,59 @@ Example based on `10.0.0.0/16` address range (VPC address range) and `192.168.1.
| S/4HANA or NetWeaver ips | `netweaver_ips` | `10.0.3.30`, `10.0.4.31`, `10.0.3.32`, `10.0.4.33` | Addresses for the ASCS, ERS, PAS and AAS. The sequence will continue if there are more AAS machines |
| S/4HANA or NetWeaver virtual ips | `netweaver_virtual_ips` | `192.168.1.30`, `192.168.1.31`, `192.168.1.32`, `192.168.1.33` | The last number of the address will match with the regular address |
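The rule that the virtual address reuses the last number of the regular address can be sketched in plain shell (the concrete addresses are taken from the example above; the variable names are illustrative only):

```shell
#!/bin/sh
# Illustrative: derive the virtual IP by reusing the host part of the regular IP.
host_ip="10.0.3.30"
vip_prefix="192.168.1"
last_octet="${host_ip##*.}"        # strip everything up to the last dot -> 30
vip="${vip_prefix}.${last_octet}"  # 192.168.1.30
echo "$vip"
```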
## HANA configuration
### HANA data disks configuration
The whole disk configuration is done by configuring a single variable named `hana_data_disks_configuration`. It encapsulates disk selection, logical volumes and data destinations in a compact form. This section describes all parameters line by line.
```
hana_data_disks_configuration = {
disks_type = "gp2,gp2,gp2,gp2,gp2,gp2,gp2"
disks_size = "128,128,128,128,128,128,128"
# The next variables are used during the provisioning
luns = "0,1#2,3#4#5#6"
names = "data#log#shared#usrsap#backup"
lv_sizes = "100#100#100#100#100"
paths = "/hana/data#/hana/log#/hana/shared#/usr/sap#/hana/backup"
}
```
During deployment, a HANA VM expects a standard set of directories for its data storage: `/hana/data`, `/hana/log`, `/hana/shared`, `/usr/sap` and `/hana/backup`.
A HANA VM typically uses 5 to 10 disks depending on the usage scenario. These are combined into several logical volumes. Finally, the data locations of the standard mount points are assigned to these logical volumes.
The first two parameters, `disks_type` and `disks_size`, are used to provision the resources in Terraform. Each disk uses one entry; every further disk is added by appending another comma-separated entry to each parameter.
In detail, `disks_type` selects the EBS volume type, which determines bandwidth and redundancy options. Possible choices and their costs are listed at [Amazon EBS volume types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html).
The parameter `disks_size` selects the size of each disk in GB.
The disks are counted from left to right beginning with **0**. This number is called the LUN. A Logical Unit Number (LUN) is a SCSI concept for addressing individual logical drives. If you have 5 disks, you count **0,1,2,3,4**.
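Counting entries and deriving the LUN numbers can be sketched in plain shell (the value mirrors the example above; the helper functions are illustrative only, not part of the Terraform module):

```shell
#!/bin/sh
# Illustrative helpers: count the comma-separated disk entries
# and derive the LUN numbers 0..n-1 from them.
disks_size="128,128,128,128,128,128,128"

count_disks() {
  # one line per comma-separated entry, then count the lines
  printf '%s\n' "$1" | tr ',' '\n' | wc -l | tr -d ' '
}

list_luns() {
  n=$(count_disks "$1")
  seq 0 $((n - 1)) | tr '\n' ',' | sed 's/,$//'
}

echo "disks: $(count_disks "$disks_size")"  # 7
echo "LUNs:  $(list_luns "$disks_size")"    # 0,1,2,3,4,5,6
```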
After describing the physical disks, the logical volumes can be specified using the parameters `luns`, `names`, `lv_sizes` and `paths`. The comma combines several values into one, and the `#` sign separates volume groups. Think of the `#` sign as a column separator in a table; then the example looks like this:
| Parameter | VG1 | VG2 | VG3 | VG4 | VG5 |
| --------- | --- | --- | --- | --- | --- |
| **luns** | 0,1 | 2,3 | 4 | 5 | 6 |
| **names** | data | log | shared | usrsap | backup |
| **lv_sizes** | 100 | 100 | 100 | 100 | 100 |
| **paths** | /hana/data | /hana/log | /hana/shared | /usr/sap | /hana/backup |
As you can see, there are 5 volume groups specified. Each volume group has its own name, set with the parameter `names`. The parameter `luns` assigns one LUN or a combination of several LUNs to a volume group. In the example above, `data` uses the disks with LUNs **0** and **1**, while `backup` only uses the disk with LUN **6**. A LUN can only be assigned to one volume group.
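A minimal sketch of how the `#`-separated fields line up per volume group (the helper function is illustrative only, not code from the project):

```shell
#!/bin/sh
# Illustrative: '#' separates volume groups, ',' separates values inside one group.
luns="0,1#2,3#4#5#6"
names="data#log#shared#usrsap#backup"

# Field N (1-based) of a '#'-separated parameter string.
vg_field() {
  printf '%s\n' "$1" | cut -d'#' -f"$2"
}

echo "VG1: name=$(vg_field "$names" 1) luns=$(vg_field "$luns" 1)"  # VG1: name=data luns=0,1
echo "VG5: name=$(vg_field "$names" 5) luns=$(vg_field "$luns" 5)"  # VG5: name=backup luns=6
```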
Using the example above for volume group `data` again shows how a HANA VM is affected. As stated, the `data` volume group uses two physical disks. They are used as physical volumes (i.e. `/dev/sdc` and `/dev/sdd`, corresponding to LUNs **0** and **1**). Both physical volumes belong to the same volume group, named `vg_hana_data`. A logical volume named `lv_hana_data_0` allocates **100%** of this volume group; the logical volume name is generated from the volume group name. The logical volume is mounted at mount point `/hana/data`.
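The naming scheme described above can be reproduced in a few lines of shell; the LVM commands in the comments are only a rough sketch of what the provisioning does, and the device names are assumptions for illustration:

```shell
#!/bin/sh
# Derive the generated volume group and logical volume names from a `names` entry.
name="data"
vg="vg_hana_${name}"      # -> vg_hana_data
lv="lv_hana_${name}_0"    # -> lv_hana_data_0
echo "$vg $lv"

# Conceptually, the provisioning then performs something like this
# (device names /dev/sdc and /dev/sdd are assumptions, not fixed by the module):
#   pvcreate /dev/sdc /dev/sdd
#   vgcreate vg_hana_data /dev/sdc /dev/sdd
#   lvcreate -l 100%VG -n lv_hana_data_0 vg_hana_data
#   mount /dev/vg_hana_data/lv_hana_data_0 /hana/data
```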
It is also possible to deploy several logical volumes to one volume group. For example:
| Parameter | VG1 |
| --------- | --- |
| **luns** | 0,1 |
| **names** | datalog |
| **lv_sizes** | 75,25 |
| **paths** | /hana/data,/hana/log |
If both disks have a size of 512GB, a first logical volume named `lv_hana_datalog_0` with a size of 768GB and a second logical volume named `lv_hana_datalog_1` with a size of 256GB are created. Both logical volumes are in volume group `vg_hana_datalog`. The first is mounted at `/hana/data` and the second at `/hana/log`.
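The size math behind this example, as a quick shell sketch:

```shell
#!/bin/sh
# Two 512 GB disks pooled into one volume group, split 75%/25% between two LVs.
disk_gb=512
vg_gb=$((2 * disk_gb))          # 1024 GB total in the volume group
data_gb=$((vg_gb * 75 / 100))   # 768 GB -> /hana/data
log_gb=$((vg_gb * 25 / 100))    # 256 GB -> /hana/log
echo "data=${data_gb}GB log=${log_gb}GB"
```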
# Advanced Customization
## Terraform Parallelism
@@ -259,7 +350,7 @@ An image owner can also be specified:
hana_os_owner = "amazon"
```
### Upload image to AWS
## Upload image to AWS
Instead of the public OS images referenced in this configuration, the EC2 instances can also be launched using a private OS images as long as it is uploaded to AWS as a Amazon Machine Image (AMI). These images have to be in raw format.
@@ -514,7 +605,7 @@ When the process is completed, the `describe-import-snapshot-tasks` command will
Notice the **completed** status in the above JSON output.
Also notice tne `SnapshotId` which will be used in the next step to register the AMI.
Also notice the `SnapshotId` which will be used in the next step to register the AMI.
Once the snapshot is completely imported, the next step is to register an AMI with the command: