Merge pull request #476 from SUSE/develop
Develop
arbulu89 authored May 21, 2020
2 parents d36bf5a + c24d68f commit 2445cc8
Showing 177 changed files with 9,110 additions and 4,666 deletions.
44 changes: 44 additions & 0 deletions .github/hana-netweaver-tf-only.tfvars
@@ -0,0 +1,44 @@
# the following 2 vars are acquired via ENV
# qemu_uri =
# source_image =

hana_inst_media = "10.162.32.134:/sapdata/sap_inst_media/51053787"
iprange = "192.168.25.0/24"

storage_pool = "terraform"

# Enable pre deployment to automatically copy the pillar files and create cluster ssh keys
pre_deployment = true

# For iscsi, a new machine hosting an iscsi service will be deployed
shared_storage_type = "iscsi"
ha_sap_deployment_repo = "https://download.opensuse.org/repositories/network:/ha-clustering:/sap-deployments:/devel"

monitoring_enabled = true

# don't use salt for this test
provisioner = ""

# Netweaver variables

# Enable/disable Netweaver deployment
netweaver_enabled = true

# NFS share with netweaver installation folders
netweaver_inst_media = "10.162.32.134:/sapdata/sap_inst_media"
netweaver_swpm_folder = "SWPM_10_SP26_6"

# Install NetWeaver
netweaver_sapexe_folder = "kernel_nw75_sar"
netweaver_additional_dvds = ["51050829_3", "51053787"]


# DRBD variables

# Enable the DRBD cluster for nfs
drbd_enabled = true

# Shared storage type of the DRBD cluster
drbd_shared_storage_type = "iscsi"

devel_mode = false
19 changes: 19 additions & 0 deletions .github/workflows/tf-validation.yml
@@ -0,0 +1,19 @@
# github-actions workflow
# this test will just run terraform without salt
name: e2e tests

on: [pull_request]

jobs:
  terraform-sap-deployment:
    runs-on: self-hosted

    steps:
      - uses: actions/checkout@v2

      - name: terraform apply
        run: /tmp/terraform-apply.sh

      - name: terraform destroy
        if: ${{ always() }}
        run: /tmp/terraform-destroy.sh
4 changes: 3 additions & 1 deletion .gitignore
@@ -1,6 +1,6 @@
**/.terraform
**/terraform.tfstate*
**/terraform.tfvars
**/terraform*.tfvars
azure/terraform/provision/node0_id_rsa
azure/terraform/provision/node0_id_rsa.pub
azure/terraform/provision/node1_id_rsa
@@ -9,8 +9,10 @@ azure/terraform/provision/node1_id_rsa.pub
salt/hana_node/files/sshkeys
salt/hana_node/files/pillar/*
salt/drbd_node/files/pillar/*
salt/netweaver_node/files/pillar/*
!salt/hana_node/files/pillar/top.sls
!salt/drbd_node/files/pillar/top.sls
!salt/netweaver_node/files/pillar/top.sls

# Dev specific
**/*.swp
26 changes: 25 additions & 1 deletion README.md
@@ -29,7 +29,26 @@ For fine tuning refer to variable specification.

- [templates](doc/deployment-templates.md)

## Rationale
## Design

This project is based on the use of [terraform](https://www.terraform.io/) and [salt](https://www.saltstack.com/).

Components:

- **terraform**: Terraform is used to create the required infrastructure in the specified provider. The code is divided into different terraform modules to keep it modular and more maintainable.
- **salt**: Salt configures all the machines created by terraform, based on the provided pillar files, which give the option to customize the deployment.

## Components

The project can deploy and configure the following components (they can be enabled/disabled through configuration options; a sketch of the relevant variables follows this list):

- SAP HANA environment: The HANA deployment is configurable. It can be deployed as a single HANA database, as a dual configuration with system replication, and an HA cluster can be set up on top of that.
- ISCSI server: The ISCSI server provides network-based storage, mostly used by the SBD fencing mechanism.
- Monitoring services server: The monitoring solution is based on [prometheus](https://prometheus.io) and [grafana](https://grafana.com/) and provides informative and customizable dashboards to users and administrators.
- DRBD cluster: The DRBD cluster is used to host an HA NFS server on top of it, which serves the NetWeaver shared files.
- SAP NetWeaver environment: A SAP NetWeaver environment with ASCS, ERS, PAS and AAS instances can be deployed, using the HANA database as storage.
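
As a rough sketch, these components are toggled through terraform variables; the names below appear in the `.github/hana-netweaver-tf-only.tfvars` example earlier in this diff, and the values are illustrative only:

```
# Illustrative fragment: enable/disable optional components (example values)
monitoring_enabled = true    # prometheus/grafana monitoring server
drbd_enabled       = true    # DRBD cluster backing the HA NFS share
netweaver_enabled  = true    # SAP NetWeaver ASCS/ERS/PAS/AAS instances
pre_deployment     = true    # copy pillar files and create cluster ssh keys
```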

## Project structure

This project is organized in folders containing the Terraform configuration files per Public or Private Cloud providers, each also containing documentation relevant to the use of the configuration files and to the cloud provider itself.

@@ -44,3 +63,8 @@ These are links to find certified systems for each provider:
- [SAP Certified IaaS Platforms for GCP](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Google%20Cloud%20Platform)

- [SAP Certified IaaS Platforms for Azure](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure) (Be careful with Azure, **clustering** means a scale-out scenario)


## Troubleshooting

In case you run into any issue, take a look at the [troubleshooting guide](doc/troubleshooting.md).
112 changes: 32 additions & 80 deletions aws/README.md
@@ -106,8 +106,8 @@ Here is how your user or group should look:

```
terraform init
terraform workspace new my-execution # optional
terraform workspace select my-execution # optional
terraform workspace new myexecution # optional
terraform workspace select myexecution # optional
terraform plan
terraform apply
```
@@ -134,79 +134,23 @@ The infrastructure deployed includes:

By default it creates 3 instances in AWS: one for support services (mainly iSCSI as most other services - DHCP, NTP, etc - are provided by Amazon) and 2 cluster nodes, but this can be changed to deploy more cluster nodes as needed.

## Provisioning by Salt
By default, the cluster and HANA installation is done using Salt Formulas in foreground.
To customize this provisioning, you have to create the pillar files (cluster.sls and hana.sls) according to the examples in the [pillar_examples](../pillar_examples) folder (more information in the dedicated [README](../pillar_examples/README.md))
# Specifications

# Specification:
In order to deploy the environment, different configurations are available through the terraform variables. These variables can be configured using a `terraform.tfvars` file. An example is available in [terraform.tfvars.example](./terraform.tfvars.example). To find all the available variables, check the [variables.tf](./variables.tf) file.
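
As a minimal sketch (the variable names are taken from examples elsewhere in this document, the values are placeholders, and `variables.tf` remains the authoritative reference), a `terraform.tfvars` fragment might look like:

```
# Minimal illustrative terraform.tfvars fragment (placeholder values)
aws_region             = "eu-central-1"
instancetype           = "m4.2xlarge"      # instance type of the cluster nodes
hana_count             = 2                 # number of HANA cluster nodes
public_key_location    = "~/.ssh/id_rsa.pub"
private_key_location   = "~/.ssh/id_rsa"
ha_sap_deployment_repo = "https://download.opensuse.org/repositories/network:/ha-clustering:/sap-deployments:/devel"
```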

These are the relevant files and what each provides:
## QA deployment

- [provider.tf](provider.tf): definition of the providers being used in the terraform configuration. Mainly `aws` and `template`.
The project provides the option to run the deployment in a `Test` or `QA` mode. This mode only enables packages coming directly from SLE channels, so no other packages will be used. Find more information [here](../doc/qa.md).

- [variables.tf](variables.tf): definition of variables used in the configuration. These include definition of the AMIs in use, number and type of instances, AWS region, etc.
## Pillar files configuration

- [keys.tf](keys.tf): definition of key to include in the instances to allow connection via SSH.
Besides using the `terraform.tfvars` file to configure the deployment, more advanced configuration is available through pillar file customization. Find more information [here](../pillar_examples/README.md).

- [network.tf](network.tf): definition of network resources (VPC, route table, Internet Gateway and security group) used by the infrastructure.
## Use already existing network resources

- [instances.tf](instances.tf): definition of the EC2 instances to create on deployment.
Already existing network resources (VPC and security groups) can be used by configuring the `terraform.tfvars` file and adjusting some variables. An example of how to use them is available at [terraform.tfvars.example](terraform.tfvars.example).
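
A hypothetical sketch of what such a fragment could look like (the variable names `vpc_id` and `security_group_id` are assumptions here; check [terraform.tfvars.example](terraform.tfvars.example) and [variables.tf](variables.tf) for the real names):

```
# Hypothetical fragment for reusing existing network resources (names are assumptions)
vpc_id            = "vpc-0123456789abcdef0"   # must have an internet gateway attached
security_group_id = "sg-0123456789abcdef0"
```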

- [salt_provisioner.tf](salt_provisioner.tf): salt provisioning resources.

- [salt_provisioner_script.tpl](../salt/salt_provisioner_script.tpl): template code for the initialization script for the servers. This will add the salt-minion if needed and execute the SALT deployment.

- [outputs.tf](outputs.tf): definition of outputs of the terraform configuration.

- [remote-state.sample](remote-state.sample): sample file for the definition of the backend to [store the Terraform state file remotely](create_remote_state).

- [terraform.tfvars.example](terraform.tfvars.example): file containing initialization values for variables used throughout the configuration. **Rename/Duplicate this file to terraform.tfvars and edit the content with your values before use**.

#### Variables

In [terraform.tfvars](terraform.tfvars.example) there are a number of variables that control what is deployed. Some of these variables are:

* **instancetype**: instance type to use for the cluster nodes; basically the "size" (number of vCPUS and memory) of the instance. Defaults to `t2.micro`.
* **hana_data_disk_type**: disk type to use for HANA (gp2 by default).
* **ninstances**: number of cluster nodes to deploy. Defaults to 2.
* **aws_region**: AWS region where to deploy the configuration.
* **public_key_location**: local path to the public SSH key associated with the private key file. This public key is configured in the file $HOME/.ssh/authorized_keys of the administration user in the remote virtual machines.
* **private_key_location**: local path to the private SSH key associated to the public key from the previous line.
* **aws_access_key_id**: AWS access key id.
* **aws_secret_access_key**: AWS secret access key.
* **aws_credentials**: path to the `aws-cli` credentials file. This is required to configure `aws-cli` in the instances so that they can access the S3 bucket containing the HANA installation master.
* **name**: hostname for the hana node without the domain part.
* **init_type**: initialization script parameter that controls what is deployed in the cluster nodes. Valid values are `all` (installs HANA and configures cluster), `skip-hana` (does not install HANA, but configures cluster) and `skip-cluster` (installs HANA, but does not configure cluster). Defaults to `all`.
* **hana_inst_master**: path to the `S3 Bucket` containing the HANA installation master.
* **hana_inst_folder**: path where HANA installation master will be downloaded from `S3 Bucket`.
* **hana_disk_device**: device used by node where HANA will be installed.
* **hana_fstype**: filesystem type used for HANA installation (xfs by default).
* **iscsidev**: device used by the iscsi server.
* **iscsi_disks**: number of partitions attached to the iscsi server.
* **cluster_ssh_pub**: SSH public key name (must match with the key copied in sshkeys folder)
* **cluster_ssh_key**: SSH private key name (must match with the key copied in sshkeys folder)
* **ha_sap_deployment_repo**: Repository with HA and Salt formula packages. The latest RPM packages can be found at [https://download.opensuse.org/repositories/network:/ha-clustering:/Factory/{YOUR OS VERSION}](https://download.opensuse.org/repositories/network:/ha-clustering:/Factory/)
* **scenario_type**: SAP HANA scenario type. Available options: `performance-optimized` and `cost-optimized`.
* **provisioner**: select the desired provisioner to configure the nodes. Salt is used by default: [salt](../salt). Let it empty to disable the provisioning part.
* **background**: run the provisioning process in the background, letting the terraform execution finish without waiting for it.
* **reg_code**: registration code for the installed base product (Ex.: SLES for SAP). This parameter is optional. If provided, the system will be registered against the SUSE Customer Center.
* **reg_email**: email to be associated with the system registration. This parameter is optional.
* **reg_additional_modules**: additional optional modules and extensions to be registered (Ex.: Containers Module, HA module, Live Patching, etc). The variable is a key-value map, where the key is the _module name_ and the value is the _registration code_. If the _registration code_ is not needed, set an empty string as value (a tfvars sketch is shown after this list). The module format must follow SUSEConnect convention:
- `<module_name>/<product_version>/<architecture>`
- *Example:* Suggested modules for SLES for SAP 15

sle-module-basesystem/15/x86_64
sle-module-desktop-applications/15/x86_64
sle-module-server-applications/15/x86_64
sle-ha/15/x86_64 (use the same regcode as SLES for SAP)
sle-module-sap-applications/15/x86_64

For more information about registration, check the ["Registering SUSE Linux Enterprise and Managing Modules/Extensions"](https://www.suse.com/documentation/sles-15/book_sle_deployment/data/cha_register_sle.html) guide.

* **additional_packages**: Additional packages to add to the guest machines.
* **hosts_ips**: Each cluster node's IP address (sequential order). Mandatory to have a generic `/etc/hosts` file.
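
A sketch of how the `reg_additional_modules` map could look in `terraform.tfvars` (module names taken from the example above; the registration code is a placeholder):

```
# Illustrative reg_additional_modules map (placeholder registration code)
reg_additional_modules = {
  "sle-module-basesystem/15/x86_64"           = ""
  "sle-module-desktop-applications/15/x86_64" = ""
  "sle-module-server-applications/15/x86_64"  = ""
  "sle-ha/15/x86_64"                          = "<SLES for SAP regcode>"
  "sle-module-sap-applications/15/x86_64"     = ""
}
```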

[Specific QA variables](../doc/qa.md#specific-qa-variables)
**Important: In order to use the deployment with an already existing VPC, it must have an internet gateway attached.**

### Relevant Details

Expand All @@ -220,8 +164,6 @@ There are some fixed values used throughout the terraform configuration:
- The cluster nodes have a second disk volume that is used for the HANA installation.

# Advanced Usage


# notes:

**Important**: If you want to use remote terraform states, first follow the [procedure to create a remote terraform state](create_remote_state).
@@ -235,11 +177,21 @@ If the use of a private/custom image is required (for example, to perform the Bu
To define the custom AMI in terraform, you should use the [terraform.tfvars](terraform.tfvars.example) file:

```
# Custom AMI for nodes
sles4sap = {
"eu-central-1" = "ami-xxxxxxxxxxxxxxxxx"
}
hana_os_image = "ami-xxxxxxxxxxxxxxxxx"
```

You could also use an image available in the AWS store, in human readable form:

```
hana_os_image = "suse-sles-sap-15-sp1-byos"
```

An image owner can also be specified:
```
hana_os_owner = "amazon"
```

The OS for each module can be configured independently.
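
The per-module image variables presumably follow the same naming pattern as `hana_os_image`; the names below are assumptions and should be checked against [variables.tf](variables.tf):

```
# Hypothetical per-module image selection (variable names are assumptions)
hana_os_image       = "suse-sles-sap-15-sp1-byos"
iscsi_os_image      = "suse-sles-sap-15-sp1-byos"
monitoring_os_image = "suse-sles-sap-15-sp1-byos"
```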


After an `apply` command, terraform will deploy the infrastructure to the cloud and output the public IP addresses and names of the iSCSI server and the cluster nodes. Connect using `ssh` as the user `ec2-user`, for example:
@@ -267,15 +219,15 @@ terraform apply -var aws_region=eu-central-1 -var instancetype=m4.large

Will deploy 2 `m4.large` instances in Frankfurt, instead of the `m4.2xlarge` default ones. The iSCSI server is always deployed with the `t2.micro` type instance.

Finally, the number of cluster nodes can be modified with the option `-var ninstances`. For example:
Finally, the number of cluster nodes can be modified with the option `-var hana_count`. For example:

```
terraform apply -var aws_region=eu-central-1 -var ninstances=4
terraform apply -var aws_region=eu-central-1 -var hana_count=4
```

Will deploy in Frankfurt 1 `t2.micro` instance as an iSCSI server, and 4 `m4.2xlarge` instances as cluster nodes.

All this means that basically the default command `terraform apply` can also be written as `terraform apply -var instancetype=m4.2xlarge -var ninstances=2`.
All this means that basically the default command `terraform apply` can also be written as `terraform apply -var instancetype=m4.2xlarge -var hana_count=2`.



@@ -552,8 +504,8 @@ Examples of the JSON files used in this document have been added to this repo.

## Logs

This configuration is leaving logs in /tmp folder in each of the instances. Connect as `ssh ec2-user@<remote_ip>`, then do a `sudo su -` and check the following files:
This configuration leaves logs in the `/var/log` folder on each of the instances. Connect as `ssh ec2-user@<remote_ip>`, then do a `sudo su -` and check the following files:

* **/tmp/provisioning.log**: This is the global log file, inside it you will find the logs for user_data, salt-deployment and salt-formula.
* **/tmp/salt-deployment.log**: Check here the debug log for the salt-deployment if you need to troubleshoot something.
* **/tmp/salt-formula.log**: Same as above but for salt-formula.
* **/var/log/provisioning.log**: This is the global log file; inside it you will find the logs for user_data, salt-predeployment and salt-deployment.
* **/var/log/salt-predeployment.log**: Check here the debug log for the salt pre-deployment execution if you need to troubleshoot something.
* **/var/log/salt-deployment.log**: Same as above, but for the final SAP/HA/DRBD deployment salt execution logs.