Adding support for GCE instance resource policies (SCP-5918) #188

Merged: 7 commits from jb-scp-compute-instance-policy into master, Mar 5, 2025

Conversation

@bistline (Contributor) commented Mar 3, 2025

This update adds support for attaching a google_compute_resource_policy to a GCE instance via the docker-instance-data-disk module. Currently only an instance schedule policy can be specified, but other policy types could be added to the Terraform module later. This is controlled via a new flag, enable_resource_policy, which is off by default.
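
For illustration, a consumer of the module might opt in like this; a minimal sketch, assuming a hypothetical module path and surrounding arguments, and using the boolean form of the flag adopted during review:

module "mongodb_instances" {
  source = "../modules/docker-instance-data-disk" # hypothetical path

  project                = var.project
  instance_name          = "singlecell-mongo"
  enable_resource_policy = true # attaches the schedule policy; off by default
}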

@bistline requested review from em-may and eweitz March 3, 2025 20:37
@em-may (Contributor) left a comment

Some minor changes requested. Also, please test these changes and add a section to the description documenting how the changes were tested and what the outcome was. Thanks!

project = var.project
region = var.instance_region
name = "${var.instance_name}-resource-policy"
count = var.enable_resource_policy == "1" ? 1 : 0

TF supports ternary expressions on booleans directly, e.g.

Suggested change
count = var.enable_resource_policy == "1" ? 1 : 0
count = var.enable_resource_policy ? 1 : 0

# control adding resource policy to instances
variable "enable_resource_policy" {
default = "0"

See above. Also, please add a description to this variable; even though it's somewhat self-documenting, that's helpful for people who are less experienced with TF.

Suggested change
default = "0"
default = false
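
Combining both suggestions, the variable might end up along these lines (the explicit type and description wording are illustrative):

# control adding resource policy to instances
variable "enable_resource_policy" {
  description = "Whether to create a google_compute_resource_policy and attach it to the instance"
  type        = bool
  default     = false
}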

@@ -62,6 +77,9 @@ resource "google_compute_instance" "instance" {
device_name = var.instance_data_disk_name
}

# instance resource policies
resource_policies = var.enable_resource_policy == "1" ? [ google_compute_resource_policy.resource-policy.self_link ] : null

See above.

Suggested change
resource_policies = var.enable_resource_policy == "1" ? [ google_compute_resource_policy.resource-policy.self_link ] : null
resource_policies = var.enable_resource_policy ? [ google_compute_resource_policy.resource-policy.self_link ] : null

@eweitz (Member) left a comment

Nice optimization for cost savings!

Suggested changes:

  1. Adjust hour offsets to be effectively Eastern Time, while accounting for the machine's local Central timezone. I assume that means start at hour 8 not 9, and stop at 16 not 17.

  2. Account for daylight vs. standard time. This will ensure staging and development environments don't shut down at 4 PM Eastern Time in a few weeks (and seamlessly start up before 10 AM ET in November).

  3. Keep machines down as is on weekdays, but change to be down all day on Saturday and Sunday.

Suggestions 1 and 2 seem blocking if not yet handled, 3 seems non-blocking.

@bistline (Contributor, Author) commented Mar 4, 2025

> Nice optimization for cost savings!
>
> Suggested changes:
>
>   1. Adjust hour offsets to be effectively Eastern Time, while accounting for the machine's local Central timezone. I assume that means start at hour 8 not 9, and stop at 16 not 17.
>   2. Account for daylight vs. standard time. This will ensure staging and development environments don't shut down at 4 PM Eastern Time in a few weeks (and seamlessly start up before 10 AM ET in November).
>   3. Keep machines down as is on weekdays, but change to be down all day on Saturday and Sunday.
>
> Suggestions 1 and 2 seem blocking if not yet handled, 3 seems non-blocking.

I plan to do all of these in our SCP-specific Terraform code in a forthcoming PR. These default values are really just placeholders.

Update: Changes represented here
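
A minimal sketch of what that override might look like, reusing the instance_schedule_policy shape from the plan output below (the exact hours are illustrative):

resource "google_compute_resource_policy" "resource-policy" {
  name    = "${var.instance_name}-resource-policy"
  project = var.project
  region  = var.instance_region

  instance_schedule_policy {
    # An IANA zone like US/Eastern tracks daylight vs. standard time
    # automatically, covering suggestion 2.
    time_zone = "US/Eastern"

    vm_start_schedule {
      schedule = "0 8 * * 1-5" # start 8 AM ET, weekdays only (suggestion 1)
    }

    vm_stop_schedule {
      schedule = "0 18 * * 1-5" # stop 6 PM ET; with no weekend start entry,
    }                           # machines stay down Sat/Sun (suggestion 3)
  }
}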

@bistline requested a review from em-may March 4, 2025 20:14
@bistline (Contributor, Author) commented Mar 4, 2025

Testing over in https://github.com/broadinstitute/terraform-ap-deployments/pull/1820 yields the following results:

Developer MongoDB instance via atlantis plan -p scp-bistline

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place

Terraform will perform the following actions:

  # module.scp.module.mongodb.module.instances.google_compute_instance.instance[0] will be updated in-place
~ resource "google_compute_instance" "instance" {
        allow_stopping_for_update = true
        can_ip_forward            = false
        cpu_platform              = "Intel Haswell"
        creation_timestamp        = "2021-02-19T12:35:21.324-08:00"
        current_status            = "RUNNING"
        deletion_protection       = false
        effective_labels          = {
            "ansible_branch"  = "master"
            "ansible_project" = "singlecell"
            "app"             = "singlecell-mongo"
            "owner"           = "bistline"
            "role"            = "db"
        }
        enable_display            = false
        id                        = "projects/broad-singlecellportal-bistlin/zones/us-central1-a/instances/singlecell-mongo-02"
        instance_id               = "3007657788317736935"
        label_fingerprint         = "qevm-MDnwJI="
        labels                    = {
            "ansible_branch"  = "master"
            "ansible_project" = "singlecell"
            "app"             = "singlecell-mongo"
            "owner"           = "bistline"
            "role"            = "db"
        }
        machine_type              = "n1-highmem-2"
        metadata                  = {}
        metadata_fingerprint      = "JbdmiFfjWmk="
        metadata_startup_script   = <<~EOT
            #!/bin/bash
            
            # Only run this script once
            if [ -f /etc/sysconfig/gce-metadata-run ];
                then
                exit 0
            fi
            
            #stop and disable firewalld
            systemctl stop firewalld.service
            systemctl disable firewalld.service
            
            #install pip and ansible
            yum install epel-release -y
            yum update
            yum install python36 python36-pip git jq python-setuptools -y
            python3.6 -m pip install --upgrade pip
            python3.6 -m pip install virtualenv
            virtualenv /usr/local/bin/ansible
            source /usr/local/bin/ansible/bin/activate
            python3.6 -m pip install ansible==2.7.8
            python3.6 -m pip install hvac 
            python3.6 -m pip install ansible_merge_vars
            
            # convert labels to env vars
            gcloud compute instances list --filter="name:$(hostname)" --format 'value(labels)' | tr ';' '\n' | while read var ; do key="${var%=*}"; value="${var##*=}" ; key=$(echo $key | tr '[a-z]' '[A-Z]') ; echo "export $key=\"$value\"" ; done  > /etc/bashrc-labels
            
            # gcloud compute instances list --filter="name:$(hostname)" --format=json | jq .[].labels | tr -d '"|,|{|}|:' | while read key value ; do if [ ! -z "${key}" ] ; then  key=$(echo $key | tr '[a-z]' '[A-Z]') ; echo "export $key=\"$value\"" ; fi ; done > /etc/bashrc-labels
            
            echo "test -f /etc/bashrc-labels && source /etc/bashrc-labels" >> /etc/bashrc
            source /etc/bashrc-labels
            
            #env vars and paths
            echo "source /usr/local/bin/ansible/bin/activate " >> /root/.bashrc
            echo "export PATH=/usr/local/bin:$PATH" >> /root/.bashrc
            # echo "export GPROJECT=${gproject_ansible}"  >> /root/.bashrc
            # echo "export ANSIBLE_BRANCH=${ansible_branch}"  >> /root/.bashrc
            source /root/.bashrc
            
            #needed for checkout otherwise ssh cannot git clone or checkout
            mkdir ~/.ssh
            ssh-keyscan -H github.com >> ~/.ssh/known_hosts
            
            # Fetch all the common setup scripts from GCE metadata
            #curl -sH 'Metadata-Flavor: Google' http://metadata/computeMetadata/v1/project/attributes/ansible-key > /root/.ssh/id_rsa
            #chmod 0600 /root/.ssh/id_rsa
            
            #find newly added disks without rebooting ie:scratch disks
            /usr/bin/rescan-scsi-bus.sh
            
            #one time anisble run
            ansible-pull provisioner.yml -C ${ANSIBLE_BRANCH} -d /var/lib/ansible/local -U https://github.com/broadinstitute/dsp-ansible-configs.git -i hosts >> /root/ansible-provisioner-firstrun.log 2>&1
            
            # sh /root/ansible-setup.sh 2>&1 | tee /root/ansible-setup.log
            
            touch /etc/sysconfig/gce-metadata-run
            chmod 0644 /etc/sysconfig/gce-metadata-run
            
            # Prevent yum-cron from arbitrarily updating docker packages
            echo "exclude = docker* containerd.io" >> /etc/yum/yum-cron.conf
        EOT
        name                      = "singlecell-mongo-02"
        project                   = "broad-singlecellportal-bistlin"
      ~ resource_policies         = [] -> (known after apply)
        self_link                 = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-bistlin/zones/us-central1-a/instances/singlecell-mongo-02"
        tags                      = [
            "http-server",
            "https-server",
            "mongodb",
            "singlecell-mongodb-bistline",
        ]
        tags_fingerprint          = "etyP2crAjrM="
        terraform_labels          = {
            "ansible_branch"  = "master"
            "ansible_project" = "singlecell"
            "app"             = "singlecell-mongo"
            "owner"           = "bistline"
            "role"            = "db"
        }
        zone                      = "us-central1-a"

        attached_disk {
            device_name = "docker"
            mode        = "READ_WRITE"
            source      = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-bistlin/zones/us-central1-a/disks/singlecell-mongo-02-docker-disk"
        }
        attached_disk {
            device_name = "singlecell-mongo-data-disk"
            mode        = "READ_WRITE"
            source      = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-bistlin/zones/us-central1-a/disks/singlecell-mongo-02-data-disk"
        }

        boot_disk {
            auto_delete = true
            device_name = "persistent-disk-0"
            mode        = "READ_WRITE"
            source      = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-bistlin/zones/us-central1-a/disks/singlecell-mongo-02"

            initialize_params {
                enable_confidential_compute = false
                image                       = "https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20210217"
                labels                      = {}
                provisioned_iops            = 0
                provisioned_throughput      = 0
                resource_manager_tags       = {}
                resource_policies           = []
                size                        = 50
                type                        = "pd-standard"
            }
        }

        network_interface {
            internal_ipv6_prefix_length = 0
            name                        = "nic0"
            network                     = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-bistlin/global/networks/singlecell-network"
            network_ip                  = "10.128.0.8"
            queue_count                 = 0
            subnetwork                  = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-bistlin/regions/us-central1/subnetworks/singlecell-network"
            subnetwork_project          = "broad-singlecellportal-bistlin"

            access_config {
                nat_ip       = "35.202.81.60"
                network_tier = "PREMIUM"
            }
        }

        scheduling {
            automatic_restart   = true
            availability_domain = 0
            min_node_cpus       = 0
            on_host_maintenance = "MIGRATE"
            preemptible         = false
            provisioning_model  = "STANDARD"
        }

        service_account {
            email  = "default-service-account@broad-singlecellportal-bistlin.iam.gserviceaccount.com"
            scopes = [
                "https://www.googleapis.com/auth/cloud-platform",
                "https://www.googleapis.com/auth/compute.readonly",
                "https://www.googleapis.com/auth/devstorage.read_only",
                "https://www.googleapis.com/auth/logging.write",
                "https://www.googleapis.com/auth/monitoring.write",
                "https://www.googleapis.com/auth/userinfo.email",
            ]
        }

        shielded_instance_config {
            enable_integrity_monitoring = true
            enable_secure_boot          = false
            enable_vtpm                 = true
        }
    }

  # module.scp.module.mongodb.module.instances.google_compute_resource_policy.resource-policy[0] will be created
+ resource "google_compute_resource_policy" "resource-policy" {
      + id        = (known after apply)
      + name      = "singlecell-mongo-resource-policy"
      + project   = "broad-singlecellportal-bistlin"
      + region    = "us-central1"
      + self_link = (known after apply)

      + instance_schedule_policy {
          + time_zone = "US/Eastern"

          + vm_start_schedule {
              + schedule = "0 8 * * 1-5"
            }

          + vm_stop_schedule {
              + schedule = "0 18 * * 1-5"
            }
        }
    }

Plan: 1 to add, 1 to change, 0 to destroy.

Warning: Deprecated

  on modules/single-cell-portal/modules/mongodb/dns.tf line 33, in data "null_data_source" "hostnames_with_no_trailing_dot":
  33: data "null_data_source" "hostnames_with_no_trailing_dot" {

The null_data_source was historically used to construct intermediate values to
re-use elsewhere in configuration, the same can now be achieved using locals
or the terraform_data resource type in Terraform 1.4 and later.
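
For reference, the deprecated data source could later be replaced with a local value along these lines (the input variable name is hypothetical):

locals {
  # Strips trailing dots from hostnames, replacing the deprecated
  # null_data_source with a plain local value.
  hostnames_with_no_trailing_dot = [for h in var.hostnames : trimsuffix(h, ".")]
}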

Staging instances via atlantis plan -p scp-staging. (Unfortunately, this wants to recreate the staging server, since it was built from an even older module version than the MongoDB instances: April 2020 vs. May 2021. This is fine, as it's non-production.)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.scp.module.app_server.google_compute_instance.instance[0] must be replaced
-/+ resource "google_compute_instance" "instance" {
        allow_stopping_for_update = true
        can_ip_forward            = false
      ~ cpu_platform              = "Intel Haswell" -> (known after apply)
      ~ creation_timestamp        = "2020-04-24T11:10:57.930-07:00" -> (known after apply)
      ~ current_status            = "RUNNING" -> (known after apply)
        deletion_protection       = false
      ~ effective_labels          = {
            "ansible_branch"             = "master"
            "ansible_project"            = "singlecell"
            "app"                        = "singlecell"
          + "goog-terraform-provisioned" = "true"
            "owner"                      = "staging"
            "role"                       = "app-server"
        }
      - enable_display            = false -> null
      ~ id                        = "projects/broad-singlecellportal-staging/zones/us-central1-a/instances/singlecell-01" -> (known after apply)
      ~ instance_id               = "5262900227694564702" -> (known after apply)
      ~ label_fingerprint         = "bkBOrNOPzRA=" -> (known after apply)
        labels                    = {
            "ansible_branch"  = "master"
            "ansible_project" = "singlecell"
            "app"             = "singlecell"
            "owner"           = "staging"
            "role"            = "app-server"
        }
        machine_type              = "n1-highmem-8"
      - metadata                  = {} -> null
      ~ metadata_fingerprint      = "JyJUgQJAWC4=" -> (known after apply)
      ~ metadata_startup_script   = <<~EOT # forces replacement
            #!/bin/bash
            
            # Only run this script once
            if [ -f /etc/sysconfig/gce-metadata-run ];
                then
                exit 0
            fi
            
            #stop and disable firewalld
            systemctl stop firewalld.service
            systemctl disable firewalld.service
            
            #install pip and ansible
            yum install epel-release -y
            yum update
          - yum install  python36 python36-pip git jq -y
          + yum install python36 python36-pip git jq python-setuptools -y
            python3.6 -m pip install --upgrade pip
            python3.6 -m pip install virtualenv
            virtualenv /usr/local/bin/ansible
            source /usr/local/bin/ansible/bin/activate
            python3.6 -m pip install ansible==2.7.8
            python3.6 -m pip install hvac 
            python3.6 -m pip install ansible_merge_vars
            
            # convert labels to env vars
            gcloud compute instances list --filter="name:$(hostname)" --format 'value(labels)' | tr ';' '\n' | while read var ; do key="${var%=*}"; value="${var##*=}" ; key=$(echo $key | tr '[a-z]' '[A-Z]') ; echo "export $key=\"$value\"" ; done  > /etc/bashrc-labels
            
            # gcloud compute instances list --filter="name:$(hostname)" --format=json | jq .[].labels | tr -d '"|,|{|}|:' | while read key value ; do if [ ! -z "${key}" ] ; then  key=$(echo $key | tr '[a-z]' '[A-Z]') ; echo "export $key=\"$value\"" ; fi ; done > /etc/bashrc-labels
            
            echo "test -f /etc/bashrc-labels && source /etc/bashrc-labels" >> /etc/bashrc
            source /etc/bashrc-labels
            
            #env vars and paths
            echo "source /usr/local/bin/ansible/bin/activate " >> /root/.bashrc
            echo "export PATH=/usr/local/bin:$PATH" >> /root/.bashrc
            # echo "export GPROJECT=${gproject_ansible}"  >> /root/.bashrc
            # echo "export ANSIBLE_BRANCH=${ansible_branch}"  >> /root/.bashrc
            source /root/.bashrc
            
            #needed for checkout otherwise ssh cannot git clone or checkout
            mkdir ~/.ssh
            ssh-keyscan -H github.com >> ~/.ssh/known_hosts
            
            # Fetch all the common setup scripts from GCE metadata
            #curl -sH 'Metadata-Flavor: Google' http://metadata/computeMetadata/v1/project/attributes/ansible-key > /root/.ssh/id_rsa
            #chmod 0600 /root/.ssh/id_rsa
            
            #find newly added disks without rebooting ie:scratch disks
            /usr/bin/rescan-scsi-bus.sh
            
            #one time anisble run
            ansible-pull provisioner.yml -C ${ANSIBLE_BRANCH} -d /var/lib/ansible/local -U https://github.com/broadinstitute/dsp-ansible-configs.git -i hosts >> /root/ansible-provisioner-firstrun.log 2>&1
            
            # sh /root/ansible-setup.sh 2>&1 | tee /root/ansible-setup.log
            
            touch /etc/sysconfig/gce-metadata-run
            chmod 0644 /etc/sysconfig/gce-metadata-run
          + 
          + # Prevent yum-cron from arbitrarily updating docker packages
          + echo "exclude = docker* containerd.io" >> /etc/yum/yum-cron.conf
        EOT
      + min_cpu_platform          = (known after apply)
        name                      = "singlecell-01"
        project                   = "broad-singlecellportal-staging"
      ~ resource_policies         = [] -> (known after apply)
      ~ self_link                 = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/instances/singlecell-01" -> (known after apply)
        tags                      = [
            "gce-lb-instance-group-member",
            "http-server",
            "https-server",
            "singlecell",
            "singlecell-staging",
        ]
      ~ tags_fingerprint          = "-YbVWwzRT3s=" -> (known after apply)
      ~ terraform_labels          = {
            "ansible_branch"             = "master"
            "ansible_project"            = "singlecell"
            "app"                        = "singlecell"
          + "goog-terraform-provisioned" = "true"
            "owner"                      = "staging"
            "role"                       = "app-server"
        }
        zone                      = "us-central1-a"

      ~ attached_disk {
            device_name                = "docker"
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
            mode                       = "READ_WRITE"
          ~ source                     = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/disks/singlecell-01-docker-disk" -> "singlecell-01-docker-disk"
        }
      ~ attached_disk {
            device_name                = "singlecell-data-disk"
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
            mode                       = "READ_WRITE"
          ~ source                     = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/disks/singlecell-01-data-disk" -> "singlecell-01-data-disk"
        }

      ~ boot_disk {
            auto_delete                = true
          ~ device_name                = "persistent-disk-0" -> (known after apply)
          + disk_encryption_key_sha256 = (known after apply)
          + kms_key_self_link          = (known after apply)
            mode                       = "READ_WRITE"
          ~ source                     = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/disks/singlecell-01" -> (known after apply)

          ~ initialize_params {
              - enable_confidential_compute = false -> null
              ~ image                       = "https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20200420" -> "centos-7"
              ~ labels                      = {} -> (known after apply)
              ~ provisioned_iops            = 0 -> (known after apply)
              ~ provisioned_throughput      = 0 -> (known after apply)
              - resource_manager_tags       = {} -> null
              ~ resource_policies           = [] -> (known after apply)
                size                        = 50
              ~ type                        = "pd-standard" -> (known after apply)
            }
        }

      + confidential_instance_config {
          + confidential_instance_type  = (known after apply)
          + enable_confidential_compute = (known after apply)
        }

      + guest_accelerator {
          + count = (known after apply)
          + type  = (known after apply)
        }

      ~ network_interface {
          ~ internal_ipv6_prefix_length = 0 -> (known after apply)
          + ipv6_access_type            = (known after apply)
          + ipv6_address                = (known after apply)
          ~ name                        = "nic0" -> (known after apply)
          ~ network                     = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/global/networks/singlecell" -> "singlecell"
          + network_attachment          = (known after apply)
          ~ network_ip                  = "10.128.0.5" -> (known after apply)
          - queue_count                 = 0 -> null
          + stack_type                  = (known after apply)
            subnetwork                  = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/regions/us-central1/subnetworks/singlecell"
          ~ subnetwork_project          = "broad-singlecellportal-staging" -> (known after apply)

          ~ access_config {
                nat_ip       = "35.239.250.234"
              ~ network_tier = "PREMIUM" -> (known after apply)
            }
        }

      + reservation_affinity {
          + type = (known after apply)

          + specific_reservation {
              + key    = (known after apply)
              + values = (known after apply)
            }
        }

      ~ scheduling {
          ~ automatic_restart           = true -> (known after apply)
          ~ availability_domain         = 0 -> (known after apply)
          + instance_termination_action = (known after apply)
          ~ min_node_cpus               = 0 -> (known after apply)
          ~ on_host_maintenance         = "MIGRATE" -> (known after apply)
          ~ preemptible                 = false -> (known after apply)
          ~ provisioning_model          = "STANDARD" -> (known after apply)

          + local_ssd_recovery_timeout {
              + nanos   = (known after apply)
              + seconds = (known after apply)
            }

          + max_run_duration {
              + nanos   = (known after apply)
              + seconds = (known after apply)
            }

          + node_affinities {
              + key      = (known after apply)
              + operator = (known after apply)
              + values   = (known after apply)
            }

          + on_instance_stop_action {
              + discard_local_ssd = (known after apply)
            }
        }

        service_account {
            email  = "839419950053-compute@developer.gserviceaccount.com"
            scopes = [
                "https://www.googleapis.com/auth/compute.readonly",
                "https://www.googleapis.com/auth/devstorage.read_only",
                "https://www.googleapis.com/auth/logging.write",
                "https://www.googleapis.com/auth/monitoring.write",
                "https://www.googleapis.com/auth/userinfo.email",
            ]
        }

      - shielded_instance_config {
          - enable_integrity_monitoring = true -> null
          - enable_secure_boot          = false -> null
          - enable_vtpm                 = true -> null
        }
    }

  # module.scp.module.app_server.google_compute_instance_group.instance-group-unmanaged[0] will be updated in-place
~ resource "google_compute_instance_group" "instance-group-unmanaged" {
        description = "singlecell Instance Group - Unmanaged"
        id          = "projects/broad-singlecellportal-staging/zones/us-central1-a/instanceGroups/singlecell-instance-group-unmanaged"
      ~ instances   = [
          - "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/instances/singlecell-01",
        ] -> (known after apply)
        name        = "singlecell-instance-group-unmanaged"
        network     = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/global/networks/singlecell"
        project     = "broad-singlecellportal-staging"
        self_link   = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/instanceGroups/singlecell-instance-group-unmanaged"
        size        = 1
        zone        = "us-central1-a"

        named_port {
            name = "http"
            port = 80
        }
        named_port {
            name = "https"
            port = 443
        }
    }

  # module.scp.module.app_server.google_compute_resource_policy.resource-policy[0] will be created
+ resource "google_compute_resource_policy" "resource-policy" {
      + id        = (known after apply)
      + name      = "singlecell-resource-policy"
      + project   = "broad-singlecellportal-staging"
      + region    = "us-central1"
      + self_link = (known after apply)

      + instance_schedule_policy {
          + time_zone = "US/Eastern"

          + vm_start_schedule {
              + schedule = "0 8 * * 1-5"
            }

          + vm_stop_schedule {
              + schedule = "0 18 * * 1-5"
            }
        }
    }

  # module.scp.module.mongodb.module.instances.google_compute_instance.instance[0] will be updated in-place
~ resource "google_compute_instance" "instance" {
        allow_stopping_for_update = true
        can_ip_forward            = false
        cpu_platform              = "Intel Haswell"
        creation_timestamp        = "2021-03-01T12:41:32.435-08:00"
        current_status            = "RUNNING"
        deletion_protection       = false
        effective_labels          = {
            "ansible_branch"  = "master"
            "ansible_project" = "singlecell"
            "app"             = "singlecell-mongo"
            "owner"           = "staging"
            "role"            = "db"
        }
        enable_display            = false
        id                        = "projects/broad-singlecellportal-staging/zones/us-central1-a/instances/singlecell-mongo-02"
        instance_id               = "8124538600496885652"
        label_fingerprint         = "aEkYbe9lWxU="
        labels                    = {
            "ansible_branch"  = "master"
            "ansible_project" = "singlecell"
            "app"             = "singlecell-mongo"
            "owner"           = "staging"
            "role"            = "db"
        }
        machine_type              = "n1-highmem-8"
        metadata                  = {}
        metadata_fingerprint      = "v-C7PQu-U0w="
        metadata_startup_script   = <<~EOT
            #!/bin/bash
            
            # Only run this script once
            if [ -f /etc/sysconfig/gce-metadata-run ];
                then
                exit 0
            fi
            
            #stop and disable firewalld
            systemctl stop firewalld.service
            systemctl disable firewalld.service
            
            #install pip and ansible
            yum install epel-release -y
            yum update
            yum install python36 python36-pip git jq python-setuptools -y
            python3.6 -m pip install --upgrade pip
            python3.6 -m pip install virtualenv
            virtualenv /usr/local/bin/ansible
            source /usr/local/bin/ansible/bin/activate
            python3.6 -m pip install ansible==2.7.8
            python3.6 -m pip install hvac 
            python3.6 -m pip install ansible_merge_vars
            
            # convert labels to env vars
            gcloud compute instances list --filter="name:$(hostname)" --format 'value(labels)' | tr ';' '\n' | while read var ; do key="${var%=*}"; value="${var##*=}" ; key=$(echo $key | tr '[a-z]' '[A-Z]') ; echo "export $key=\"$value\"" ; done  > /etc/bashrc-labels
            
            # gcloud compute instances list --filter="name:$(hostname)" --format=json | jq .[].labels | tr -d '"|,|{|}|:' | while read key value ; do if [ ! -z "${key}" ] ; then  key=$(echo $key | tr '[a-z]' '[A-Z]') ; echo "export $key=\"$value\"" ; fi ; done > /etc/bashrc-labels
            
            echo "test -f /etc/bashrc-labels && source /etc/bashrc-labels" >> /etc/bashrc
            source /etc/bashrc-labels
            
            #env vars and paths
            echo "source /usr/local/bin/ansible/bin/activate " >> /root/.bashrc
            echo "export PATH=/usr/local/bin:$PATH" >> /root/.bashrc
            # echo "export GPROJECT=${gproject_ansible}"  >> /root/.bashrc
            # echo "export ANSIBLE_BRANCH=${ansible_branch}"  >> /root/.bashrc
            source /root/.bashrc
            
            #needed for checkout otherwise ssh cannot git clone or checkout
            mkdir ~/.ssh
            ssh-keyscan -H github.com >> ~/.ssh/known_hosts
            
            # Fetch all the common setup scripts from GCE metadata
            #curl -sH 'Metadata-Flavor: Google' http://metadata/computeMetadata/v1/project/attributes/ansible-key > /root/.ssh/id_rsa
            #chmod 0600 /root/.ssh/id_rsa
            
            #find newly added disks without rebooting ie:scratch disks
            /usr/bin/rescan-scsi-bus.sh
            
            #one time anisble run
            ansible-pull provisioner.yml -C ${ANSIBLE_BRANCH} -d /var/lib/ansible/local -U https://github.com/broadinstitute/dsp-ansible-configs.git -i hosts >> /root/ansible-provisioner-firstrun.log 2>&1
            
            # sh /root/ansible-setup.sh 2>&1 | tee /root/ansible-setup.log
            
            touch /etc/sysconfig/gce-metadata-run
            chmod 0644 /etc/sysconfig/gce-metadata-run
            
            # Prevent yum-cron from arbitrarily updating docker packages
            echo "exclude = docker* containerd.io" >> /etc/yum/yum-cron.conf
        EOT
        name                      = "singlecell-mongo-02"
        project                   = "broad-singlecellportal-staging"
      ~ resource_policies         = [] -> (known after apply)
        self_link                 = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/instances/singlecell-mongo-02"
        tags                      = [
            "http-server",
            "https-server",
            "mongodb",
            "singlecell-mongodb-staging",
        ]
        tags_fingerprint          = "wb5meV89M-I="
        terraform_labels          = {
            "ansible_branch"  = "master"
            "ansible_project" = "singlecell"
            "app"             = "singlecell-mongo"
            "owner"           = "staging"
            "role"            = "db"
        }
        zone                      = "us-central1-a"

        attached_disk {
            device_name = "docker"
            mode        = "READ_WRITE"
            source      = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/disks/singlecell-mongo-02-docker-disk"
        }
        attached_disk {
            device_name = "singlecell-mongo-data-disk"
            mode        = "READ_WRITE"
            source      = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/disks/singlecell-mongo-02-data-disk"
        }

        boot_disk {
            auto_delete = true
            device_name = "persistent-disk-0"
            mode        = "READ_WRITE"
            source      = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/zones/us-central1-a/disks/singlecell-mongo-02"

            initialize_params {
                enable_confidential_compute = false
                image                       = "https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20210217"
                labels                      = {}
                provisioned_iops            = 0
                provisioned_throughput      = 0
                resource_manager_tags       = {}
                resource_policies           = []
                size                        = 50
                type                        = "pd-standard"
            }
        }

        network_interface {
            internal_ipv6_prefix_length = 0
            name                        = "nic0"
            network                     = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/global/networks/singlecell"
            network_ip                  = "10.128.0.46"
            queue_count                 = 0
            subnetwork                  = "https://www.googleapis.com/compute/v1/projects/broad-singlecellportal-staging/regions/us-central1/subnetworks/singlecell"
            subnetwork_project          = "broad-singlecellportal-staging"

            access_config {
                nat_ip       = "34.69.44.144"
                network_tier = "PREMIUM"
            }
        }

        scheduling {
            automatic_restart   = true
            availability_domain = 0
            min_node_cpus       = 0
            on_host_maintenance = "MIGRATE"
            preemptible         = false
            provisioning_model  = "STANDARD"
        }

        service_account {
            email  = "839419950053-compute@developer.gserviceaccount.com"
            scopes = [
                "https://www.googleapis.com/auth/cloud-platform",
                "https://www.googleapis.com/auth/compute.readonly",
                "https://www.googleapis.com/auth/devstorage.read_only",
                "https://www.googleapis.com/auth/logging.write",
                "https://www.googleapis.com/auth/monitoring.write",
                "https://www.googleapis.com/auth/userinfo.email",
            ]
        }

        shielded_instance_config {
            enable_integrity_monitoring = true
            enable_secure_boot          = false
            enable_vtpm                 = true
        }
    }

  # module.scp.module.mongodb.module.instances.google_compute_resource_policy.resource-policy[0] will be created
+ resource "google_compute_resource_policy" "resource-policy" {
      + id        = (known after apply)
      + name      = "singlecell-mongo-resource-policy"
      + project   = "broad-singlecellportal-staging"
      + region    = "us-central1"
      + self_link = (known after apply)

      + instance_schedule_policy {
          + time_zone = "US/Eastern"

          + vm_start_schedule {
              + schedule = "0 8 * * 1-5"
            }

          + vm_stop_schedule {
              + schedule = "0 18 * * 1-5"
            }
        }
    }

Plan: 3 to add, 2 to change, 1 to destroy.

@bistline merged commit 1d5af02 into master Mar 5, 2025
@em-may deleted the jb-scp-compute-instance-policy branch March 5, 2025 20:11