diff --git a/.gitignore b/.gitignore
index 98366f36..9ceb97fc 100644
--- a/.gitignore
+++ b/.gitignore
@@ -12,3 +12,4 @@ eeauditor/processor/outputs/*.html
LOCAL_external_providers.toml
output.json
output_ocsf_v1-4-0_events.json
+gcp_cred.json
\ No newline at end of file
diff --git a/Dockerfile b/Dockerfile
index 04cf63aa..2e58aa18 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -18,10 +18,10 @@
#specific language governing permissions and limitations
#under the License.
-# latest hash as of 27 AUG 2024 - Alpine 3.20.2
-# https://hub.docker.com/layers/library/alpine/3.20.2/images/sha256-eddacbc7e24bf8799a4ed3cdcfa50d4b88a323695ad80f317b6629883b2c2a78?context=explore
+# latest hash as of 13 FEB 2025 - Alpine 3.21.3
+# https://hub.docker.com/layers/library/alpine/3.21.3/images/sha256-a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c?context=explore
# use as builder image to pull in required deps
-FROM alpine@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099 AS builder
+FROM alpine@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c AS builder
ENV PYTHONUNBUFFERED=1
@@ -40,9 +40,9 @@ RUN \
rm -rf /tmp/* && \
rm -f /var/cache/apk/*
-# latest hash as of 27 AUG 2024 - Alpine 3.20.2
-# https://hub.docker.com/layers/library/alpine/3.20.2/images/sha256-eddacbc7e24bf8799a4ed3cdcfa50d4b88a323695ad80f317b6629883b2c2a78?context=explore
-FROM alpine@sha256:56fa17d2a7e7f168a043a2712e63aed1f8543aeafdcee47c58dcffe38ed51099 as electriceye
+# latest hash as of 13 FEB 2025 - Alpine 3.21.3
+# https://hub.docker.com/layers/library/alpine/3.21.3/images/sha256-a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c?context=explore
+FROM alpine@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c AS electriceye
COPY --from=builder /usr /usr
diff --git a/README.md b/README.md
index 2f50af52..5fe4883f 100644
--- a/README.md
+++ b/README.md
@@ -153,9 +153,9 @@ In total there are:
- **4** Supported Public CSPs: `AWS`, `GCP`, `OCI`, and `Azure`
- **4** Supported SaaS Providers: `ServiceNow`, `M365`, `Salesforce`, and `Snowflake`
-- **1193** ElectricEye Checks
-- **177** Supported CSP & SaaS Asset Components across all Services
-- **133** ElectricEye Auditors
+- **1196** ElectricEye Checks
+- **179** Supported CSP & SaaS Asset Components across all Services
+- **135** ElectricEye Auditors
The tables of supported Services and Checks have been migrated to the respective per-Provider setup documentation linked above in [Configuring ElectricEye](#configuring-electriceye).
@@ -389,6 +389,8 @@ The controls frameworks that ElectricEye supports is always being updated as new
- CIS Amazon Web Services Foundations Benchmark V2.0
- CIS Amazon Web Services Foundations Benchmark V3.0
- CIS Microsoft Azure Foundations Benchmark V2.0.0
+- CIS Snowflake Foundations Benchmark V1.0.0
+- CIS Google Cloud Platform Foundation Benchmark V2.0
## Repository Security
diff --git a/docs/setup/Setup_GCP.md b/docs/setup/Setup_GCP.md
index ae22ec6b..b865a12d 100644
--- a/docs/setup/Setup_GCP.md
+++ b/docs/setup/Setup_GCP.md
@@ -22,7 +22,7 @@ To configure the TOML file, you need to modify the values of the variables in th
- `gcp_project_ids`: Set this variable to specify a list of GCP Project IDs, ensure you only specify the GCP Projects which the Service Account specified in `gcp_service_account_json_payload_value` has access to.
-- `gcp_service_account_json_payload_value`: This variable is used to specify the contents of the Google Cloud Platform (GCP) service account key JSON file that ElectricEye should use to authenticate to GCP. The contents of the JSON file should be provided as a string, and the entire string should be assigned to the `gcp_service_account_json_payload_value` setting.
+- `gcp_service_account_json_payload_value`: This variable is used to specify the contents of the Google Cloud Platform (GCP) service account key JSON file that ElectricEye should use to authenticate to GCP. If `credentials_location` is set to `CONFIG_FILE`, paste the entire contents of the Service Account JSON within triple single-quotes (`'''`), as shown in the example below; otherwise the newline characters (`\n`) will break the TOML parsing.
It's important to note that this setting is a sensitive credential, and as such, its value should be stored in a secure manner that matches the location specified in the `[global]` section's `credentials_location` setting. For example, if `credentials_location` is set to `"AWS_SSM"`, then the gcp_service_account_json_payload_value should be the name of an AWS Systems Manager Parameter Store SecureString parameter that contains the contents of the GCP service account key JSON file.
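+
+For illustration, a minimal `CONFIG_FILE` layout might look like the following sketch. The `[credentials.gcp]` table name here is illustrative only - keep whatever table your existing `external_providers.toml` already uses for the GCP credential values:
+
+```toml
+[global]
+credentials_location = "CONFIG_FILE"
+
+[regions_and_accounts.gcp]
+gcp_project_ids = ["my-gcp-project-id"]
+
+[credentials.gcp]
+gcp_service_account_json_payload_value = '''
+{
+  "type": "service_account",
+  "project_id": "my-gcp-project-id",
+  "private_key_id": "...",
+  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
+  "client_email": "electriceye@my-gcp-project-id.iam.gserviceaccount.com",
+  "client_id": "..."
+}
+'''
+```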
@@ -32,16 +32,19 @@ Refer [here](#gcp-multi-project-service-account-support) for information on addi
1. Enable the following APIs for all GCP Projects you wish to assess with ElectricEye.
-> - Compute Engine API
-> - Cloud SQL Admin API
-> - Cloud Logging API
-> - OS Config API
-> - Service Networking API
+- Compute Engine API
+- Cloud SQL Admin API
+- Cloud Logging API
+- OS Config API
+- Service Networking API
+- BigQuery API
2. Create a **Service Account** with the following permissions per Project you want to assess with ElectricEye (**Note**: In the future, Organizations will be supported for GCP, you can instead create a single **Service Account** and add it's Email into all of your other Projects)
-> - Security Reviewer
-> - Project Viewer
+- Security Reviewer
+- Viewer
+- BigQuery Data Viewer
+- BigQuery Metadata Viewer
#### NOTE: For evaluating multiple GCP Projects, you only need ONE Service Account, refer to [GCP Multi-Project Service Account Support](#gcp-multi-project-service-account-support) for more information on adding permissions to other Projects.
@@ -150,10 +153,13 @@ done
## GCP Checks & Services
-These are the following services and checks perform by each Auditor, there are currently **53 Checks** across **3 Auditors** that support the secure configuration of **2 services/components**
+These are the services and checks performed by each Auditor; there are currently **56 Checks** across **5 Auditors** that support the secure configuration of **4 services/components**
| Auditor File Name | Scanned Resource Name | Auditor Scan Description |
|---|---|---|
+| GCP_BigQuery_Auditor | BigQuery table | Has the table been updated in the last 90 days |
+| GCP_BigQuery_Auditor | BigQuery table | Do tables use CMEKs for encryption |
+| GCP_IAM_Auditor | Service Account | Are user-managed keys in use (lol, yes, at least one!) |
| GCP_ComputeEngine_Auditor | GCE VM Instance | Is deletion protection enabled |
| GCP_ComputeEngine_Auditor | GCE VM Instance | Is IP forwarding disabled |
| GCP_ComputeEngine_Auditor | GCE VM Instance | Is auto-restart enabled |
diff --git a/eeauditor/auditors/gcp/ElectricEye_AttackSurface_GCP_Auditor.py b/eeauditor/auditors/gcp/ElectricEye_AttackSurface_GCP_Auditor.py
index 817fa28d..c5c4d6ad 100644
--- a/eeauditor/auditors/gcp/ElectricEye_AttackSurface_GCP_Auditor.py
+++ b/eeauditor/auditors/gcp/ElectricEye_AttackSurface_GCP_Auditor.py
@@ -30,7 +30,7 @@
# Instantiate a NMAP scanner for TCP scans to define ports
nmap = nmap3.NmapScanTechniques()
-def get_compute_engine_instances(cache: dict, gcpProjectId: str):
+def get_compute_engine_instances(cache: dict, gcpProjectId: str, gcpCredentials):
'''
AggregatedList result provides Zone information as well as every single Instance in a Project
'''
@@ -39,7 +39,7 @@ def get_compute_engine_instances(cache: dict, gcpProjectId: str):
results = []
- compute = googleapiclient.discovery.build('compute', 'v1')
+ compute = googleapiclient.discovery.build('compute', 'v1', credentials=gcpCredentials)
aggResult = compute.instances().aggregatedList(project=gcpProjectId).execute()
@@ -79,11 +79,11 @@ def scan_host(hostIp, assetName, assetComponent):
results = None
@registry.register_check("gce")
-def gce_attack_surface_open_tcp_port_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_attack_surface_open_tcp_port_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""[AttackSurface.GCP.GCE.{checkIdNumber}] Google Compute Engine VM instances should not be publicly reachable on {serviceName}"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
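
The same pattern runs through every GCP auditor touched in this change: each `googleapiclient.discovery.build(...)` call now receives an explicit `credentials=gcpCredentials` argument instead of relying on the `GOOGLE_APPLICATION_CREDENTIALS` environment variable. Below is a minimal sketch of how such a credentials object can be built from a service account key JSON payload using `google.oauth2.service_account` (which `cloud_utils.py` now imports); the helper name and the `gcp_cred.json` file name are illustrative, not the exact implementation of `setup_gcp_credentials`.

```python
import json

import googleapiclient.discovery
from google.oauth2 import service_account

def build_gcp_credentials(serviceAccountJsonPayload: str) -> service_account.Credentials:
    """Turns a service account key JSON string into an explicit Credentials object."""
    return service_account.Credentials.from_service_account_info(
        json.loads(serviceAccountJsonPayload)
    )

# Illustrative usage: "gcp_cred.json" mirrors the file name added to .gitignore above
with open("gcp_cred.json") as keyFile:
    gcpCredentials = build_gcp_credentials(keyFile.read())

# Each auditor then builds its API client with the explicit credentials
compute = googleapiclient.discovery.build("compute", "v1", credentials=gcpCredentials)
```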
diff --git a/eeauditor/auditors/gcp/GCP_BigQuery_Auditor.py b/eeauditor/auditors/gcp/GCP_BigQuery_Auditor.py
new file mode 100644
index 00000000..03ef94c8
--- /dev/null
+++ b/eeauditor/auditors/gcp/GCP_BigQuery_Auditor.py
@@ -0,0 +1,350 @@
+#This file is part of ElectricEye.
+#SPDX-License-Identifier: Apache-2.0
+
+#Licensed to the Apache Software Foundation (ASF) under one
+#or more contributor license agreements. See the NOTICE file
+#distributed with this work for additional information
+#regarding copyright ownership. The ASF licenses this file
+#to you under the Apache License, Version 2.0 (the
+#"License"); you may not use this file except in compliance
+#with the License. You may obtain a copy of the License at
+
+#http://www.apache.org/licenses/LICENSE-2.0
+
+#Unless required by applicable law or agreed to in writing,
+#software distributed under the License is distributed on an
+#"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#KIND, either express or implied. See the License for the
+#specific language governing permissions and limitations
+#under the License.
+
+import datetime
+from check_register import CheckRegister
+import googleapiclient.discovery
+import base64
+import json
+
+registry = CheckRegister()
+
+def get_bigquery_tables(cache: dict, gcpProjectId, gcpCredentials) -> list[dict] | dict:
+ """Retrieves the extended metadata of every table for every BigQuery dataset in the Project and returns them"""
+ response = cache.get("get_bigquery_tables")
+ if response:
+ return response
+
+ tableDetails: list[dict] = []
+
+ service = googleapiclient.discovery.build('bigquery', 'v2', credentials=gcpCredentials)
+
+ datasets = service.datasets().list(projectId=gcpProjectId).execute()
+
+    if datasets.get("datasets"):
+ for dataset in datasets["datasets"]:
+ datasetId = dataset["datasetReference"]["datasetId"]
+ # now get the tables, we have to execute an additional GET per table to get the full metadata
+ tables = service.tables().list(projectId=gcpProjectId, datasetId=datasetId).execute()
+            for table in tables.get("tables", []):
+ tableId = table["tableReference"]["tableId"]
+ tableDetails.append(
+ service.tables().get(projectId=gcpProjectId, datasetId=datasetId, tableId=tableId).execute()
+ )
+
+ if tableDetails:
+ cache["get_bigquery_tables"] = tableDetails
+ return cache["get_bigquery_tables"]
+ else:
+ return {}
+
+@registry.register_check("gcp.bigquery")
+def bigquery_table_updated_within_90_days_check(cache: dict, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
+ """[GCP.BigQuery.1] BigQuery Tables that have not been modified in 90 days should be reviewed"""
+ # ISO Time
+    iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
+    # Loop every table in every BigQuery dataset in the Project
+ for table in get_bigquery_tables(cache, gcpProjectId, gcpCredentials):
+ fullTableId = table["id"]
+ tableId = table["tableReference"]["tableId"]
+ assetJson = json.dumps(table,default=str).encode("utf-8")
+ assetB64 = base64.b64encode(assetJson)
+        modifyCheckFail = False
+        lastModified = None
+        lastModifiedEpoch = int(table.get("lastModifiedTime", 0))
+        if lastModifiedEpoch == 0:
+            # treat a table that never reports "lastModifiedTime" as stale
+            modifyCheckFail = True
+ else:
+ # convert epochmillis and use timedelta to check if older than 90 days
+ lastModified = datetime.datetime.fromtimestamp(lastModifiedEpoch / 1000.0, tz=datetime.timezone.utc)
+ if datetime.datetime.now(datetime.timezone.utc) - lastModified > datetime.timedelta(days=90):
+ modifyCheckFail = True
+
+ # this is a failing check
+ if modifyCheckFail:
+ finding = {
+ "SchemaVersion": "2018-10-08",
+ "Id": f"{fullTableId}/bigquery-table-not-modified-in-90-days-check",
+ "ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
+ "GeneratorId": f"{fullTableId}/bigquery-table-not-modified-in-90-days-check",
+ "AwsAccountId": awsAccountId,
+ "Types": ["Software and Configuration Checks/AWS Security Best Practices"],
+ "FirstObservedAt": iso8601Time,
+ "CreatedAt": iso8601Time,
+ "UpdatedAt": iso8601Time,
+ "Severity": {"Label": "INFORMATIONAL"},
+ "Confidence": 99,
+ "Title": "[GCP.BigQuery.1] BigQuery Tables that have not been modified in 90 days should be reviewed",
+                "Description": f"BigQuery table {tableId} has not been modified in 90 days. This may be an unused resource that can be deleted, especially if there is no business use case for keeping the table operational. Review your internal policies and usage logs, as well as any potentially sensitive or critical information in the table, to determine whether it should be deleted. Refer to the remediation instructions if keeping the table is not intended.",
+ "Remediation": {
+ "Recommendation": {
+ "Text": "For more information on BigQuery best practices for backing up tables refer to the Backup & Disaster Recovery strategies for BigQuery entry in the Google Cloud blog.",
+ "Url": "https://cloud.google.com/blog/topics/developers-practitioners/backup-disaster-recovery-strategies-bigquery"
+ }
+ },
+ "ProductFields": {
+ "ProductName": "ElectricEye",
+ "Provider": "GCP",
+ "ProviderType": "CSP",
+ "ProviderAccountId": gcpProjectId,
+ "AssetRegion": table["location"],
+ "AssetDetails": assetB64,
+ "AssetClass": "Analytics",
+ "AssetService": "Google Cloud BigQuery",
+                "AssetComponent": "Table"
+ },
+ "Resources": [
+ {
+ "Type": "GcpBigQueryTable",
+ "Id": fullTableId,
+ "Partition": awsPartition,
+ "Region": awsRegion,
+ "Details": {
+ "Other": {
+ "ProjectId": gcpProjectId,
+ "TableId": table["tableReference"]["tableId"],
+ "DatasetId": table["tableReference"]["datasetId"],
+                            "LastModifiedTime": str(lastModified)
+ }
+ }
+ }
+ ],
+ "Compliance": {
+ "Status": "FAILED",
+ "RelatedRequirements": [
+ "NIST CSF V1.1 ID.AM-2",
+ "NIST SP 800-53 Rev. 4 CM-8",
+ "NIST SP 800-53 Rev. 4 PM-5",
+ "AICPA TSC CC3.2",
+ "AICPA TSC CC6.1",
+ "ISO 27001:2013 A.8.1.1",
+ "ISO 27001:2013 A.8.1.2",
+ "ISO 27001:2013 A.12.5.1"
+ ]
+ },
+ "Workflow": {"Status": "NEW"},
+ "RecordState": "ACTIVE"
+ }
+ yield finding
+ # this is a passing check
+ else:
+ finding = {
+ "SchemaVersion": "2018-10-08",
+ "Id": f"{fullTableId}/bigquery-table-not-modified-in-90-days-check",
+ "ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
+ "GeneratorId": f"{fullTableId}/bigquery-table-not-modified-in-90-days-check",
+ "AwsAccountId": awsAccountId,
+ "Types": ["Software and Configuration Checks/AWS Security Best Practices"],
+ "FirstObservedAt": iso8601Time,
+ "CreatedAt": iso8601Time,
+ "UpdatedAt": iso8601Time,
+ "Severity": {"Label": "INFORMATIONAL"},
+ "Confidence": 99,
+ "Title": "[GCP.BigQuery.1] BigQuery Tables that have not been modified in 90 days should be reviewed",
+ "Description": f"BigQuery table {tableId} has been modified within the last 90 days. Periodically review your BigQuery tables to ensure they are still needed and that the data is still relevant. Refer to the remediation instructions if keeping the table is not intended.",
+ "Remediation": {
+ "Recommendation": {
+ "Text": "For more information on BigQuery best practices for backing up tables refer to the Backup & Disaster Recovery strategies for BigQuery entry in the Google Cloud blog.",
+ "Url": "https://cloud.google.com/blog/topics/developers-practitioners/backup-disaster-recovery-strategies-bigquery"
+ }
+ },
+ "ProductFields": {
+ "ProductName": "ElectricEye",
+ "Provider": "GCP",
+ "ProviderType": "CSP",
+ "ProviderAccountId": gcpProjectId,
+ "AssetRegion": table["location"],
+ "AssetDetails": assetB64,
+ "AssetClass": "Analytics",
+ "AssetService": "Google Cloud BigQuery",
+ "AssetComponent": "Table"
+ },
+ "Resources": [
+ {
+ "Type": "GcpBigQueryTable",
+ "Id": fullTableId,
+ "Partition": awsPartition,
+ "Region": awsRegion,
+ "Details": {
+ "Other": {
+ "ProjectId": gcpProjectId,
+ "TableId": table["tableReference"]["tableId"],
+ "DatasetId": table["tableReference"]["datasetId"],
+                            "LastModifiedTime": str(lastModified)
+ }
+ }
+ }
+ ],
+ "Compliance": {
+ "Status": "PASSED",
+ "RelatedRequirements": [
+ "NIST CSF V1.1 ID.AM-2",
+ "NIST SP 800-53 Rev. 4 CM-8",
+ "NIST SP 800-53 Rev. 4 PM-5",
+ "AICPA TSC CC3.2",
+ "AICPA TSC CC6.1",
+ "ISO 27001:2013 A.8.1.1",
+ "ISO 27001:2013 A.8.1.2",
+ "ISO 27001:2013 A.12.5.1"
+ ]
+ },
+ "Workflow": {"Status": "RESOLVED"},
+ "RecordState": "ARCHIVED"
+ }
+ yield finding
+
+@registry.register_check("gcp.bigquery")
+def bigquery_table_custom_cmek_check(cache: dict, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
+ """[GCP.BigQuery.2] BigQuery Tables should be encrypted with a customer-managed encryption key (CMEK)"""
+ # ISO Time
+    iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
+    # Loop every table in every BigQuery dataset in the Project
+ for table in get_bigquery_tables(cache, gcpProjectId, gcpCredentials):
+ fullTableId = table["id"]
+ tableId = table["tableReference"]["tableId"]
+ assetJson = json.dumps(table,default=str).encode("utf-8")
+ assetB64 = base64.b64encode(assetJson)
+ # this is a failing check
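+        # a table without an "encryptionConfiguration" (or with an empty "kmsKeyName") relies on Google-managed default encryption instead of a CMEK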
+ if table.get("encryptionConfiguration", {}).get("kmsKeyName", "") == "":
+ finding = {
+ "SchemaVersion": "2018-10-08",
+ "Id": f"{fullTableId}/bigquery-table-custom-cmek-check",
+ "ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
+ "GeneratorId": f"{fullTableId}/bigquery-table-custom-cmek-check",
+ "AwsAccountId": awsAccountId,
+ "Types": ["Software and Configuration Checks/AWS Security Best Practices"],
+ "FirstObservedAt": iso8601Time,
+ "CreatedAt": iso8601Time,
+ "UpdatedAt": iso8601Time,
+ "Severity": {"Label": "LOW"},
+ "Confidence": 99,
+ "Title": "[GCP.BigQuery.2] BigQuery Tables should be encrypted with a customer-managed encryption key (CMEK)",
+ "Description": f"BigQuery table {tableId} is not encrypted with a customer-managed encryption key (CMEK). By default, BigQuery encrypts all data before it is written to disk, and decrypts it when read by an authorized user. This process is transparent to users. However, you can choose to use your own encryption keys instead of the default Google-managed keys. Refer to the remediation instructions if this configuration is not intended.",
+ "Remediation": {
+ "Recommendation": {
+ "Text": "For more information on CMEK refer to the Customer-managed encryption keys for BigQuery entry in the Google Cloud documentation.",
+ "Url": "https://cloud.google.com/bigquery/docs/customer-managed-encryption"
+ }
+ },
+ "ProductFields": {
+ "ProductName": "ElectricEye",
+ "Provider": "GCP",
+ "ProviderType": "CSP",
+ "ProviderAccountId": gcpProjectId,
+ "AssetRegion": table["location"],
+ "AssetDetails": assetB64,
+ "AssetClass": "Analytics",
+ "AssetService": "Google Cloud BigQuery",
+                "AssetComponent": "Table"
+ },
+ "Resources": [
+ {
+ "Type": "GcpBigQueryTable",
+ "Id": fullTableId,
+ "Partition": awsPartition,
+ "Region": awsRegion,
+ "Details": {
+ "Other": {
+ "ProjectId": gcpProjectId,
+ "TableId": table["tableReference"]["tableId"],
+ "DatasetId": table["tableReference"]["datasetId"]
+ }
+ }
+ }
+ ],
+ "Compliance": {
+ "Status": "FAILED",
+ "RelatedRequirements": [
+ "NIST CSF V1.1 PR.DS-1",
+ "NIST SP 800-53 Rev. 4 MP-8",
+ "NIST SP 800-53 Rev. 4 SC-12",
+ "NIST SP 800-53 Rev. 4 SC-28",
+ "AICPA TSC CC6.1",
+ "ISO 27001:2013 A.8.2.3"
+ ]
+ },
+ "Workflow": {"Status": "NEW"},
+ "RecordState": "ACTIVE"
+ }
+ yield finding
+ # this is a passing check
+ else:
+ finding = {
+ "SchemaVersion": "2018-10-08",
+ "Id": f"{fullTableId}/bigquery-table-custom-cmek-check",
+ "ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
+ "GeneratorId": f"{fullTableId}/bigquery-table-custom-cmek-check",
+ "AwsAccountId": awsAccountId,
+ "Types": ["Software and Configuration Checks/AWS Security Best Practices"],
+ "FirstObservedAt": iso8601Time,
+ "CreatedAt": iso8601Time,
+ "UpdatedAt": iso8601Time,
+ "Severity": {"Label": "INFORMATIONAL"},
+ "Confidence": 99,
+ "Title": "[GCP.BigQuery.2] BigQuery Tables should be encrypted with a customer-managed encryption key (CMEK)",
+ "Description": f"BigQuery table {tableId} is encrypted with a customer-managed encryption key (CMEK). By default, BigQuery encrypts all data before it is written to disk, and decrypts it when read by an authorized user. This process is transparent to users. However, you can choose to use your own encryption keys instead of the default Google-managed keys. Refer to the remediation instructions if this configuration is not intended.",
+ "Remediation": {
+ "Recommendation": {
+ "Text": "For more information on CMEK refer to the Customer-managed encryption keys for BigQuery entry in the Google Cloud documentation.",
+ "Url": "https://cloud.google.com/bigquery/docs/customer-managed-encryption"
+ }
+ },
+ "ProductFields": {
+ "ProductName": "ElectricEye",
+ "Provider": "GCP",
+ "ProviderType": "CSP",
+ "ProviderAccountId": gcpProjectId,
+ "AssetRegion": table["location"],
+ "AssetDetails": assetB64,
+ "AssetClass": "Analytics",
+ "AssetService": "Google Cloud BigQuery",
+                "AssetComponent": "Table"
+ },
+ "Resources": [
+ {
+ "Type": "GcpBigQueryTable",
+ "Id": fullTableId,
+ "Partition": awsPartition,
+ "Region": awsRegion,
+ "Details": {
+ "Other": {
+ "ProjectId": gcpProjectId,
+ "TableId": table["tableReference"]["tableId"],
+ "DatasetId": table["tableReference"]["datasetId"]
+ }
+ }
+ }
+ ],
+ "Compliance": {
+ "Status": "PASSED",
+ "RelatedRequirements": [
+ "NIST CSF V1.1 PR.DS-1",
+ "NIST SP 800-53 Rev. 4 MP-8",
+ "NIST SP 800-53 Rev. 4 SC-12",
+ "NIST SP 800-53 Rev. 4 SC-28",
+ "AICPA TSC CC6.1",
+ "ISO 27001:2013 A.8.2.3"
+ ]
+ },
+ "Workflow": {"Status": "RESOLVED"},
+ "RecordState": "ARCHIVED"
+ }
+ yield finding
+
+# end
\ No newline at end of file
diff --git a/eeauditor/auditors/gcp/GCP_CloudSQL_Auditor.py b/eeauditor/auditors/gcp/GCP_CloudSQL_Auditor.py
index e85cde6c..08cd89f7 100644
--- a/eeauditor/auditors/gcp/GCP_CloudSQL_Auditor.py
+++ b/eeauditor/auditors/gcp/GCP_CloudSQL_Auditor.py
@@ -26,7 +26,7 @@
registry = CheckRegister()
-def get_cloudsql_dbs(cache, gcpProjectId):
+def get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
"""
AggregatedList result provides Zone information as well as every single Instance in a Project
"""
@@ -35,7 +35,7 @@ def get_cloudsql_dbs(cache, gcpProjectId):
return response
# CloudSQL requires SQL Admin API - also doesnt need an aggregatedList
- service = googleapiclient.discovery.build('sqladmin', 'v1beta4')
+ service = googleapiclient.discovery.build('sqladmin', 'v1beta4', credentials=gcpCredentials)
instances = service.instances().list(project=gcpProjectId).execute()
if instances:
@@ -45,13 +45,13 @@ def get_cloudsql_dbs(cache, gcpProjectId):
return {}
@registry.register_check("cloudsql")
-def cloudsql_instance_public_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_public_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.1] CloudSQL Instances should not be publicly reachable
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -293,13 +293,13 @@ def cloudsql_instance_public_check(cache, awsAccountId, awsRegion, awsPartition,
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_standard_backup_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_standard_backup_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.2] CloudSQL Instances should have automated backups configured
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -483,13 +483,13 @@ def cloudsql_instance_standard_backup_check(cache, awsAccountId, awsRegion, awsP
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_mysql_pitr_backup_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_mysql_pitr_backup_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.3] CloudSQL MySQL Instances with mission-critical workloads should have point-in-time recovery (PITR) configured
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -678,13 +678,13 @@ def cloudsql_instance_mysql_pitr_backup_check(cache, awsAccountId, awsRegion, aw
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_psql_pitr_backup_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_psql_pitr_backup_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.4] CloudSQL PostgreSQL Instances with mission-critical workloads should have point-in-time recovery (PITR) configured
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -873,13 +873,13 @@ def cloudsql_instance_psql_pitr_backup_check(cache, awsAccountId, awsRegion, aws
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_private_network_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_private_network_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.5] CloudSQL Instances should use private networks
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1039,13 +1039,13 @@ def cloudsql_instance_private_network_check(cache, awsAccountId, awsRegion, awsP
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_private_gcp_services_connection_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_private_gcp_services_connection_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.6] CloudSQL Instances using private networks should enable GCP private services access
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1152,10 +1152,10 @@ def cloudsql_instance_private_gcp_services_connection_check(cache, awsAccountId,
"ProductFields": {
"ProductName": "ElectricEye",
"Provider": "GCP",
- "ProviderType": "CSP",
- "ProviderAccountId": gcpProjectId,
- "AssetRegion": zone,
- "AssetDetails": assetB64,
+ "ProviderType": "CSP",
+ "ProviderAccountId": gcpProjectId,
+ "AssetRegion": zone,
+ "AssetDetails": assetB64,
"AssetClass": "Database",
"AssetService": "Google CloudSQL",
"AssetComponent": "Database Instance"
@@ -1201,13 +1201,13 @@ def cloudsql_instance_private_gcp_services_connection_check(cache, awsAccountId,
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_password_policy_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_password_policy_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.7] CloudSQL Instances should have a password policy enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1390,13 +1390,13 @@ def cloudsql_instance_password_policy_check(cache, awsAccountId, awsRegion, awsP
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_password_min_length_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_password_min_length_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.8] CloudSQL Instances should have a password minimum length requirement defined
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1580,13 +1580,13 @@ def cloudsql_instance_password_min_length_check(cache, awsAccountId, awsRegion,
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_password_reuse_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_password_reuse_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.9] CloudSQL Instances should have a password reuse interval defined
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1770,13 +1770,13 @@ def cloudsql_instance_password_reuse_check(cache, awsAccountId, awsRegion, awsPa
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_password_username_block_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_password_username_block_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.10] CloudSQL Instances should be configured to disallow the username from being part of the password
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1959,13 +1959,13 @@ def cloudsql_instance_password_username_block_check(cache, awsAccountId, awsRegi
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_password_change_interval_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_password_change_interval_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.11] CloudSQL Instances should have a password change interval defined
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -2149,13 +2149,13 @@ def cloudsql_instance_password_change_interval_check(cache, awsAccountId, awsReg
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_storage_autoresize_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_storage_autoresize_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.12] CloudSQL Instances should have automatic storage increase enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -2332,13 +2332,13 @@ def cloudsql_instance_storage_autoresize_check(cache, awsAccountId, awsRegion, a
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_deletion_protection_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_deletion_protection_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.13] CloudSQL Instances should have deletion protection enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -2523,13 +2523,13 @@ def cloudsql_instance_deletion_protection_check(cache, awsAccountId, awsRegion,
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_query_insights_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_query_insights_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.14] CloudSQL Instances should have query insights enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -2703,13 +2703,13 @@ def cloudsql_instance_query_insights_check(cache, awsAccountId, awsRegion, awsPa
yield finding
@registry.register_check("cloudsql")
-def cloudsql_instance_tls_enforcement_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId):
+def cloudsql_instance_tls_enforcement_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
"""
[GCP.CloudSQL.15] CloudSQL Instances should enforce SSL/TLS connectivity
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for csql in get_cloudsql_dbs(cache, gcpProjectId):
+ for csql in get_cloudsql_dbs(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(csql,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
diff --git a/eeauditor/auditors/gcp/GCP_ComputeEngine_Auditor.py b/eeauditor/auditors/gcp/GCP_ComputeEngine_Auditor.py
index dd6eaf75..f479c893 100644
--- a/eeauditor/auditors/gcp/GCP_ComputeEngine_Auditor.py
+++ b/eeauditor/auditors/gcp/GCP_ComputeEngine_Auditor.py
@@ -26,7 +26,7 @@
registry = CheckRegister()
-def get_compute_engine_instances(cache: dict, gcpProjectId: str):
+def get_compute_engine_instances(cache: dict, gcpProjectId: str, gcpCredentials):
'''
AggregatedList result provides Zone information as well as every single Instance in a Project
'''
@@ -35,7 +35,7 @@ def get_compute_engine_instances(cache: dict, gcpProjectId: str):
results = []
- compute = googleapiclient.discovery.build('compute', 'v1')
+ compute = googleapiclient.discovery.build('compute', 'v1', credentials=gcpCredentials)
aggResult = compute.instances().aggregatedList(project=gcpProjectId).execute()
@@ -60,13 +60,13 @@ def get_compute_engine_instances(cache: dict, gcpProjectId: str):
return results
@registry.register_check("gce")
-def gce_instance_deletion_protection_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_deletion_protection_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.1] Google Compute Engine VM instances should have deletion protection enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -254,13 +254,13 @@ def gce_instance_deletion_protection_check(cache: dict, awsAccountId: str, awsRe
yield finding
@registry.register_check("gce")
-def gce_instance_ip_forwarding_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_ip_forwarding_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.2] Google Compute Engine VM instances should not have IP forwarding enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -422,13 +422,13 @@ def gce_instance_ip_forwarding_check(cache: dict, awsAccountId: str, awsRegion:
yield finding
@registry.register_check("gce")
-def gce_instance_auto_restart_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_auto_restart_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.3] Google Compute Engine VM instances should have automatic restart enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -608,13 +608,13 @@ def gce_instance_auto_restart_check(cache: dict, awsAccountId: str, awsRegion: s
yield finding
@registry.register_check("gce")
-def gce_instance_secure_boot_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_secure_boot_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.4] Google Compute Engine VM instances should have Secure Boot enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -770,13 +770,13 @@ def gce_instance_secure_boot_check(cache: dict, awsAccountId: str, awsRegion: st
yield finding
@registry.register_check("gce")
-def gce_instance_vtpm_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_vtpm_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.5] Google Compute Engine VM instances should have Virtual Trusted Platform Module enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -932,13 +932,13 @@ def gce_instance_vtpm_check(cache: dict, awsAccountId: str, awsRegion: str, awsP
yield finding
@registry.register_check("gce")
-def gce_instance_integrity_mon_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_integrity_mon_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.6] Google Compute Engine VM instances should have Integrity Monitoring enabled
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1094,13 +1094,13 @@ def gce_instance_integrity_mon_check(cache: dict, awsAccountId: str, awsRegion:
yield finding
@registry.register_check("gce")
-def gce_instance_siip_auto_update_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_siip_auto_update_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.7] Google Compute Engine VM instances should be configured to auto-update the Shielded Instance Integrity Auto-learn Policy
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1256,13 +1256,13 @@ def gce_instance_siip_auto_update_check(cache: dict, awsAccountId: str, awsRegio
yield finding
@registry.register_check("gce")
-def gce_instance_confidential_compute_update_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_confidential_compute_update_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.8] Google Compute Engine VM instances containing sensitive data or high-security workloads should enable Confidential Computing
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1432,14 +1432,14 @@ def gce_instance_confidential_compute_update_check(cache: dict, awsAccountId: st
yield finding
@registry.register_check("gce")
-def gce_instance_serial_port_access_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_serial_port_access_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.9] Google Compute Engine VM instances should not enabled serial port access
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- compute = googleapiclient.discovery.build('compute', 'v1')
+ compute = googleapiclient.discovery.build('compute', 'v1', credentials=gcpCredentials)
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1614,14 +1614,14 @@ def gce_instance_serial_port_access_check(cache: dict, awsAccountId: str, awsReg
yield finding
@registry.register_check("gce")
-def gce_instance_oslogon_access_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_oslogon_access_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.10] Google Compute Engine Linux VM instances should be configured to be accessed using OS Logon
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- compute = googleapiclient.discovery.build('compute', 'v1')
+ compute = googleapiclient.discovery.build('compute', 'v1', credentials=gcpCredentials)
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1785,14 +1785,14 @@ def gce_instance_oslogon_access_check(cache: dict, awsAccountId: str, awsRegion:
yield finding
@registry.register_check("gce")
-def gce_instance_oslogon_2fa_access_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_oslogon_2fa_access_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.11] Google Compute Engine Linux VM instances should be configured to be accessed using OS Logon with 2FA
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- compute = googleapiclient.discovery.build('compute', 'v1')
+ compute = googleapiclient.discovery.build('compute', 'v1', credentials=gcpCredentials)
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -1993,14 +1993,14 @@ def gce_instance_oslogon_2fa_access_check(cache: dict, awsAccountId: str, awsReg
yield finding
@registry.register_check("gce")
-def gce_instance_block_proj_ssh_keys_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_block_proj_ssh_keys_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.12] Google Compute Engine VM instances should block access from Project-wide SSH Keys
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- compute = googleapiclient.discovery.build('compute', 'v1')
+ compute = googleapiclient.discovery.build('compute', 'v1', credentials=gcpCredentials)
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
@@ -2179,13 +2179,13 @@ def gce_instance_block_proj_ssh_keys_check(cache: dict, awsAccountId: str, awsRe
yield finding
@registry.register_check("gce")
-def gce_instance_public_ip_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str):
+def gce_instance_public_ip_check(cache: dict, awsAccountId: str, awsRegion: str, awsPartition: str, gcpProjectId: str, gcpCredentials):
"""
[GCP.GCE.13] Google Compute Engine VM instances should not be publicly reachable
"""
iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
- for gce in get_compute_engine_instances(cache, gcpProjectId):
+ for gce in get_compute_engine_instances(cache, gcpProjectId, gcpCredentials):
# B64 encode all of the details for the Asset
assetJson = json.dumps(gce,default=str).encode("utf-8")
assetB64 = base64.b64encode(assetJson)
diff --git a/eeauditor/auditors/gcp/GCP_IAM_Auditor.py b/eeauditor/auditors/gcp/GCP_IAM_Auditor.py
new file mode 100644
index 00000000..9d78d639
--- /dev/null
+++ b/eeauditor/auditors/gcp/GCP_IAM_Auditor.py
@@ -0,0 +1,250 @@
+#This file is part of ElectricEye.
+#SPDX-License-Identifier: Apache-2.0
+
+#Licensed to the Apache Software Foundation (ASF) under one
+#or more contributor license agreements. See the NOTICE file
+#distributed with this work for additional information
+#regarding copyright ownership. The ASF licenses this file
+#to you under the Apache License, Version 2.0 (the
+#"License"); you may not use this file except in compliance
+#with the License. You may obtain a copy of the License at
+
+#http://www.apache.org/licenses/LICENSE-2.0
+
+#Unless required by applicable law or agreed to in writing,
+#software distributed under the License is distributed on an
+#"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+#KIND, either express or implied. See the License for the
+#specific language governing permissions and limitations
+#under the License.
+
+import datetime
+from check_register import CheckRegister
+import googleapiclient.discovery
+import base64
+import json
+
+registry = CheckRegister()
+
+def get_service_accounts(cache: dict, gcpProjectId, gcpCredentials) -> list[dict] | dict:
+ """Get all service accounts for a given project"""
+    response = cache.get("get_service_accounts")
+ if response:
+ return response
+
+ service = googleapiclient.discovery.build("iam", "v1", credentials=gcpCredentials)
+ request = service.projects().serviceAccounts().list(name=f"projects/{gcpProjectId}").execute()
+
+ serviceAccounts = request.get("accounts", [])
+
+ if serviceAccounts:
+        cache["get_service_accounts"] = serviceAccounts
+        return cache["get_service_accounts"]
+ else:
+ return {}
+
+def get_service_account_keys(serviceAccountEmail: str, gcpCredentials) -> list[dict]:
+ """Gets keys for a given service account"""
+ service = googleapiclient.discovery.build("iam", "v1", credentials=gcpCredentials)
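+    # keyTypes="USER_MANAGED" limits the listing to user-created keys; Google-managed (system) keys are excluded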
+ request = service.projects().serviceAccounts().keys().list(
+ name=f"projects/-/serviceAccounts/{serviceAccountEmail}",
+ keyTypes="USER_MANAGED"
+ ).execute()
+
+ serviceAccountKeys = request.get("keys", [])
+
+ return serviceAccountKeys
+
+@registry.register_check("gcp.iam")
+def gcp_service_account_no_user_managed_keys_check(cache: dict, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
+    """[GCP.IAM.1] Ensure that there are no user-managed keys for service accounts"""
+ # ISO Time
+    iso8601Time = datetime.datetime.now(datetime.timezone.utc).isoformat()
+    # Loop the service accounts in the Project
+ for serviceAccount in get_service_accounts(cache, gcpProjectId, gcpCredentials):
+ displayName = serviceAccount["displayName"]
+ serviceAccountId = serviceAccount["uniqueId"]
+ serviceAccountName = serviceAccount["name"]
+ # If there are keys for the service account, fail the check
+ userManagedKeyFail = False
+ keys = get_service_account_keys(serviceAccount["email"], gcpCredentials)
+ if keys:
+ userManagedKeyFail = True
+ # add the keys if they exist to the asset
+ serviceAccount["keys"] = keys
+ assetJson = json.dumps(serviceAccount,default=str).encode("utf-8")
+ assetB64 = base64.b64encode(assetJson)
+
+ # this is a failing check
+ if userManagedKeyFail:
+ finding = {
+ "SchemaVersion": "2018-10-08",
+ "Id": f"{serviceAccountName}/gcp-service-account-no-user-managed-keys-check",
+ "ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
+ "GeneratorId": f"{serviceAccountName}/gcp-service-account-no-user-managed-keys-check",
+ "AwsAccountId": awsAccountId,
+ "Types": ["Software and Configuration Checks/AWS Security Best Practices"],
+ "FirstObservedAt": iso8601Time,
+ "CreatedAt": iso8601Time,
+ "UpdatedAt": iso8601Time,
+ "Severity": {"Label": "HIGH"},
+ "Confidence": 99,
+                "Title": "[GCP.IAM.1] Ensure that there are no user-managed keys for service accounts",
+                "Description": f"GCP Service Account {displayName} (Unique ID: {serviceAccountId}) contains at least one user-managed key. Service accounts should not have user-managed keys: anyone who has access to the keys will be able to access resources through the service account. GCP-managed keys are used by Cloud Platform services such as App Engine and Compute Engine; these keys cannot be downloaded, and Google keeps them and automatically rotates them on an approximately weekly basis. User-managed keys are created, downloadable, and managed by users, and they expire 10 years from creation. Even with key owner precautions, keys can easily be leaked by common development malpractices such as checking keys into source code, leaving them in the Downloads directory, or accidentally pasting them into support blogs or channels. It is rather ironic to include this check given that ElectricEye itself requires the usage of Service Account keys, but better to be safe than sorry! Refer to the remediation instructions if keeping the key is not intended.",
+ "Remediation": {
+ "Recommendation": {
+ "Text": "For more information on best practices for service accounts refer to the Best practices for using service accounts section of the GCP IAM documentation.",
+ "Url": "https://cloud.google.com/iam/docs/best-practices-service-accounts"
+ }
+ },
+ "ProductFields": {
+ "ProductName": "ElectricEye",
+ "Provider": "GCP",
+ "ProviderType": "CSP",
+ "ProviderAccountId": gcpProjectId,
+ "AssetRegion": "global",
+ "AssetDetails": assetB64,
+ "AssetClass": "Identity & Access Management",
+ "AssetService": "Google Cloud IAM",
+ "AssetComponent": "Service Account"
+ },
+ "Resources": [
+ {
+ "Type": "GcpIamServiceAccount",
+ "Id": serviceAccountName,
+ "Partition": awsPartition,
+ "Region": awsRegion,
+ "Details": {
+ "Other": {
+ "ProjectId": gcpProjectId,
+ "ServiceAccountName": serviceAccountName,
+ "ServiceAccountId": serviceAccountId,
+ "DisplayName": displayName
+ }
+ }
+ }
+ ],
+ "Compliance": {
+ "Status": "FAILED",
+ "RelatedRequirements": [
+ "NIST CSF V1.1 PR.AC-1",
+ "NIST SP 800-53 Rev. 4 AC-1",
+ "NIST SP 800-53 Rev. 4 AC-2",
+ "NIST SP 800-53 Rev. 4 IA-1",
+ "NIST SP 800-53 Rev. 4 IA-2",
+ "NIST SP 800-53 Rev. 4 IA-3",
+ "NIST SP 800-53 Rev. 4 IA-4",
+ "NIST SP 800-53 Rev. 4 IA-5",
+ "NIST SP 800-53 Rev. 4 IA-6",
+ "NIST SP 800-53 Rev. 4 IA-7",
+ "NIST SP 800-53 Rev. 4 IA-8",
+ "NIST SP 800-53 Rev. 4 IA-9",
+ "NIST SP 800-53 Rev. 4 IA-10",
+ "NIST SP 800-53 Rev. 4 IA-11",
+ "AICPA TSC CC6.1",
+ "AICPA TSC CC6.2",
+ "ISO 27001:2013 A.9.2.1",
+ "ISO 27001:2013 A.9.2.2",
+ "ISO 27001:2013 A.9.2.3",
+ "ISO 27001:2013 A.9.2.4",
+ "ISO 27001:2013 A.9.2.6",
+ "ISO 27001:2013 A.9.3.1",
+ "ISO 27001:2013 A.9.4.2",
+ "ISO 27001:2013 A.9.4.3",
+ "MITRE ATT&CK T1589",
+ "MITRE ATT&CK T1586",
+ "CIS Google Cloud Platform Foundation Benchmark V2.0 1.4"
+ ]
+ },
+ "Workflow": {"Status": "NEW"},
+ "RecordState": "ACTIVE"
+ }
+ yield finding
+ # this is a passing check
+ else:
+ finding = {
+ "SchemaVersion": "2018-10-08",
+ "Id": f"{serviceAccountName}/gcp-service-account-no-user-managed-keys-check",
+ "ProductArn": f"arn:{awsPartition}:securityhub:{awsRegion}:{awsAccountId}:product/{awsAccountId}/default",
+ "GeneratorId": f"{serviceAccountName}/gcp-service-account-no-user-managed-keys-check",
+ "AwsAccountId": awsAccountId,
+ "Types": ["Software and Configuration Checks/AWS Security Best Practices"],
+ "FirstObservedAt": iso8601Time,
+ "CreatedAt": iso8601Time,
+ "UpdatedAt": iso8601Time,
+ "Severity": {"Label": "INFORMATIONAL"},
+ "Confidence": 99,
+                "Title": "[GCP.IAM.1] Ensure that there are no user-managed keys for service accounts",
+ "Description": f"GCP Service Account {displayName} (Unique ID: {serviceAccountId}) does not contain any user-managed keys.",
+ "Remediation": {
+ "Recommendation": {
+ "Text": "For more information on best practices for service accounts, refer to the Best practices for using service accounts section of the GCP IAM documentation.",
+ "Url": "https://cloud.google.com/iam/docs/best-practices-service-accounts"
+ }
+ },
+ "ProductFields": {
+ "ProductName": "ElectricEye",
+ "Provider": "GCP",
+ "ProviderType": "CSP",
+ "ProviderAccountId": gcpProjectId,
+ "AssetRegion": "global",
+ "AssetDetails": assetB64,
+ "AssetClass": "Identity & Access Management",
+ "AssetService": "Google Cloud IAM",
+ "AssetComponent": "Service Account"
+ },
+ "Resources": [
+ {
+ "Type": "GcpIamServiceAccount",
+ "Id": serviceAccountName,
+ "Partition": awsPartition,
+ "Region": awsRegion,
+ "Details": {
+ "Other": {
+ "ProjectId": gcpProjectId,
+ "ServiceAccountName": serviceAccountName,
+ "ServiceAccountId": serviceAccountId,
+ "DisplayName": displayName
+ }
+ }
+ }
+ ],
+ "Compliance": {
+ "Status": "PASSED",
+ "RelatedRequirements": [
+ "NIST CSF V1.1 PR.AC-1",
+ "NIST SP 800-53 Rev. 4 AC-1",
+ "NIST SP 800-53 Rev. 4 AC-2",
+ "NIST SP 800-53 Rev. 4 IA-1",
+ "NIST SP 800-53 Rev. 4 IA-2",
+ "NIST SP 800-53 Rev. 4 IA-3",
+ "NIST SP 800-53 Rev. 4 IA-4",
+ "NIST SP 800-53 Rev. 4 IA-5",
+ "NIST SP 800-53 Rev. 4 IA-6",
+ "NIST SP 800-53 Rev. 4 IA-7",
+ "NIST SP 800-53 Rev. 4 IA-8",
+ "NIST SP 800-53 Rev. 4 IA-9",
+ "NIST SP 800-53 Rev. 4 IA-10",
+ "NIST SP 800-53 Rev. 4 IA-11",
+ "AICPA TSC CC6.1",
+ "AICPA TSC CC6.2",
+ "ISO 27001:2013 A.9.2.1",
+ "ISO 27001:2013 A.9.2.2",
+ "ISO 27001:2013 A.9.2.3",
+ "ISO 27001:2013 A.9.2.4",
+ "ISO 27001:2013 A.9.2.6",
+ "ISO 27001:2013 A.9.3.1",
+ "ISO 27001:2013 A.9.4.2",
+ "ISO 27001:2013 A.9.4.3",
+ "MITRE ATT&CK T1589",
+ "MITRE ATT&CK T1586",
+ "CIS Google Cloud Platform Foundation Benchmark V2.0 1.4"
+ ]
+ },
+ "Workflow": {"Status": "RESOLVED"},
+ "RecordState": "ARCHIVED"
+ }
+ yield finding
+
+
+# end
\ No newline at end of file
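
The check above keys off the presence of user-managed keys on each service account. For reviewers unfamiliar with the underlying API, the sketch below shows one way such an inventory could be gathered, assuming the `googleapiclient` discovery client and a valid credentials object; the auditor's actual retrieval logic lives earlier in this file and may differ, and the helper name `list_user_managed_keys` is illustrative only.

```python
# Sketch only: list USER_MANAGED keys per service account in a project.
from googleapiclient import discovery

def list_user_managed_keys(credentials, project_id: str) -> list[dict]:
    iam = discovery.build("iam", "v1", credentials=credentials)
    accounts = iam.projects().serviceAccounts().list(
        name=f"projects/{project_id}"
    ).execute().get("accounts", [])

    user_managed = []
    for sa in accounts:
        keys = iam.projects().serviceAccounts().keys().list(
            name=sa["name"]
        ).execute().get("keys", [])
        # keyType is USER_MANAGED for downloadable keys, SYSTEM_MANAGED for GCP-managed keys
        user_managed.extend(k for k in keys if k.get("keyType") == "USER_MANAGED")
    return user_managed
```
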
diff --git a/eeauditor/cloud_utils.py b/eeauditor/cloud_utils.py
index db45bdef..8ee8c488 100644
--- a/eeauditor/cloud_utils.py
+++ b/eeauditor/cloud_utils.py
@@ -26,6 +26,7 @@
from re import compile
import json
from botocore.exceptions import ClientError
+from google.oauth2 import service_account
from azure.identity import ClientSecretCredential
from azure.mgmt.resource.subscriptions import SubscriptionClient
import snowflake.connector as snowconn
@@ -126,7 +127,7 @@ def __init__(self, assessmentTarget: str, tomlPath: str | None, useToml: str, ar
# GCP
if assessmentTarget == "GCP":
# Process ["gcp_project_ids"]
- gcpProjects = list(data["regions_and_accounts"]["gcp"]["gcp_project_ids"])
+ gcpProjects: list = data["regions_and_accounts"]["gcp"]["gcp_project_ids"]
if not gcpProjects:
logger.error("No GCP Projects were provided in [regions_and_accounts.gcp.gcp_project_ids].")
sys.exit(2)
@@ -147,7 +148,7 @@ def __init__(self, assessmentTarget: str, tomlPath: str | None, useToml: str, ar
gcpCred,
"gcp_service_account_json_payload_value"
)
- self.setup_gcp_credentials(self.gcpServiceAccountJsonPayloadValue)
+ self.gcpCredentials = self.setup_gcp_credentials(self.gcpServiceAccountJsonPayloadValue)
# Oracle Cloud Infrastructure (OCI)
if assessmentTarget == "OCI":
@@ -759,31 +760,20 @@ def get_aws_shield_advanced_eligibility(session) -> bool:
-    def setup_gcp_credentials(self, credentialValue) -> None:
+    def setup_gcp_credentials(self, credentialValue) -> service_account.Credentials:
"""
- The Python Google Client SDK defaults to checking for credentials in the "GOOGLE_APPLICATION_CREDENTIALS"
- environment variable. This can be the location of a GCP Service Account (SA) Key which is stored in a JSON file.
- ElectricEye utilizes Service Accounts and provides multi-Project support by virtue of the Email of an SA added
- to those Projects as an IAM Role Binding Member will proper Roles (Viewer & Security Reviewer) added.
-
- This function simply takes the value of the TOML configuration ["gcp_service_account_json_payload_value"] derived
- by this overall Class (CloudConfig), writes it to a JSON file, and specifies that location as the environment variable "GOOGLE_APPLICATION_CREDENTIALS"
+ Takes the credential value derived from the TOML file and creates a GCP credential object that can be passed to EEAuditor
"""
- here = path.abspath(path.dirname(__file__))
- credentials_file_path = path.join(here, 'gcp_cred.json')
+ credentials = json.loads(credentialValue)
- # Attempt to parse the credential value and write it to a file
+ # Create a GCP credential object from the JSON payload
try:
- credentials = json.loads(credentialValue)
- with open(credentials_file_path, 'w') as jsonfile:
- json.dump(credentials, jsonfile, indent=2)
- chmod(credentials_file_path, 0o600) # Set file to be readable and writable only by the owner
- except json.JSONDecodeError as e:
+ gcpCredentials = service_account.Credentials.from_service_account_info(credentials)
+ except Exception as e:
logger.error(
- "Failed to parse GCP credentials JSON: %s", e
+ "Error encountered attempting to create GCP credentials from JSON payload: %s", e
)
- raise e
+ sys.exit(2)
- logger.info("%s saved to environment variable", credentials_file_path)
- environ["GOOGLE_APPLICATION_CREDENTIALS"] = credentials_file_path
+ return gcpCredentials
def setup_oci_credentials(self, credentialValue) -> None:
"""
diff --git a/eeauditor/eeauditor.py b/eeauditor/eeauditor.py
index c808a9dc..6766d683 100644
--- a/eeauditor/eeauditor.py
+++ b/eeauditor/eeauditor.py
@@ -62,7 +62,8 @@ def __init__(self, assessmentTarget, args, useToml, tomlPath=None, searchPath=No
searchPath = "./auditors/gcp"
utils = CloudConfig(assessmentTarget, tomlPath, useToml, args)
# parse specific values for Assessment Target - these should match 1:1 with CloudConfig
- self.gcpProjectIds = utils.gcp_project_ids
+ self.gcpProjectIds = utils.gcpProjectIds
+ self.gcpCredentials = utils.gcpCredentials
# OCI
if assessmentTarget == "OCI":
searchPath = "./auditors/oci"
@@ -371,7 +372,8 @@ def run_gcp_checks(self, pluginName=None, delay=0):
awsAccountId=account,
awsRegion=region,
awsPartition=partition,
- gcpProjectId=project
+ gcpProjectId=project,
+ gcpCredentials=self.gcpCredentials
):
if finding is not None:
yield finding
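
Because `run_gcp_checks` now forwards `gcpCredentials` alongside `gcpProjectId`, every GCP check function has to accept the extra keyword argument or the run will fail with a `TypeError`. A hedged sketch of the expected shape follows; the `cache` parameter, the `CheckRegister` import path, and the check body are assumptions drawn from the existing auditor convention rather than from this diff:

```python
from googleapiclient import discovery
from check_register import CheckRegister  # assumed import path, matching existing auditors

registry = CheckRegister()

@registry.register_check("gcp.iam")
def example_gcp_check(cache, awsAccountId, awsRegion, awsPartition, gcpProjectId, gcpCredentials):
    """Illustrative skeleton: GCP checks now receive the credential object directly."""
    # gcpCredentials is the service_account.Credentials object returned by
    # CloudConfig.setup_gcp_credentials(); hand it to whichever client the check needs
    iam = discovery.build("iam", "v1", credentials=gcpCredentials)
    # ... evaluate resources and yield ASFF-style findings as in the auditor above
    yield from []
```
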
diff --git a/eeauditor/processor/outputs/control_objectives.json b/eeauditor/processor/outputs/control_objectives.json
index 61c933e0..9e94a49d 100644
--- a/eeauditor/processor/outputs/control_objectives.json
+++ b/eeauditor/processor/outputs/control_objectives.json
@@ -16250,5 +16250,9 @@
{
"ControlTitle": "CIS Snowflake Foundations Benchmark V1.0.0 3.1",
"ControlDescription": "Ensure that an account-level network policy has been configured to only allow access from trusted IP addresses"
+ },
+ {
+ "ControlTitle": "CIS Google Cloud Platform Foundation Benchmark V2.0 1.4",
+ "ControlDescription": "Ensure That There Are Only GCP-Managed Service Account Keys for Each Service Account"
}
]
\ No newline at end of file
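
The new objective entry pairs with the `CIS Google Cloud Platform Foundation Benchmark V2.0 1.4` string used in the finding's `Compliance.RelatedRequirements` above, so outputs that group findings by control objective can resolve the title. A small, hedged sanity check one could run locally to confirm every requirement string has a matching objective; the file path comes from this diff and the sample list from the finding above:

```python
import json

# Control objectives shipped with ElectricEye (path taken from this PR)
with open("eeauditor/processor/outputs/control_objectives.json") as f:
    objectives = {entry["ControlTitle"] for entry in json.load(f)}

# A few RelatedRequirements copied from the new GCP.IAM.1 finding
related_requirements = [
    "NIST CSF V1.1 PR.AC-1",
    "AICPA TSC CC6.1",
    "CIS Google Cloud Platform Foundation Benchmark V2.0 1.4",
]

missing = [r for r in related_requirements if r not in objectives]
print("all requirements mapped" if not missing else f"missing objectives: {missing}")
```
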
diff --git a/eeauditor/processor/outputs/iconography.yaml b/eeauditor/processor/outputs/iconography.yaml
index 4f36f8d8..224c7d79 100644
--- a/eeauditor/processor/outputs/iconography.yaml
+++ b/eeauditor/processor/outputs/iconography.yaml
@@ -194,6 +194,14 @@
ImageTag:
- AssetService: Google Compute Engine
ImageTag:
+- AssetService: Google Kubernetes Engine
+ ImageTag:
+- AssetService: Google Cloud Storage
+ ImageTag:
+- AssetService: Google Cloud IAM
+ ImageTag:
+- AssetService: Google Cloud BigQuery
+ ImageTag:
##############
# SERVICENOW #
##############