cloudsql-postgres-operator

Automatically manage GCP CloudSQL for PostgreSQL instances atop Kubernetes.

Documentation

Usage

Comprehensive usage documentation can be found in the docs/usage/ directory of this repository. It covers the following topics:

  1. The Installation Guide provides instructions on how to install and configure cloudsql-postgres-operator.

  2. Managing CSQLP instances details how to manage Cloud SQL for PostgreSQL (CSQLP) instances.

  3. Connecting to CSQLP instances details how to connect Kubernetes workloads to CSQLP instances.

Design

The design document for cloudsql-postgres-operator can be found here.

Development Guide

Running

To run cloudsql-postgres-operator in development mode, one needs the following:

  • A Kubernetes 1.12+ cluster;

  • A Google Cloud Platform project and its ID;

  • Two IAM service accounts and their respective credential files (see the example after this list for one way to create these);

    • These must have the roles/cloudsql.admin and roles/cloudsql.client roles, respectively.

    • The credential files are assumed to be named admin-key.json and client-key.json.
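
For reference, the two service accounts and their credential files may be created with gcloud roughly as sketched below. The account names cloudsql-admin and cloudsql-client are illustrative placeholders, not requirements:

$ gcloud iam service-accounts create cloudsql-admin --project <project-id>
$ gcloud iam service-accounts create cloudsql-client --project <project-id>
$ gcloud projects add-iam-policy-binding <project-id> \
    --member "serviceAccount:cloudsql-admin@<project-id>.iam.gserviceaccount.com" \
    --role roles/cloudsql.admin
$ gcloud projects add-iam-policy-binding <project-id> \
    --member "serviceAccount:cloudsql-client@<project-id>.iam.gserviceaccount.com" \
    --role roles/cloudsql.client
$ gcloud iam service-accounts keys create admin-key.json \
    --iam-account cloudsql-admin@<project-id>.iam.gserviceaccount.com
$ gcloud iam service-accounts keys create client-key.json \
    --iam-account cloudsql-client@<project-id>.iam.gserviceaccount.com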

Additionally, the following software must be installed on one’s workstation:

  • kubectl

  • make

  • skaffold
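
On macOS with Homebrew, for instance, kubectl and skaffold can be installed as follows (make typically ships with the Xcode command-line tools):

$ brew install kubectl skaffold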

After making sure these prerequisites are met, one may run the following command to start cloudsql-postgres-operator in development mode:

$ ADMIN_KEY_JSON_FILE=./admin-key.json \
  CLIENT_KEY_JSON_FILE=./client-key.json \
  PROFILE=<profile> \
  PROJECT_ID=<project-id> \
  make skaffold

In the command above:

  • <profile> must be replaced by one of gke, kind or minikube;

  • <project-id> must be replaced by the ID of the target GCP project.

When running this command, one must make sure that kubectl is pointing at the intended GKE/Kind/Minikube cluster.
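
The cluster kubectl currently points at can be checked by inspecting the active context:

$ kubectl config current-context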

Running the above command will build cloudsql-postgres-operator, deploy it to the target Kubernetes cluster and start streaming its logs:

(...)
[cloudsql-postgres-operator] time="2019-05-17T15:17:52Z" level=info msg="cloudsql-postgres-operator is starting" version=e1f6541-dev
(...)
[cloudsql-postgres-operator] time="2019-05-17T15:17:53Z" level=debug msg="started workers" controller=postgresqlinstance-controller

To stop cloudsql-postgres-operator and clean up, one may hit Ctrl+C.

Testing

cloudsql-postgres-operator includes an end-to-end test suite designed to test several aspects of the lifecycle of a managed CSQLP instance. By default, the test suite tests access to CSQLP instances using public IP only, since private IP access requires a compatible GKE cluster.

In order to run the basic version of the test suite, one may run the following command:

$ PATH_TO_ADMIN_KEY=./admin-key.json \
  PROJECT_ID=<project-id> \
  make test.e2e

When testing against a compatible GKE cluster, one may instead run the full version of the test suite with the following command:

$ NETWORK=<vpc-name> \
  PATH_TO_ADMIN_KEY=./admin-key.json \
  PROJECT_ID=<project-id> \
  REGION=<region> \
  TEST_PRIVATE_IP_ACCESS=true \
  make test.e2e

As mentioned above, testing private IP access to CSQLP instances requires a compatible GKE cluster. In particular, the GKE cluster must be VPC-native with respect to the <vpc-name> VPC and be located in the region indicated by <region>.
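
For illustration only, a VPC-native GKE cluster meeting these requirements could be created along the following lines (the cluster name and node count are arbitrary placeholders):

$ gcloud container clusters create cloudsql-e2e \
    --project <project-id> \
    --network <vpc-name> \
    --region <region> \
    --enable-ip-alias \
    --num-nodes 1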
