# OctoPerf Enterprise-Edition Helm Chart

This functionality is in beta status and may be changed or removed completely in a future release. OctoPerf will take a best effort approach to fix any issues, but beta features are not subject to the support SLA of official GA features.

## Overview

This chart launches the whole OctoPerf Enterprise-Edition stack inside your Kubernetes Cluster. It includes the following components:

- **Elasticsearch**: the main database, used to store most of the data,
- **Backend**: the backend server which serves the OctoPerf REST API,
- **Frontend**: the Web UI which consumes the REST API exposed by the backend. It's made of static HTML/JS/CSS files served by an NGinx server on /app,
- **Frontend Beta**: the Beta Web UI which consumes the REST API exposed by the backend. It's made of static HTML/JS/CSS files served by an NGinx server on /ui,
- **Documentation**: static web documentation served by an NGinx server.

For a more comprehensive understanding, see How the Enterprise-Edition works.

## Dependencies

The OctoPerf Enterprise-Edition helm chart depends on the official Elasticsearch helm chart; the `elasticsearch.*` settings below (such as `elasticsearch.esJavaOpts`) are passed through to it.

## Installation

- Add the octoperf helm charts repo:

  ```shell
  helm repo add octoperf https://helm.octoperf.com
  ```

- Install it:

  ```shell
  helm install --name octoperf-ee octoperf/enterprise-edition
  ```
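The `--name` flag above is Helm 2 syntax. Helm 3 removed that flag and takes the release name as its first positional argument instead, so an equivalent install (same repo and chart names) would be:

```shell
# Helm 3: the release name is positional, there is no --name flag
helm repo add octoperf https://helm.octoperf.com
helm install octoperf-ee octoperf/enterprise-edition
```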
    

## Compatibility

This chart is tested with the latest supported versions. The currently tested versions are:

| Version | Tested releases |
|---------|-----------------|
| 15.x.x | 15.2.2, 15.2.1, 15.2.0, 15.1.0, 15.0.0 |
| 14.x.x | 14.5.1, 14.5.0, 14.4.1, 14.4.0, 14.3.0, 14.2.0, 14.1.0, 14.0.0 |

Examples of installing older major versions can be found in the examples directory.

## Getting Started

- This repo includes a number of example configurations which can be used as a reference,
- The default storage class for GKE is standard, which by default will give you pd-ssd type persistent volumes. This is network-attached storage and will not perform as well as local storage. If you are using Kubernetes version 1.10 or greater you can use Local PersistentVolumes for increased performance,
- It is important to verify the JVM heap sizes in elasticsearch.esJavaOpts and backend.config.JAVA_OPTS, and to set the CPU/memory resources to something suitable for your cluster.
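As a sketch of such sizing overrides (the values are illustrative, not recommendations; `elasticsearch.esJavaOpts` and `elasticsearch.resources` follow the Elasticsearch chart's conventions, and `backend.env.JAVA_OPTS` matches the default listed in the configuration table):

```yaml
# values-sizing.yaml -- illustrative sizes only, tune for your cluster
elasticsearch:
  esJavaOpts: "-Xms2g -Xmx2g"        # Elasticsearch JVM heap
  resources:
    requests: { cpu: "1", memory: 4Gi }
    limits:   { cpu: "2", memory: 4Gi }
backend:
  env:
    JAVA_OPTS: "-Xms1g -Xmx1g"       # OctoPerf backend JVM heap
```

Such a file would be applied with `helm install --name octoperf-ee -f values-sizing.yaml octoperf/enterprise-edition`.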

## Configuration

The configuration is split into five sections, defined by the prefix being used:

- No prefix: global configuration settings such as the Docker registry,
- `ingress.` prefix: ingress configuration settings,
- `backend.` prefix: backend configuration settings,
- `frontend.` prefix: frontend configuration settings,
- `doc.` prefix: documentation configuration settings.

An example of configuration values can be found here.
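For instance, the global (unprefixed) and `ingress.` sections combine as follows; the hostname and TLS secret name are placeholders:

```yaml
# Illustrative values fragment: global settings have no prefix,
# ingress settings live under the ingress. section
registry: registry.hub.docker.com
imagePullPolicy: IfNotPresent

ingress:
  enabled: true
  path: /
  hosts:
    - octoperf.example.com          # placeholder hostname
  tls:
    - secretName: octoperf-tls      # placeholder TLS secret
      hosts:
        - octoperf.example.com
```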

| Parameter | Description | Default |
|-----------|-------------|---------|
| `registry` | Docker images registry to use | `registry.hub.docker.com` |
| `imagePullPolicy` | Kubernetes image pull policy | `IfNotPresent` |
| `imagePullSecrets` | Configuration for `imagePullSecrets` so that you can use a private registry for your images | `[]` |
| `ingress.enabled` | Enable / disable the ingress controller | `true` |
| `ingress.annotations` | Configurable annotations applied to all ingress pods | `{}` |
| `ingress.path` | Ingress path | `/` |
| `ingress.hosts` | Ingress hosts | `[enterprise-edition.local]` |
| `ingress.tls` | Ingress TLS secrets to use | `[]` |
| `elasticsearch.clusterHealthCheckParams` | The Elasticsearch cluster health status params used by the readinessProbe command | `wait_for_status=yellow&timeout=1s` |
| `doc.enabled` | Enable / disable the service exposing static documentation using an NGinx deployment | `true` |
| `doc.image` | The documentation docker image | `enterprise-documentation` |
| `doc.annotations` | Configurable annotations applied to all documentation pods | `{}` |
| `doc.readinessProbe` | Documentation pods readinessProbe | `httpGet /doc`<br>`failureThreshold: 3`<br>`initialDelaySeconds: 5`<br>`periodSeconds: 5`<br>`timeoutSeconds: 5` |
| `doc.livenessProbe` | Documentation pods livenessProbe | `httpGet /doc`<br>`failureThreshold: 3`<br>`initialDelaySeconds: 5`<br>`periodSeconds: 5`<br>`timeoutSeconds: 5` |
| `doc.nodeSelector` | Documentation node selectors | `{}` |
| `doc.affinity` | Documentation pod affinity | `{}` |
| `frontend.enabled` | Enable / disable the frontend DaemonSet | `true` |
| `frontend.image` | The frontend docker image | See values.yaml |
| `frontend.annotations` | Configurable annotations applied to all frontend pods | `{}` |
| `frontend.readinessProbe` | Frontend pods readinessProbe | `httpGet /doc`<br>`failureThreshold: 3`<br>`initialDelaySeconds: 5`<br>`periodSeconds: 5` |
| `frontend.livenessProbe` | Frontend pods livenessProbe | `httpGet /doc`<br>`failureThreshold: 3`<br>`initialDelaySeconds: 5`<br>`periodSeconds: 5` |
| `frontend.config.config-ee.json` | Frontend configuration file content. This file is mounted as a volume on frontend pods. | JSON |
| `frontend.nodeSelector` | Frontend node selectors | `{}` |
| `frontend.affinity` | Frontend pod affinity | `{}` |
| `backend.enabled` | Enable / disable the backend StatefulSet | `true` |
| `backend.annotations` | Annotations that Kubernetes will use for the service | `{}` |
| `backend.env` | Backend pods environment variables stored in a configmap. See Enterprise-Edition Configuration for more settings. | `JAVA_OPTS: "-Xms256m -Xmx256m"`<br>`server.hostname: "enterprise-edition.local"`<br>`server.public.port: 80`<br>`elasticsearch.hostname: elasticsearch-master-headless`<br>`clustering.driver: hazelcast`<br>`clustering.quorum: "1"` |
| `backend.readinessProbe` | Backend pods readinessProbe | `tcpSocket http-port`<br>`initialDelaySeconds: 30`<br>`failureThreshold: 3`<br>`periodSeconds: 5`<br>`successThreshold: 1`<br>`timeoutSeconds: 5` |
| `backend.livenessProbe` | Backend pods livenessProbe | `tcpSocket http-port`<br>`initialDelaySeconds: 30`<br>`failureThreshold: 3`<br>`periodSeconds: 5`<br>`successThreshold: 1`<br>`timeoutSeconds: 5` |
| `backend.schedulerName` | Name of the alternate scheduler | `nil` |
| `backend.priorityClassName` | The name of the PriorityClass. No default is supplied as the PriorityClass must be created first. | `nil` |
| `backend.secretMounts` | Allows you to easily mount a secret as a file inside the StatefulSet. Useful for mounting certificates and other secrets. See values.yaml for an example. | `[]` |
| `backend.nodeSelector` | Configurable nodeSelector so that you can target specific nodes | `{}` |
| `backend.affinity` | Backend pod affinity | `{}` |
| `backend.podManagementPolicy` | By default Kubernetes deploys StatefulSets serially. `Parallel` deploys the backend pods in parallel so that they can discover each other. | `Parallel` |
| `backend.updateStrategy` | The updateStrategy for the StatefulSet. By default Kubernetes waits for the cluster to be green after upgrading each pod. Setting this to `OnDelete` allows you to manually delete each pod during upgrades. | `RollingUpdate` |
| `backend.persistentVolume.enabled` | If `true`, the backend will create/use a Persistent Volume Claim. If `false`, an emptyDir is used. | `true` |
| `backend.persistentVolume.accessModes` | Backend data Persistent Volume access modes. Must match those of the existing PV or dynamic provisioner. Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ | `[ReadWriteOnce]` |
| `backend.persistentVolume.annotations` | Backend data Persistent Volume Claim annotations | `{}` |
| `backend.persistentVolume.mountPath` | Backend data Persistent Volume mount root path inside the pods | `/home/octoperf/data` |
| `backend.persistentVolume.size` | Backend data Persistent Volume size | `1Gi` |
| `backend.persistentVolume.storageClass` | Backend data Persistent Volume storage class | `nil` |
| `backend.persistentVolume.subPath` | Subdirectory of the backend data Persistent Volume to mount. Useful if the volume's root directory is not empty. | `nil` |
| `backend.resources` | Backend resource requests and limits. Ref: http://kubernetes.io/docs/user-guide/compute-resources/ | `{}` |
| `backend.securityContext` | Security context to be added to backend pods | `{}` |
| `backend.headless.annotations` | Backend headless service annotations | `{}` |
| `backend.headless.labels` | Backend headless service labels | `{}` |
| `backend.headless.publishNotReadyAddresses` | Whether non-ready backend IPs are exposed through the headless service | `true` |
| `backend.service.annotations` | Backend service annotations | `{}` |
| `backend.service.labels` | Backend service labels | `{}` |
| `backend.service.clusterIP` | Backend service cluster IP | `nil` |
| `backend.service.externalIPs` | Backend service external IPs | `[]` |
| `backend.service.loadBalancerIP` | Backend service load balancer IP | `nil` |
| `backend.service.loadBalancerSourceRanges` | Backend service load balancer source ranges | `[]` |
| `backend.service.nodePort` | Custom nodePort that can be set if you are using `service.type: nodePort` | `[]` |

## Local development

This chart is designed to run on production-scale Kubernetes clusters with multiple nodes, lots of memory and persistent storage. For that reason it can be a bit tricky to run it against local Kubernetes environments such as minikube. Below are some examples of how to get this working locally.

### Minikube

This chart also works successfully on minikube in addition to typical hosted Kubernetes environments. An example values.yaml file for minikube is provided under examples/.

In order to properly support the required persistent volume claims for the Elasticsearch StatefulSet, the default-storageclass and storage-provisioner minikube addons must be enabled.

In order to use the provided ingress controller, the ingress addon must be enabled too.

```shell
minikube addons enable ingress
minikube addons enable default-storageclass
minikube addons enable storage-provisioner
cd examples/minikube
make
```

Note that if helm or kubectl timeouts occur, you may consider creating a minikube VM with more CPU cores or memory allocated.
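For example, minikube exposes `--cpus` and `--memory` flags at VM creation time; the sizes below are illustrative:

```shell
# Recreate the minikube VM with more resources (destroys existing cluster state)
minikube delete
minikube start --cpus 4 --memory 8192
```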