diff --git a/hpa/README.md b/hpa/README.md
index 0ced106..4a4d0bb 100644
--- a/hpa/README.md
+++ b/hpa/README.md
@@ -1,21 +1,274 @@
-# HPA Example
+# Voting Webapp with HPA optimized using StormForge Performance Testing
-## TL;DR
+## Overview
-Run
-`stormforge generate rbac -f experiment.yaml | kubectl apply -f -`
-and then
-`kubect apply -n -k .`
+The goal of this example is to optimize the [voting webapp](https://github.com/thestormforge/examples/tree/master/voting-webapp) using a [Horizontal Pod Autoscaler (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+In this experiment, the load test is performed by [StormForge Performance Testing](https://www.stormforge.io/performance-testing/). This allows us to better simulate real-life traffic load on the application and show how to carefully tune the HPA to identify optimal deployment resource allocations.
-## Introduction
+## Prerequisites
-The goal of this example is to optimize the [voting webapp](https://github.com/thestormforge/examples/tree/master/voting-webapp) using a [Horizontal Pod Autoscaler (HPA)](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) for scaling up and down the `voting-service` deployment during a trial (load test).
+You must have a Kubernetes cluster. We recommend using a cluster with 4 nodes, 16 vCPUs (4 on each node) and 32GB of memory (8 on each node). Additionally, you will need a locally configured copy of `kubectl`.
-We show how to define such stormforge optimization experiment using two load test engine: [Locust](https://locust.io) and [StormForge Performance Testing](https://www.stormforge.io/performance-testing/)
+You will also need to initialize StormForge Optimize in your cluster: download a binary for your platform from the [installation guide](https://docs.stormforge.io/optimize/getting-started/install/) and run `stormforge init` (while connected to your cluster).
-## Prerequisites
+## Run the experiment
+### Deploy the voting webapp with ingress
-You must have a Kubernetes cluster. We recommend using a cluster with 4 nodes, 16 vCPUs (4 on each node) and 32GB of memory (8 on each node). Additionally, you will need a local configured copy of `kubectl`.
+Because the load test resides outside of the cluster, the voting webapp needs to be exposed with a publicly accessible IP address.
+
+Run:
+`kustomize build application | kubectl apply -f -`
+
+Once the IP address for the ingress is available, you can test the website by accessing the IP address in a web browser or using `curl`.
+
+Once the external IP address for the voting-service is ready, insert it in `sf-experiment/experiment.yaml` as the value for the `TARGET` env variable.
+
+### Set StormForge Performance Testing credentials
+Set your StormForge Performance Testing JWT in `sf-experiment/accessToken`.
+Replace the value of the `TEST_CASE` env variable with your test case, e.g., `my-organization/my-test-case-name`.
+
+
+### Launch an experiment
+
+Create the RBAC permissions:
+`stormforge generate rbac -f sf-experiment/experiment.yaml | kubectl apply -f -`
+
+Replace the namespace in `sf-experiment/kustomization.yaml` with the namespace in which you want to deploy.
+
+Launch the experiment:
+`kustomize build sf-experiment | kubectl apply -f -`
+
+### Monitor the experiment progress and results
+
+The best way to monitor the experiment progress is to use the [web-based dashboard](https://app.stormforge.io/).
+
+You can also access the status of the trials using the `kubectl` command line tool.
+ +``` +❯ kubectl get trials +NAME STATUS ASSIGNMENTS VALUES +hpa-sf-011-000 Completed memory=2098, cpu=1000, min_replicas=2, max_replicas=2, avg_utilization=50 cost=213.128192, p95-latency=4.96, p50-latency=3.67, p99-latency=6.12, error_ratio=1 +hpa-sf-011-001 Completed avg_utilization=41, cpu=1379, max_replicas=4, memory=1865, min_replicas=4 cost=282.692192, p95-latency=4.96, p50-latency=3.66, p99-latency=6.1, error_ratio=1 +hpa-sf-011-002 Completed avg_utilization=24, cpu=3983, max_replicas=3, memory=3913, min_replicas=1 cost=404.890192, p95-latency=4.96, p50-latency=3.66, p99-latency=6.09, error_ratio=1 +hpa-sf-011-003 Completed avg_utilization=63, cpu=3921, max_replicas=3, memory=550, min_replicas=1 cost=234.847192, p95-latency=4.94, p50-latency=3.64, p99-latency=6.03, error_ratio=1 +hpa-sf-011-004 Completed avg_utilization=68, cpu=423, max_replicas=2, memory=689, min_replicas=2 cost=185.056192, p95-latency=4.96, p50-latency=3.67, p99-latency=6.03, error_ratio=1 +hpa-sf-011-005 Completed avg_utilization=14, cpu=3628, max_replicas=4, memory=375, min_replicas=3 cost=354.94319200000007, p95-latency=4.95, p50-latency=3.66, p99-latency=5.98, error_ratio=1 +hpa-sf-011-006 Completed avg_utilization=68, cpu=3145, max_replicas=3, memory=390, min_replicas=1 cost=330.445192, p95-latency=4.95, p50-latency=3.66, p99-latency=6, error_ratio=1 +hpa-sf-011-007 Completed avg_utilization=53, cpu=4000, max_replicas=3, memory=128, min_replicas=1 cost=234.92419200000003, p95-latency=4.96, p50-latency=3.67, p99-latency=6.05, error_ratio=1 +``` + +## Technical Process +The experiment is fully automated as defined in the experiment.yaml + +In the experiment spec, you can see the parameters we are using for our experiment. 
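+As an aside, the trial listing above can also be examined offline. The snippet below (costs transcribed by hand from the `kubectl get trials` output; it is not part of the StormForge tooling) picks out the cheapest completed trial:
+
+```python
+# Cost metric per completed trial, transcribed from the listing above.
+costs = {
+    "hpa-sf-011-000": 213.128192,
+    "hpa-sf-011-001": 282.692192,
+    "hpa-sf-011-002": 404.890192,
+    "hpa-sf-011-003": 234.847192,
+    "hpa-sf-011-004": 185.056192,
+    "hpa-sf-011-005": 354.943192,
+    "hpa-sf-011-006": 330.445192,
+    "hpa-sf-011-007": 234.924192,
+}
+
+# Find the trial with the lowest recorded cost.
+cheapest = min(costs, key=costs.get)
+print(cheapest)  # hpa-sf-011-004
+```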
+
+```
+spec:
+  parameters:
+  - name: memory
+    baseline: 2098
+    min: 128
+    max: 4096
+  - name: cpu
+    baseline: 1000
+    min: 100
+    max: 4000
+  - name: min_replicas
+    baseline: 2
+    min: 1
+    max: 4
+  - name: max_replicas
+    baseline: 2
+    min: 1
+    max: 4
+  - name: avg_utilization
+    baseline: 50
+    min: 10
+    max: 80
+```
+
+
+Because it does not make sense for the Machine Learning engine to experiment with a deployment where `min_replicas` is larger than `max_replicas`, we can provide guidelines to the Machine Learning engine in our experiment file by declaring constraints.
+
+```
+  constraints:
+  - order:
+      lowerParameter: min_replicas
+      upperParameter: max_replicas
+```
+
+You can find more details on constraints [here](https://docs.stormforge.io/experiment/parameters/#parameter-constraints).
+
+Next, we need to define the metrics or objectives we are optimizing for.
+
+```
+  metrics:
+  - name: cost
+    type: prometheus
+    port: 9090
+    minimize: true
+    query: ({{ cpuRequests . "" }} * 17) + ({{ memoryRequests . "" | GB }} * 3)
+  - name: p95-latency
+    type: prometheus
+    port: 9090
+    minimize: true
+    query: scalar(percentile_95{job="trialRun",instance="{{ .Trial.Name }}"})
+  - name: p50-latency
+    type: prometheus
+    port: 9090
+    minimize: true
+    optimize: false
+    query: scalar(median{job="trialRun",instance="{{ .Trial.Name }}"})
+  - name: p99-latency
+    type: prometheus
+    port: 9090
+    minimize: true
+    optimize: false
+    query: scalar(percentile_99{job="trialRun",instance="{{ .Trial.Name }}"})
+  - name: error_ratio
+    type: prometheus
+    port: 9090
+    minimize: true
+    optimize: false
+    query: scalar(error_ratio{job="trialRun",instance="{{ .Trial.Name }}"})
+```
+
+Please note that the cost is calculated from the CPU and memory requested during the trial (the `cpuRequests` and `memoryRequests` terms in the cost query above).
+
+Finally, we define our patches and our trial template.
+
+```
+  patches:
+  - targetRef:
+      name: voting-hpa
+      apiVersion: autoscaling/v2beta2
+      kind: HorizontalPodAutoscaler
+    patch: |
+      spec:
+        maxReplicas: {{ .Values.max_replicas }}
+        minReplicas: {{ .Values.min_replicas }}
+        metrics:
+        - type: Resource
+          resource:
+            name: cpu
+            target:
+              type: Utilization
+              averageUtilization: {{ .Values.avg_utilization }}
+  - targetRef:
+      name: voting-service
+      apiVersion: apps/v1
+      kind: Deployment
+    patch: |
+      spec:
+        template:
+          spec:
+            containers:
+            - name: voting-service
+              resources:
+                limits:
+                  memory: '{{ .Values.memory }}M'
+                  cpu: '{{ .Values.cpu }}m'
+                requests:
+                  memory: '{{ .Values.memory }}M'
+                  cpu: '{{ .Values.cpu }}m'
+
+  trialTemplate:
+    metadata:
+      labels:
+        stormforge.io/application: hpa-sf
+        stormforge.io/scenario: standard
+    spec:
+      jobTemplate:
+        metadata:
+          labels:
+            stormforge.io/application: hpa-sf
+            stormforge.io/scenario: standard
+        spec:
+          template:
+            metadata:
+              labels:
+                stormforge.io/application: hpa-sf
+                stormforge.io/scenario: standard
+            spec:
+              containers:
+              - name: stormforger
+                image: thestormforge/optimize-trials:v0.0.1-stormforger
+```
+
+These patches adjust the `voting-service` deployment's CPU and memory allocation as well as the replica bounds and target utilization of the HPA. Note also the custom `thestormforge/optimize-trials:v0.0.1-stormforger` container image used for load generation. We can validate the deployment patch by describing a `voting-service` pod, and verify the trial settings by describing the trial:
+
+```
+kubectl describe pod voting-service-595f79c587-f6fpt
+Name:         voting-service-595f79c587-f6fpt
+...
+Containers:
+  voting-service:
+    Container ID:   containerd://96d626db8486942208e0d279831088c4c993644f7933a9b3b6b627124b15dae2
+    Image:          dockersamples/examplevotingapp_vote
+    Image ID:       docker.io/dockersamples/examplevotingapp_vote@sha256:b4e60557febfed6d345a09e5dce52aeeff997b7c16a64428ccf5f3d8f3c60dde
+    Port:           80/TCP
+    Host Port:      0/TCP
+    State:          Running
+      Started:      Fri, 17 Dec 2021 09:06:27 -0600
+    Ready:          False
+    Restart Count:  0
+    Limits:
+      cpu:     422m
+      memory:  380M
+    Requests:
+      cpu:     422m
+      memory:  380M
+    Readiness:  http-get http://:80/ delay=5s timeout=1s period=5s #success=1 #failure=3
+    Environment:
+    Mounts:
+      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6bs7q (ro)
+...
+```
+```
+kubectl describe trial hpa-sf-011-008
+Name:         hpa-sf-011-008
+Namespace:    default
+Labels:       stormforge.io/application=hpa-sf
+              stormforge.io/experiment=hpa-sf-011
+              stormforge.io/scenario=standard
+Annotations:  stormforge.io/report-trial-url: https://api.stormforge.dev/v1/experiments/hpa-sf-011/trials/8
+API Version:  optimize.stormforge.io/v1beta2
+Kind:         Trial
+...
+  Target Ref:
+    API Version:  apps/v1
+    Kind:         Deployment
+    Name:         voting-service
+    Namespace:    default
+  Start Time:     2021-12-22T23:47:51Z
+  Values:         cost=237.33619200000004, p95-latency=4.96, p50-latency=3.67, p99-latency=6.13, error_ratio=1
+Events:
+```
+
+## Results
+The image below shows that the Machine Learning engine has recommended trial #163. With this trial, we can see a 100% reduction in error ratio
+compared to our baseline in trial #0.
+
+
+
+In this image, we can see all of our trials, with the recommended trial highlighted.
+
+
+
+And finally, we can get the parameter settings or export the configuration itself:
-Additionally, you will need a local configured copy of `kubectl` and to initialize StormForge Optimize in your cluster.
You can download a binary for your platform from the [installation guide](https://docs.stormforge.io/getting-started/install/) and run `stormforge init` (while connected to your cluster). + \ No newline at end of file diff --git a/hpa/sf-perftest/application/hpa.yaml b/hpa/application/hpa.yaml similarity index 100% rename from hpa/sf-perftest/application/hpa.yaml rename to hpa/application/hpa.yaml diff --git a/hpa/sf-perftest/application/ingress.yaml b/hpa/application/ingress.yaml similarity index 100% rename from hpa/sf-perftest/application/ingress.yaml rename to hpa/application/ingress.yaml diff --git a/hpa/sf-perftest/application/kustomization.yaml b/hpa/application/kustomization.yaml similarity index 100% rename from hpa/sf-perftest/application/kustomization.yaml rename to hpa/application/kustomization.yaml diff --git a/hpa/sf-perftest/application/node-port-patch.yaml b/hpa/application/node-port-patch.yaml similarity index 100% rename from hpa/sf-perftest/application/node-port-patch.yaml rename to hpa/application/node-port-patch.yaml diff --git a/hpa/img/results1.png b/hpa/img/results1.png new file mode 100644 index 0000000..b4c9a2d Binary files /dev/null and b/hpa/img/results1.png differ diff --git a/hpa/img/results2.png b/hpa/img/results2.png new file mode 100644 index 0000000..02be67b Binary files /dev/null and b/hpa/img/results2.png differ diff --git a/hpa/img/results3.png b/hpa/img/results3.png new file mode 100644 index 0000000..7361158 Binary files /dev/null and b/hpa/img/results3.png differ diff --git a/hpa/locust/README.md b/hpa/locust/README.md deleted file mode 100644 index 1204ad6..0000000 --- a/hpa/locust/README.md +++ /dev/null @@ -1,54 +0,0 @@ -# HPA optimization using Locust - -To allow the controller to patch the deployments and the HPA during the experiment, generate the proper RBAC permissions by running the following: -`stormforge generate rbac -f experiment.yaml | kubectl apply -f -` - -The `experiment.yaml` file is the actual experiment 
object manifest; this includes the definition of the experiment itself (in terms of assignable parameters and observable metrics) and the instructions for carrying out the experiment. - -## Experiment lifecycle - -For each trial, we create a locust load test using the [locust trial pod](https://github.com/thestormforge/optimize-trials/tree/main/locust) and `locustfile.py`. - -Environment variables of the trial pod are used to configure the load test. The load test is 180 seconds long, uses 500 clients that are added at a rate of 50 client/second. -``` -containers: -- env: - - name: HOST - value: http://voting-service - - name: NUM_USERS - value: "500" - - name: SPAWN_RATE - value: "50" - - name: RUN_TIME - value: "180" - image: thestormforge/optimize-trials:v0.0.1-locust - name: locust -``` - -You can increase this rate to make sure that your HPA scales fast enough to increase in load. -[However, with a kubernetes version lower than `v1.18` you cannot change the scaling policies through the `v2beta2` API](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/). In this experiment, we are going to optimize for the `target CPU utilization` that the HPA uses to scale the `voting-service` deployment. - -We optimize (minimization) for `cost` and `p95-latency`. We also track `failures_per_s` without optimizing for it. We set a max latency of 500 milliseconds. -``` -- minimize: true - name: p95-latency - query: scalar(p95{job="trialRun",instance="{{ .Trial.Name }}"}) - type: prometheus - max: "500" -``` -If locust measures an p95 latency over the length of the load test (aka trial) higher than this value, we will report this trial as failed. Similarly, we set a maximum for `failures_per_s` we want a trial to report. -``` -- minimize: true - name: failures_per_s - optimize: false - query: scalar(failures_per_s{job="trialRun",instance="{{ .Trial.Name }}"}) - type: prometheus -``` -An error rate higher than this value will fail the trial. 
- -Launch the experiment using: -`kustomize build . | kubect apply -n -f -` - -You can visualize the progress of the experiment at `https://app.stormforge.io`. You should see something similar to this: - -![](hpa-results.png) diff --git a/hpa/locust/experiment.yaml b/hpa/locust/experiment.yaml deleted file mode 100644 index 8427205..0000000 --- a/hpa/locust/experiment.yaml +++ /dev/null @@ -1,189 +0,0 @@ -apiVersion: optimize.stormforge.io/v1beta2 -kind: Experiment -metadata: - name: hpa-example - labels: - stormforge.io/application: 'hpa' - stormforge.io/scenario: 'locust' -spec: - parameters: - - name: voting_cpu - baseline: 400 - min: 100 - max: 1000 - - name: min_replicas - baseline: 1 - min: 1 - max: 4 - - name: max_replicas - baseline: 2 - min: 1 - max: 4 - - name: avg_utilization - baseline: 50 - min: 10 - max: 80 - constraints: - - order: - lowerParameter: min_replicas - upperParameter: max_replicas - metrics: - - name: p95-latency - type: prometheus - max: "500" - minimize: true - query: scalar(p95{job="trialRun",instance="{{ .Trial.Name }}"}) - - name: cost-gcp - type: prometheus - minimize: true - query: ({{ cpuRequests . "" }} * 17) + ({{ memoryRequests . 
"" | GB }} * 2) - - name: failures_per_s - type: prometheus - minimize: true - optimize: false - query: scalar(failures_per_s{job="trialRun",instance="{{ .Trial.Name }}"}) - patches: - - targetRef: - name: voting-service - apiVersion: apps/v1 - kind: Deployment - patch: | - spec: - replicas: {{ .Values.min_replicas }} - template: - spec: - containers: - - name: voting-service - resources: - limits: - cpu: "{{ .Values.voting_cpu }}m" - memory: "250Mi" - requests: - cpu: "{{ .Values.voting_cpu }}m" - memory: "250Mi" - - targetRef: - name: voting-hpa - apiVersion: autoscaling/v2beta2 - kind: HorizontalPodAutoscaler - patch: | - spec: - maxReplicas: {{ .Values.max_replicas }} - minReplicas: {{ .Values.min_replicas }} - metrics: - - type: Resource - resource: - name: cpu - target: - type: Utilization - averageUtilization: {{ .Values.avg_utilization }} - trialTemplate: - spec: - jobTemplate: - spec: - template: - spec: - containers: - - name: locust - image: thestormforge/optimize-trials:v0.0.1-locust - env: - - name: HOST - value: http://voting-service - - name: NUM_USERS - value: "500" - - name: SPAWN_RATE - value: "50" - - name: RUN_TIME - value: "180" - resources: - requests: - cpu: "1" - volumeMounts: - - name: locustfile - readOnly: true - mountPath: /mnt/locust - volumes: - - name: locustfile - configMap: - name: locustfile - setupServiceAccountName: stormforge-setup - setupTasks: - - name: monitoring - args: - - prometheus - - $(MODE) ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: stormforge-setup - labels: - stormforge.io/application: votingapp ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: stormforge-prometheus - labels: - stormforge.io/application: votingapp -rules: -- resources: - - clusterroles - - clusterrolebindings - apiGroups: - - rbac.authorization.k8s.io - verbs: - - get - - create - - delete -- resources: - - serviceaccounts - - services - - configmaps - apiGroups: - - "" - verbs: - - get - - create 
- - delete -- resources: - - deployments - apiGroups: - - apps - verbs: - - get - - create - - delete - - list - - watch -- resources: - - nodes - - nodes/metrics - - nodes/proxy - - services - apiGroups: - - "" - verbs: - - list - - watch - - get -- resources: - - pods - apiGroups: - - "" - verbs: - - list - - watch ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - name: stormforge-setup-prometheus - labels: - stormforge.io/application: votingapp -roleRef: - name: stormforge-prometheus - kind: ClusterRole - apiGroup: rbac.authorization.k8s.io -subjects: -- name: stormforge-setup - kind: ServiceAccount diff --git a/hpa/locust/hpa-results.png b/hpa/locust/hpa-results.png deleted file mode 100644 index 0d6bf7f..0000000 Binary files a/hpa/locust/hpa-results.png and /dev/null differ diff --git a/hpa/locust/hpa.yaml b/hpa/locust/hpa.yaml deleted file mode 100644 index ef93622..0000000 --- a/hpa/locust/hpa.yaml +++ /dev/null @@ -1,18 +0,0 @@ -apiVersion: autoscaling/v2beta2 -kind: HorizontalPodAutoscaler -metadata: - name: voting-hpa -spec: - maxReplicas: 10 - minReplicas: 3 - scaleTargetRef: - apiVersion: apps/v1 - kind: Deployment - name: voting-service - metrics: - - type: Resource - resource: - name: cpu - target: - type: Utilization - averageUtilization: 30 diff --git a/hpa/locust/kustomization.yaml b/hpa/locust/kustomization.yaml deleted file mode 100644 index a97ae3a..0000000 --- a/hpa/locust/kustomization.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: kustomize.config.k8s.io/v1beta1 -kind: Kustomization - -namespace: recipe - -resources: - - ../../webserver/voting-webapp/application - - hpa.yaml - - experiment.yaml - -configMapGenerator: -- name: locustfile - files: - - locustfile.py - -generatorOptions: - disableNameSuffixHash: true diff --git a/hpa/locust/locustfile.py b/hpa/locust/locustfile.py deleted file mode 100644 index 7a5dc92..0000000 --- a/hpa/locust/locustfile.py +++ /dev/null @@ -1,19 +0,0 @@ -import os - -from 
uuid import uuid4
-import random
-from locust import HttpUser, task, between
-
-
-CAT_FRACTION = os.getenv("CAT_FRACTION", 0.25)
-
-
-class MyUser(HttpUser):
-    wait_time = between(5, 10)
-    @task
-    def vote(self):
-        voter_id = uuid4().hex
-        vote = "a" if (random.uniform(0, 1) < CAT_FRACTION) else "b"
-        self.client.post("/",
-            cookies={"voter_id": voter_id},
-            data={"vote": vote})
diff --git a/hpa/sf-experiment/accessToken b/hpa/sf-experiment/accessToken
new file mode 100644
index 0000000..45fa2af
--- /dev/null
+++ b/hpa/sf-experiment/accessToken
@@ -0,0 +1 @@
+[add your token here]
\ No newline at end of file
diff --git a/hpa/sf-perftest/sf-experiment/experiment.yaml b/hpa/sf-experiment/experiment.yaml
similarity index 96%
rename from hpa/sf-perftest/sf-experiment/experiment.yaml
rename to hpa/sf-experiment/experiment.yaml
index 27e8e7b..f385123 100644
--- a/hpa/sf-perftest/sf-experiment/experiment.yaml
+++ b/hpa/sf-experiment/experiment.yaml
@@ -1,7 +1,7 @@
 apiVersion: optimize.stormforge.io/v1beta2
 kind: Experiment
 metadata:
-  name: hpa-sf
+  name: hpa-sf-011
   labels:
     stormforge.io/application: hpa-sf
     stormforge.io/scenario: standard
@@ -37,16 +37,10 @@ spec:
       port: 9090
       minimize: true
       query: ({{ cpuRequests . "" }} * 17) + ({{ memoryRequests . 
"" | GB }} * 3) - - name: error_ratio - type: prometheus - port: 9090 - minimize: true - query: scalar(error_ratio{job="trialRun",instance="{{ .Trial.Name }}"}) - name: p95-latency type: prometheus port: 9090 minimize: true - optimize: false query: scalar(percentile_95{job="trialRun",instance="{{ .Trial.Name }}"}) - name: p50-latency type: prometheus @@ -57,10 +51,16 @@ spec: - name: p99-latency type: prometheus port: 9090 - max: "1000" + # max: "1000" # RW minimize: true optimize: false query: scalar(percentile_99{job="trialRun",instance="{{ .Trial.Name }}"}) + - name: error_ratio + type: prometheus + port: 9090 + minimize: true + optimize: false + query: scalar(error_ratio{job="trialRun",instance="{{ .Trial.Name }}"}) patches: - targetRef: name: voting-hpa @@ -121,11 +121,11 @@ spec: fieldRef: fieldPath: metadata.name - name: TEST_CASE - value: sf_sandbox/hpa-sf-standard + value: sf_sandbox/hpa-sf-richard - name: TEST_CASE_FILE value: /forge-init.d/testcase.js - name: TARGET - value: http://my-url-example.com + value: http://35.244.255.189/ - name: STORMFORGER_JWT valueFrom: secretKeyRef: @@ -222,3 +222,4 @@ roleRef: subjects: - name: stormforge-setup kind: ServiceAccount + namespace: default diff --git a/hpa/sf-perftest/sf-experiment/kustomization.yaml b/hpa/sf-experiment/kustomization.yaml similarity index 94% rename from hpa/sf-perftest/sf-experiment/kustomization.yaml rename to hpa/sf-experiment/kustomization.yaml index 13b0005..230cdaf 100644 --- a/hpa/sf-perftest/sf-experiment/kustomization.yaml +++ b/hpa/sf-experiment/kustomization.yaml @@ -1,7 +1,7 @@ apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization -namespace: hpa-sf +namespace: default resources: # - sfjwt.yaml diff --git a/hpa/sf-perftest/sf-experiment/testcase.js b/hpa/sf-experiment/testcase.js similarity index 100% rename from hpa/sf-perftest/sf-experiment/testcase.js rename to hpa/sf-experiment/testcase.js diff --git a/hpa/sf-perftest/README.md b/hpa/sf-perftest/README.md deleted file 
mode 100644 index a6f25f2..0000000 --- a/hpa/sf-perftest/README.md +++ /dev/null @@ -1,29 +0,0 @@ -# Voting Webapp with HPA optimized using StormForge Performance Testing - -The goal of this recipe is to optimize the HPA used for the voting webapp. -In this experiment the load test is performed by [StormForge Performance Testing](https://www.stormforge.io/performance-testing/). This allows to generate much heavier load on the website and show how carefully tuning HPA along with the deployed application allows to handle such traffic. - -## Deploy the voting webapp with ingress - -Because the load test resides outside of the cluster, the voting webapp needs to be exposed publicly. - -Run: -`kustomize build application | kubectl apply -f -` - -Once the IP address for the ingress is available you can test the website by accessing the IP address in a web browser or using curl. -Write - -Once the external IP address for the voting-service is ready insert it in the `sf-experiment/experiment.yaml` as the value for the `TARGET` env variable. - -## Insert your StormForge Performance Testing credentials -Insert your StormForge Performance Testing JWT in `sf-experiment/acessToken` -Replace the value of the `TEST_CASE` env variable with your test case e.g.,`my-organization/my-test-case-name`. - - -## Create an experiment - -Create the RBAC permission -`stormforge generate rbac -f sf-experiment/experiment.yaml | kubectl apply -f -` -Replace the namespace in `sf-experiment/kustomization.yaml` with the namespace in which you want to deploy. -Launch the experiment -`kustomize build sf-experiment | kubectl apply -f` diff --git a/hpa/sf-perftest/sf-experiment/accessToken b/hpa/sf-perftest/sf-experiment/accessToken deleted file mode 100644 index 90a1d60..0000000 --- a/hpa/sf-perftest/sf-experiment/accessToken +++ /dev/null @@ -1 +0,0 @@ -... \ No newline at end of file