2 changes: 1 addition & 1 deletion charts/ojp-server/Chart.yaml
@@ -10,5 +10,5 @@ maintainers:
url: https://github.com/petruki

type: application
version: 0.1.4
version: 0.1.5
appVersion: "0.1.0-beta"
53 changes: 51 additions & 2 deletions charts/ojp-server/README.md
@@ -2,6 +2,39 @@

Deploy OJP Server using `ojp/ojp-server` Helm Charts.

## Architecture

The OJP Server Helm chart uses a **StatefulSet** deployment model with individual per-pod services, allowing each OJP instance to be individually addressable. By default, the chart creates:
- 3 replicas (configurable via `replicaCount`)
- A headless service for StatefulSet pod discovery
- Individual LoadBalancer services for each pod (e.g., `ojp-server-0`, `ojp-server-1`, `ojp-server-2`)

This architecture ensures stable network identities and allows direct access to specific OJP instances.
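As a quick sanity check, the full set of services a default install creates can be listed with `kubectl`. The release name `ojp-server`, namespace `ojp`, and the standard Helm instance label are assumptions here — adjust them to match your install:

```console
# Expect the headless service plus one per-pod service (ojp-server-0..2)
kubectl -n ojp get svc -l app.kubernetes.io/instance=ojp-server
```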

### Why Per-Pod Services?

Each OJP instance gets its own LoadBalancer service to enable:
- **Direct addressability**: Clients can connect to a specific OJP instance by its unique DNS name or external IP
- **Stable network identity**: Each pod has a predictable, persistent DNS name (e.g., `ojp-server-0.ojp-server.namespace.svc.cluster.local`)
- **Independent external access**: Each instance receives its own external IP address via LoadBalancer
- **Connection affinity**: Clients that require a persistent connection to the same instance can maintain it reliably

### DNS Names and Connectivity

**Internal (within cluster):**
- Pods accessible via StatefulSet DNS: `ojp-server-0.ojp-server.namespace.svc.cluster.local`
- Individual pods: `ojp-server-{0,1,2}.ojp-server.namespace.svc.cluster.local`

**External (outside cluster):**
- LoadBalancer services: `ojp-server-0`, `ojp-server-1`, `ojp-server-2` (each gets an external IP)
- Access via: `<external-ip>:1059`

**Ports:**
- **Port 1059**: Main OJP server port - clients connect here for OJP functionality
- **Port 9090**: Prometheus metrics port - for monitoring/observability only
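To verify that the internal DNS names resolve, a throwaway pod works well. This sketch assumes namespace `ojp`; `busybox` is just one convenient image with `nslookup`:

```console
kubectl -n ojp run dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup ojp-server-0.ojp-server.ojp.svc.cluster.local
```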

**Note:** LoadBalancer is the default service type for cloud environments (AWS, GCP, Azure). For on-premise deployments, use `NodePort` instead by setting `service.perPodService.type: NodePort`.
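For example, an on-premise install could switch the per-pod services to NodePort at install time (release name and namespace are illustrative):

```console
helm install ojp-server ojp/ojp-server --namespace ojp --create-namespace \
  --set service.perPodService.type=NodePort
```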

## Usage
Install OJP Server
```console
@@ -21,11 +54,27 @@ Uninstall OJP Server
helm uninstall ojp-server --namespace ojp
```

## Configuration

### Deployment Parameters
| Name | Description | Value |
| -------------------------- | ---------------------------------------------- | ---------------------- |
| `replicaCount` | Number of OJP Server replicas | `3` |
| `autoscaling.enabled` | Enable autoscaling (overrides replicaCount, disables per-pod services) | `false` |

### Service Parameters
| Name | Description | Value |
| -------------------------- | ---------------------------------------------- | ---------------------- |
| `service.type` | Service type (always ClusterIP for headless service) | `ClusterIP` |
| `service.port` | OJP Server service port (main application port for client connections) | `1059` |
| `service.perPodService.enabled` | Enable individual per-pod services (disabled when autoscaling is enabled) | `true` |
| `service.perPodService.type` | Type for per-pod services - LoadBalancer (cloud) or NodePort (on-premise) | `LoadBalancer` |
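Put together, a minimal override file for an on-premise deployment might look like this (a sketch covering only the parameters above, not a complete values file):

```yaml
# values-onprem.yaml (hypothetical override file)
replicaCount: 3
service:
  port: 1059
  perPodService:
    enabled: true
    type: NodePort  # NodePort instead of LoadBalancer for on-premise clusters
```

Apply it with `helm install ojp-server ojp/ojp-server --namespace ojp -f values-onprem.yaml`.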

### App parameters
| Name | Description | Value |
| -------------------------- | ---------------------------------------------- | ---------------------- |
| `server.port` | OJP Server Port | `1059` |
| `server.prometheusPort` | OJP Server Prometheus Port | `9159` |
| `server.port` | OJP Server Port (main application port for client connections) | `1059` |
| `server.prometheusPort` | OJP Server Prometheus Port (metrics only, not for client connections) | `9090` |
| `server.threadPoolSize` | OJP Server Thread Pool Size | `200` |
| `server.maxRequestSize` | OJP Server Max Request Size | `4194304` |
| `server.connectionIdleTimeout` | OJP Server Connection Idle Timeout | `30000` |
25 changes: 21 additions & 4 deletions charts/ojp-server/templates/NOTES.txt
@@ -1,10 +1,27 @@
Test the installation with the steps below:

1. Port-forward Server service:
1. Port-forward to a specific OJP Server pod:

{{- if .Values.server.opentelemetry.enabled }}
kubectl -n {{ .Release.Namespace }} port-forward svc/ojp-server {{ .Values.server.prometheusPort }}:{{ .Values.server.prometheusPort }} &
kubectl -n {{ .Release.Namespace }} port-forward {{ include "ojp-server.fullname" . }}-0 {{ .Values.server.prometheusPort }}:{{ .Values.server.prometheusPort }} &
{{- end }}
kubectl -n {{ .Release.Namespace }} port-forward svc/ojp-server {{ .Values.service.port }}:{{ .Values.service.port }} &
kubectl -n {{ .Release.Namespace }} port-forward {{ include "ojp-server.fullname" . }}-0 {{ .Values.service.port }}:{{ .Values.service.port }} &

2. Happy Proxying with OJP!
{{- if .Values.service.perPodService.enabled }}

2. Access individual OJP Server instances via their per-pod services:

{{- if eq .Values.service.perPodService.type "LoadBalancer" }}
# Get LoadBalancer external IPs for each pod
{{- range $i := until (int .Values.replicaCount) }}
kubectl -n {{ $.Release.Namespace }} get svc {{ include "ojp-server.fullname" $ }}-{{ $i }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
{{- end }}
{{- else if eq .Values.service.perPodService.type "NodePort" }}
# Get NodePort for each pod service
{{- range $i := until (int .Values.replicaCount) }}
kubectl -n {{ $.Release.Namespace }} get svc {{ include "ojp-server.fullname" $ }}-{{ $i }} -o jsonpath='{.spec.ports[0].nodePort}'
{{- end }}
{{- end }}
{{- end }}

3. Happy Proxying with OJP!
40 changes: 40 additions & 0 deletions charts/ojp-server/templates/per-pod-service.yaml
@@ -0,0 +1,40 @@
{{- /*
Per-pod services are only created when autoscaling is disabled because:
- Autoscaling dynamically changes the number of pods
- Per-pod services are created based on a static replicaCount value
- This prevents service/pod mismatches (missing services for new pods or orphaned services)
*/ -}}
{{- if and .Values.service.perPodService.enabled (not .Values.autoscaling.enabled) }}
{{- $fullName := include "ojp-server.fullname" . -}}
{{- $serviceType := .Values.service.perPodService.type -}}
{{- $serverPort := .Values.server.port -}}
{{- $prometheusPort := .Values.server.prometheusPort -}}
{{- $labels := include "ojp-server.labels" . -}}
{{- $selectorLabels := include "ojp-server.selectorLabels" . -}}
{{- $namespace := .Release.Namespace -}}
{{- range $i := until (int .Values.replicaCount) }}
---
apiVersion: v1
kind: Service
metadata:
namespace: {{ $namespace }}
name: {{ $fullName }}-{{ $i }}
labels:
{{- $labels | nindent 4 }}
statefulset.kubernetes.io/pod-name: {{ $fullName }}-{{ $i }}
spec:
type: {{ $serviceType }}
ports:
- port: {{ $serverPort }}
targetPort: http
protocol: TCP
name: http
- port: {{ $prometheusPort }}
targetPort: prometheus
protocol: TCP
name: prometheus
selector:
{{- $selectorLabels | nindent 4 }}
statefulset.kubernetes.io/pod-name: {{ $fullName }}-{{ $i }}
{{- end }}
{{- end }}
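To preview the services this template renders without installing anything, `helm template` with `--show-only` is handy (the chart path and release name are illustrative):

```console
helm template ojp-server ./charts/ojp-server --namespace ojp \
  --set replicaCount=2 \
  --show-only templates/per-pod-service.yaml
```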
4 changes: 3 additions & 1 deletion charts/ojp-server/templates/service.yaml
@@ -6,7 +6,9 @@ metadata:
labels:
{{- include "ojp-server.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
# StatefulSets require a headless service (clusterIP: None) for pod DNS resolution
# The service.type value is ignored for StatefulSet services
clusterIP: None
ports:
- port: {{ .Values.service.port }}
targetPort: http
@@ -1,5 +1,5 @@
apiVersion: apps/v1
kind: Deployment
kind: StatefulSet
metadata:
namespace: {{ .Release.Namespace }}
name: {{ include "ojp-server.fullname" . }}
@@ -9,6 +9,7 @@ spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
serviceName: {{ include "ojp-server.fullname" . }}
selector:
matchLabels:
{{- include "ojp-server.selectorLabels" . | nindent 6 }}
11 changes: 9 additions & 2 deletions charts/ojp-server/values.yaml
@@ -1,4 +1,4 @@
replicaCount: 1
replicaCount: 3

server:
port: 1059
@@ -47,8 +47,15 @@ securityContext: {}
# runAsUser: 1000

service:
# Main headless service for StatefulSet (always clusterIP: None)
type: ClusterIP
port: 1059
port: 1059 # Main OJP server port for client connections
# Per-pod services allow individual addressability for each StatefulSet pod
# Each pod gets its own LoadBalancer with external IP for direct access
# Note: Per-pod services are only created when autoscaling is disabled
perPodService:
> **Review comment (Collaborator):** @copilot Does this mean that each pod of the OJP server gets its own LoadBalancer in front? What is the advantage here? Is it just to have a DNS name?
>
> **Reply (Contributor, author):** Yes, each pod gets its own LoadBalancer. The advantages are:
>
> - Direct addressability: clients can connect to a specific instance
> - Stable identity: each pod has a predictable DNS name
> - Independent external IPs: each instance is accessible via a unique external IP
> - Connection affinity: clients that need persistent connections can stay on the same instance
>
> This enables use cases where clients must maintain connections to specific OJP instances rather than load-balanced connections. See the new "Why Per-Pod Services?" section in the README (commit b2231fd).
enabled: true
type: LoadBalancer # LoadBalancer for cloud (AWS/GCP/Azure), NodePort for on-premise

resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious