Commit 06b5502

Merge branch 'main' into www_readme
2 parents: 8f1bcd0 + 87c48df

13 files changed: +350 -20 lines

examples/hackernews/README.md

Lines changed: 55 additions & 3 deletions
@@ -4,8 +4,18 @@ This example is made of a Skip reactive service (in `reactive_service/`), a
 Flask web service (in `web_service/`), a React front-end (in `www/`), a HAProxy
 reverse-proxy (in `reverse_proxy/`), and a PostgreSQL database (in `db/`).
 
-In order to run it, do:
-```
+We provide configurations to run it using either Docker Compose or Kubernetes.
+The Docker Compose version is simpler and easier if you just want to get
+started with as few dependencies as possible; the Kubernetes version may be
+useful for users who are either already using Kubernetes for other deployments
+or require elastic horizontal scaling of their Skip service.
+
+## Docker Compose
+
+To build and run the application using Docker Compose, first install and run
+Docker on your system, then run:
+
+```bash
 $ docker compose up --build
 ```
 
@@ -17,10 +27,52 @@ of computing and maintaining resources in a round-robin fashion.
 
 This distributed configuration requires only configuration changes, is
 transparent to clients, and can be run with:
-```
+```bash
 $ docker compose -f compose.distributed.yml up --build
 ```
 
+## Kubernetes
+
+To run the application in a local Kubernetes cluster, you'll need several other
+prerequisites in addition to Docker. Perform the following steps to build,
+deploy, and run the full application (in a distributed leader-follower
+configuration) on a local Kubernetes cluster.
+
+1. Install [`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl)
+   (configuration tool to talk to a running cluster),
+   [`helm`](https://helm.sh/docs/intro/install/) (Kubernetes package manager)
+   and [`minikube`](https://minikube.sigs.k8s.io/docs/start) (local Kubernetes
+   cluster), and initialize a cluster with `minikube start`.
+
+2. Enable the local Docker `registry` addon so that `minikube` can use
+   locally-built images: `minikube addons enable registry`, then expose its
+   port 5000: `docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000" &`
+
+3. Build Docker images for each component of this example, then tag and publish
+   each one to the `minikube` registry:
+   ```bash
+   docker compose -f kubernetes/compose.distributed.yml build
+   for image in web-service reactive-service www db ; do
+     docker tag reactive-hackernews/$image localhost:5000/$image;
+     docker push localhost:5000/$image;
+   done
+   ```
+
+4. Deploy these images to your local Kubernetes cluster: `kubectl apply -f 'kubernetes/*.yaml'`
+
+5. Configure and run HAProxy as a Kubernetes ingress controller, mediating
+   external traffic ("ingress") and distributing it to the relevant Kubernetes
+   service(s):
+   ```bash
+   kubectl create configmap haproxy-auxiliary-configmap --from-file kubernetes/haproxy-aux.cfg
+   helm install haproxy haproxytech/kubernetes-ingress -f reverse_proxy/kubernetes.yaml
+   ```
+
+6. Run `minikube service haproxy-kubernetes-ingress` to open a tunnel to the
+   now-running ingress service, then point your browser at the host/port it
+   prints to see the application up and running!
+
 ### Overall System Design with optional leader/followers

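A quick way to check the deployment after step 6 above (a minimal sketch; the resource names come from the manifests added in this commit, and the `haproxy-kubernetes-ingress` Service is created by the Helm chart):

```bash
# All pods should reach Running/Ready: rhn-pg, rhn-web, rhn-www, rhn-skip-0..3,
# plus the HAProxy ingress controller.
kubectl get pods
kubectl get svc rhn-pg rhn-web rhn-www rhn-skip haproxy-kubernetes-ingress
# Print the externally reachable URL(s) without opening a browser.
minikube service haproxy-kubernetes-ingress --url
```
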
Lines changed: 12 additions & 0 deletions (new file: HAProxy backend definitions for the Skip control and streaming APIs, presumably the `kubernetes/haproxy-aux.cfg` referenced in step 5 above)

backend skip_control
  mode http
  http-request set-path %[path,regsub(^/control/,/v1/)]
  use-server leader if { path_beg -i /v1/inputs/ }
  # placeholder address for leader; will be overwritten by actual leader on startup
  server leader localhost:8081 weight 0
  balance roundrobin

backend skip_stream
  mode http
  http-request set-path /v1/streams/%[path,field(4,/)]
  use-server %[req.hdr(Follower-Prefix)] if TRUE

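Both backends start out with no follower servers; the Skip pods register themselves at runtime through HAProxy's stats socket (see the StatefulSet startup script further down). A sketch of inspecting that state, assuming the stats socket is exposed on port 9999 of the ingress service (as in the `global-config-snippet` below) and that the commands are run from a pod inside the cluster with `socat` installed:

```bash
# List the servers currently registered in the Skip control and streaming backends.
echo "show servers state skip_control" | socat stdio tcp4-connect:haproxy-kubernetes-ingress:9999
echo "show servers state skip_stream" | socat stdio tcp4-connect:haproxy-kubernetes-ingress:9999
```
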
Lines changed: 38 additions & 0 deletions (new file: ConfigMap for the HAProxy ingress controller and the Kubernetes Ingress rules)

apiVersion: v1
kind: ConfigMap
metadata:
  name: rhn-haproxy-config
data:
  syslog-server: "address:stdout, format: raw, facility:daemon"
  frontend-config-snippet: |
    http-request set-header Follower-Prefix %[path,field(3,/)] if { path_beg -i /streams/ }
    use_backend skip_stream if { path_beg -i /streams/ }
    use_backend skip_control if { path_beg -i /control/ }
  global-config-snippet: |
    stats socket ipv4@*:9999 level admin expose-fd listeners
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: haproxy-kubernetes-ingress
spec:
  ingressClassName: haproxy
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: rhn-web
                port:
                  number: 3031
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rhn-www
                port:
                  number: 80

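With these rules, requests under `/api` go to the Flask web service (the `/api` prefix is stripped by the `backend-config-snippet` annotation on `rhn-web`, shown further down) and everything else goes to the static front-end. A rough smoke test of the routing once the tunnel from step 6 is open; the specific API paths of the web service are not part of this diff, so only the status lines are checked:

```bash
URL=$(minikube service haproxy-kubernetes-ingress --url | head -n 1)
curl -si "$URL/" | head -n 1      # served by rhn-www (static front-end)
curl -si "$URL/api/" | head -n 1  # forwarded to rhn-web with the /api prefix rewritten away
```
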
Lines changed: 34 additions & 0 deletions (new file: Service and Deployment for the PostgreSQL database)

apiVersion: v1
kind: Service
metadata:
  name: rhn-pg
  labels:
    app.kubernetes.io/name: rhn-pg
spec:
  selector:
    app.kubernetes.io/name: rhn-pg
  ports:
    - port: 5432
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rhn-pg
  labels:
    app.kubernetes.io/name: rhn-pg
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: rhn-pg
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rhn-pg
    spec:
      containers:
        - name: rhn-pg
          image: localhost:5000/db
          ports:
            - containerPort: 5432

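To confirm that the database Service resolves and accepts connections inside the cluster, a throwaway pod can run `pg_isready` against it (a sketch; the `postgres:alpine` image is an assumption and not part of this commit):

```bash
# Exits 0 and prints "accepting connections" when rhn-pg is up.
kubectl run pg-check --rm -it --restart=Never --image=postgres:alpine -- \
  pg_isready -h rhn-pg.default.svc.cluster.local -p 5432
```
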
Lines changed: 88 additions & 0 deletions (new file: headless Service and StatefulSet for the Skip reactive service, including the leader/follower startup script)

apiVersion: v1
kind: Service
metadata:
  name: rhn-skip
  labels:
    app.kubernetes.io/name: rhn-skip
spec:
  ports:
    - port: 8080
      name: streaming
    - port: 8081
      name: control
  clusterIP: None
  selector:
    app.kubernetes.io/name: rhn-skip
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rhn-skip
  labels:
    app.kubernetes.io/name: rhn-skip
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: rhn-skip
  serviceName: rhn-skip
  replicas: 4
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rhn-skip
    spec:
      containers:
        - name: rhn-skip
          image: localhost:5000/reactive-service
          command:
            - bash
            - "-c"
            - |
              set -ex
              # use Kubernetes pod index as ID:
              # - index 0 is leader
              # - index >= 1 is follower, using that number in resource prefix
              [[ $HOSTNAME =~ -([0-9]+)$ ]] || exit 1
              id=${BASH_REMATCH[1]}
              ip=$(hostname -i)

              if [[ $id -eq 0 ]]; then
              export SKIP_LEADER=true
              echo "set server skip_control/leader addr $ip port 8081 ; enable server skip_control/leader " | socat stdio tcp4-connect:haproxy-kubernetes-ingress:9999
              else
              export SKIP_FOLLOWER=true
              export SKIP_RESOURCE_PREFIX=follower$id
              export SKIP_LEADER_HOST=rhn-skip-0.rhn-skip.default.svc.cluster.local
              # Self-register both the control and event streaming server with the haproxy load balancer.
              # Calling 'set server' after 'add server' is redundant on initial scale-up, but necessary for subsequent scale-ups when a server of that name already exists.
              # Enabling HAProxy TCP health checks ensures that servers are taken out of rotation when the system scales down or instances crash/disconnect for other reasons.
              echo "\
              add server skip_control/follower$id $ip:8081 check ;\
              set server skip_control/follower$id addr $ip port 8081 ;\
              enable server skip_control/follower$id ;\
              enable health skip_control/follower$id ;\
              add server skip_stream/follower$id $ip:8080 check ;\
              set server skip_stream/follower$id addr $ip port 8080 ;\
              enable server skip_stream/follower$id ;\
              enable health skip_stream/follower$id\
              " | socat stdio tcp4-connect:haproxy-kubernetes-ingress:9999
              fi
              npm start
          ports:
            - name: streaming
              containerPort: 8080
            - name: control
              containerPort: 8081
          env:
            - name: PG_HOST
              value: "rhn-pg.default.svc.cluster.local"
            - name: PG_PORT
              value: "5432"
          readinessProbe:
            exec:
              command:
                - wget
                - "--spider"
                - http://localhost:8081/v1/healthcheck
            initialDelaySeconds: 1
            periodSeconds: 2

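Since pod index 0 is always the leader and every other pod adds itself to the HAProxy backends on startup (and is health-checked out of rotation when it goes away), elastic scaling of the follower pool amounts to resizing the StatefulSet. A sketch:

```bash
# Grow from the default 4 replicas (1 leader + 3 followers) to 6, then watch
# the new follower pods start and pass their readiness probes.
kubectl scale statefulset rhn-skip --replicas=6
kubectl get pods -l app.kubernetes.io/name=rhn-skip -w
```
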
Lines changed: 41 additions & 0 deletions (new file: Service and Deployment for the Flask web service)

apiVersion: v1
kind: Service
metadata:
  name: rhn-web
  labels:
    app.kubernetes.io/name: rhn-web
  annotations:
    haproxy.org/backend-config-snippet: |
      http-request set-path %[path,regsub(^/api/,/)]
spec:
  ports:
    - port: 3031
      name: api
  selector:
    app.kubernetes.io/name: rhn-web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rhn-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: rhn-web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rhn-web
    spec:
      containers:
        - name: rhn-web
          image: localhost:5000/web-service
          ports:
            - containerPort: 3031
          env:
            - name: SKIP_CONTROL_URL
              value: "http://haproxy-kubernetes-ingress.default.svc.cluster.local/control"
            - name: PG_HOST
              value: "rhn-pg.default.svc.cluster.local"

Lines changed: 33 additions & 0 deletions (new file: Service and Deployment for the React front-end)

apiVersion: v1
kind: Service
metadata:
  name: rhn-www
  labels:
    app.kubernetes.io/name: rhn-www
spec:
  ports:
    - port: 80
      name: http
  selector:
    app.kubernetes.io/name: rhn-www
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rhn-www
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: rhn-www
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rhn-www
    spec:
      containers:
        - name: rhn-www
          image: localhost:5000/www
          ports:
            - containerPort: 80
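
Because `rhn-www` is a `NodePort` Service, the front-end can also be opened directly, bypassing the HAProxy ingress, which is occasionally useful for debugging it in isolation:

```bash
# Print a URL for the front-end's NodePort on the minikube node and probe it.
minikube service rhn-www --url
curl -si "$(minikube service rhn-www --url | head -n 1)" | head -n 1
```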

examples/hackernews/reactive_service/Dockerfile

Lines changed: 2 additions & 0 deletions
(The two added `RUN` lines install `bash` and `socat`, which the leader/follower startup script in the StatefulSet above relies on.)

@@ -1,5 +1,7 @@
 FROM node:lts-alpine3.19
 WORKDIR /app
+RUN apk add --no-cache bash
+RUN apk add --no-cache socat
 COPY package.json package.json
 RUN npm install
 COPY . .

examples/hackernews/reactive_service/package.json

Lines changed: 1 addition & 0 deletions
@@ -16,6 +16,7 @@
     "@skip-adapter/postgres": "0.0.16"
   },
   "devDependencies": {
+    "@types/node": "^22.10.0",
     "@skiplabs/eslint-config": "^0.0.1",
     "@skiplabs/tsconfig": "^0.0.1"
   }

examples/hackernews/reactive_service/server.js

Lines changed: 5 additions & 5 deletions
@@ -4,20 +4,20 @@ import { service } from "./dist/hackernews.service.js";

 if (process.env["SKIP_LEADER"] == "true") {
   console.log("Running as leader...");
-  runService(asLeader(service));
+  await runService(asLeader(service)).catch(console.error);
 } else if (process.env["SKIP_FOLLOWER"] == "true") {
   console.log("Running as follower...");
-  runService(
+  await runService(
     asFollower(service, {
       leader: {
-        host: "skip_leader",
+        host: process.env["SKIP_LEADER_HOST"] || "skip_leader",
         streaming_port: 8080,
         control_port: 8081,
       },
       collections: ["postsWithUpvotes", "sessions"],
     }),
-  );
+  ).catch(console.error);
 } else {
   console.log("Running non-distributed...");
-  runService(service);
+  await runService(service).catch(console.error);
 }
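The three branches above are selected purely through environment variables, so the same entry point serves the single-node, leader, and follower roles. A sketch of exercising them locally, outside Kubernetes (the variable names come from this diff and the StatefulSet manifest above; a reachable PostgreSQL instance and non-conflicting ports are assumed):

```bash
# Non-distributed, single process:
npm start

# Leader in one terminal...
SKIP_LEADER=true npm start

# ...and a follower in another, pointed at it (on a separate host or with
# remapped ports, since the leader already binds 8080/8081):
SKIP_FOLLOWER=true SKIP_LEADER_HOST=localhost SKIP_RESOURCE_PREFIX=follower1 npm start
```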
