We know that in Kubernetes, there are generally 3 ways to expose workloads publicly: `NodePort` Service, `LoadBalancer` Service, and `Ingress`.

> `kubectl proxy` and similar dev/debug solutions are not counted here.

`NodePort` Service has been around almost since the birth of Kubernetes. But due to its limited port range (30000~32767), the randomness of the assigned port, and the need to expose (almost) the whole cluster to the public network, `NodePort` Services are usually not considered a good L4 solution for serious production workloads.
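
For reference, a minimal `NodePort` Service manifest looks roughly like the sketch below; the name, label, and port values are made up for illustration:

```yaml
# Exposes port 30080 on every node and forwards traffic to matching pods.
apiVersion: v1
kind: Service
metadata:
  name: my-app              # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app             # hypothetical pod label
  ports:
    - port: 80              # cluster-internal Service port
      targetPort: 8080      # container port
      nodePort: 30080       # must fall within 30000~32767; assigned randomly if omitted
```
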
A viable solution today for L4 apps is the `LoadBalancer` Service. It is implemented differently in different Kubernetes offerings, by connecting a Kubernetes Service object with a real or virtual IaaS LoadBalancer, so that traffic going through the LoadBalancer endpoint can be routed to the destination pods properly.
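
A `LoadBalancer` Service differs mainly in its `type`, and every such Service gets its own external endpoint from the IaaS provider. Again, the name and ports below are illustrative:

```yaml
# Each Service of type LoadBalancer is backed by its own IaaS load balancer.
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-app          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-tcp-app
  ports:
    - port: 5432            # port exposed on the load balancer
      targetPort: 5432      # container port
      protocol: TCP
```
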
However, in reality, L7 (e.g. HTTP) workloads are far more widely used than raw L4 ones. So the community came up with the `Ingress` concept. An `Ingress` object defines how incoming requests should be routed to internal Services, and under the hood an ingress controller (1) watches `Ingress` objects and sets up the mapping rules by leveraging Nginx/Envoy/etc., and (2) is (normally) itself exposed externally via a `LoadBalancer`.
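
To illustrate, an `Ingress` makes its routing decision on L7 data such as host and path; the host, Service name, and other values below are assumptions, written against the networking.k8s.io/v1 schema:

```yaml
# Requests for app.example.com/ are routed to the internal Service "my-app".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress          # hypothetical name
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app     # internal Service to route to
                port:
                  number: 80
```
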
> There is a common misunderstanding that Ingress can also be used to manage L4 workloads. It cannot: Ingress works because it can differentiate requests by their HTTP headers, whereas an L4 packet carries only an IP and a port.
## Motivation
Ingress makes it possible to expose multiple internal L7 services through **one** public endpoint. But it doesn't work for L4 workloads.

From the above picture, you might wonder: where is the missing piece for L4 services? That is exactly the problem we're trying to solve in this project, and the following factors are considered:

- Cost effective
- User friendly
- Reusing existing Kubernetes assets
- Minimum operation effort
- Consistent with the Kubernetes roadmap
## How It Works
We introduce a "SharedLoadBalancer Controller" to customize the default Kubernetes behavior.

Without a "SharedLoadBalancer Controller", it's N Services (of type LoadBalancer) mapped to N LoadBalancer endpoints:

With a "SharedLoadBalancer Controller", it's N SharedLB CR objects mapped to 1 LoadBalancer endpoint (on different ports):

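
To give a feel for the model, a SharedLB custom resource might look something like the sketch below. The API group, version, and field names are purely illustrative assumptions, not this project's actual CRD schema; the point is that each object only claims a port on a LoadBalancer endpoint shared with other objects, instead of requesting a whole LoadBalancer of its own:

```yaml
# Hypothetical SharedLB object: claims one port on a shared LoadBalancer endpoint.
apiVersion: sharedlb.example.com/v1alpha1   # hypothetical API group/version
kind: SharedLB
metadata:
  name: my-tcp-app
spec:
  selector:
    app: my-tcp-app         # pods to route traffic to
  port: 5432                # port claimed on the shared LoadBalancer endpoint
  targetPort: 5432          # container port
```

The controller watching such objects can then allocate ports on one shared LoadBalancer and keep the IaaS forwarding rules in sync, which is what turns N objects into 1 endpoint.
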
## More Info
Want to get more info on this? Join us at KubeCon + CloudNativeCon North America 2018 in Seattle, December 11-13, where we will be giving a [session](https://sched.co/GrUd) on this.