# Platforms Guide

This document outlines the necessary steps to either add or remove supported
platform builds in Kubernetes.

## Adding supported platforms

The default Kubernetes platform is `linux/amd64`. This platform is fully
tested, and the build and release systems initially supported only it. Some
time ago we started an [effort to support multiple architectures][0]. As part
of this effort, we added support in our build and release pipelines for the
architectures `arm`, `arm64`, `ppc64le` and `s390x` on different operating
systems like Linux, Windows and macOS.

[0]: https://github.com/kubernetes/kubernetes/issues/38067

The main focus was to make binaries and container images available for these
architectures and operating systems. Contributors should be able to take these
artifacts and set up CI jobs to adequately test the platforms, and in
particular to run conformance tests on them.

The goal of this document is to provide a starting point for adding new
platforms to Kubernetes from a SIG Architecture and SIG Release perspective.
It does not cover release mechanics or supportability in terms of
functionality.

### Step 1: Building

The container-image-based build infrastructure should support this
architecture. This implicitly requires the following:

- golang should support the platform
- All dependencies, whether vendored or run separately, should support this
  platform

In other words, anyone in the community should be able to use our build infra
to generate all artifacts required to stand up Kubernetes.

More information about how to build Kubernetes can be found in [the build
documentation][1].

[1]: https://github.com/kubernetes/kubernetes/tree/3f7c09e/build#building-kubernetes

### Step 2: Testing

It is not enough for builds to work once: they bit-rot quickly as we vendor in
new changes, update the versions of things we use, and so on. So we need a
good set of tests that exercise a wide battery of jobs on the new
architecture.

Good starting points from a testing perspective are:

- unit tests
- e2e tests
- node e2e tests

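In the k/k repository these suites map onto make targets; a minimal sketch,
assuming a kubernetes/kubernetes checkout (target names and flags may change
between releases):

```shell
# Unit tests, optionally scoped to a package subtree via WHAT:
make test WHAT=./pkg/kubelet/...

# Integration tests (these need supporting binaries such as etcd):
make test-integration

# Node e2e tests, run on a host of the architecture under test:
make test-e2e-node
```

Cluster-level e2e tests are driven separately against a running cluster built
for the target platform.
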
These tests ensure that community members can rely on the architectures on a
consistent basis, and give folks who are making changes a signal when they
break something on a specific architecture.

This implies a set of folks who stand up and maintain both post-submit and
periodic tests, watch them closely, and raise the flag when things break. They
will also have to help debug and fix any platform-specific issues.

Creating custom [testgrid][4] dashboards can help to monitor platform-specific
tests.

[4]: https://testgrid.k8s.io

### Step 3: Releasing

After the first two steps we have a reasonable expectation that there are
people taking care of a supported platform and that it works in a reproducible
environment.

Getting to the next level is a big jump from here. We are talking about real
users who are literally betting their business on the work we are doing. So we
need guarantees around the "can we really ship this!?" question.

Specifically, we are talking about a set of CI jobs in the release-informing
and release-blocking tabs of our testgrid. The Kubernetes release team has a
"CI signal" team that relies on the statuses of these jobs to either ship or
hold a release. Essentially, if things are mostly red with occasional green,
it would be prudent not to make this architecture part of the release at all.
CI jobs get added to release-informing first, and when they reach a point
where they work really well, they get promoted to release-blocking.

The problem here is that once we start shipping something, users will start to
rely on it, whether we like it or not. So it becomes a trust issue with the
team that is taking care of a platform/architecture. Do we really trust this
team, not just for this release but on an ongoing basis? Do they show up
consistently when things break? Do they proactively work with testing/release
on ongoing efforts and try to apply them to their architectures? It is very
easy to set up a CI job as a one-time thing, tick a box, and advocate to get
something added. It is a totally different ball game to be there consistently
over time and show that you mean it. There has to be a consistent body of
people working on this over time (life happens!).

What we are looking for here is a strong green CI signal for release managers
to cut a release, and for folks to be able to report problems and have them
addressed. This includes [conformance testing][2], as use of the Kubernetes
trademark is controlled through a conformance assurance process. So we are
looking for folks here to work with [the conformance subproject][3] in
addition to testing and release.

[2]: https://github.com/cncf/k8s-conformance
[3]: https://bit.ly/sig-architecture-conformance

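For example, the conformance suite can be exercised with community tooling such
as Sonobuoy (one option among several; this sketch assumes a running cluster
reachable through the current kubeconfig):

```shell
# Run the certified-conformance suite against the current cluster:
sonobuoy run --mode=certified-conformance --wait

# Retrieve the results tarball and print the pass/fail summary:
results=$(sonobuoy retrieve)
sonobuoy results "$results"

# Clean up the test namespaces afterwards:
sonobuoy delete --wait
```

The resulting tarball is what the cncf/k8s-conformance submission process
expects.
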
### Step 4: Finishing

If you got this far, you really have made it! You have a clear engagement with
the community, you are working seamlessly with all the relevant SIGs, you have
your content in the Kubernetes release, and end users are adopting your
architecture. Having achieved conformance, you gain conditional use of the
Kubernetes trademark for your offerings.

### Generic rules to consider

- We should keep it easy for contributors to get to Step 1.
- In Step 1, by default things should not build and should be switched off.
- Step 1 should not place undue burden on review or infrastructure (case in
  point: Windows).
- Once Step 2 is done, we could consider switching things on by default (but
  still not in release artifacts).
- Once Step 3 is done, binaries/images for the architecture can ship with the
  release.
- Step 2 is at least the default e2e-gce equivalent, plus the node e2e tests.
  The more, the better.
- Step 2 will involve third-party reporting to testgrid at the least.
- Step 2 may end up needing boskos etc. to run against clouds (with these
  architectures) where we have credits.
- Step 3 is at least the conformance test suite. The more, the better. Using
  community tools like prow/kubeadm is encouraged but not mandated.
- Step 4 is where we take this up to the CNCF trademark program. Stay at least
  a year in Step 3 before going to Step 4.
- If at any stage things bit-rot, we go back to a previous step, giving the
  community an opportunity to step up.

## Deprecating and removing supported platforms

Supported platforms may be considered deprecated for various reasons, for
example if they are being replaced by new ones or are no longer actively used
or maintained. Deprecating an already supported platform has to follow a
couple of steps:

1. The platform deprecation is announced on the k-dev mailing list, with a
   link to a k/k issue for further discussion and consensus.

1. The deprecation becomes active immediately after consensus has been reached
   at a set deadline. This incorporates approval from SIG Release and SIG
   Architecture.

1. Removing the supported platform is done at the beginning of the next minor
   (v1.N+1.0) release cycle, which means:
   - Update the k/k build scripts to exclude the platform from all targets
   - Update the k/sig-release repository to reflect the current set of
     supported platforms

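As a hedged pointer for the build-script update: the platform lists have
historically lived as shell arrays under `hack/lib/` in k/k (the exact file and
variable names may have changed since), so excluding a platform is largely a
matter of deleting its entries there. `linux/ppc64le` below is only an example:

```shell
# From the root of a kubernetes/kubernetes checkout, find where the
# platform is referenced by the build scripts:
grep -rn 'linux/ppc64le' hack/lib/

# The lists to edit look roughly like this (illustrative, not verbatim):
#   KUBE_SERVER_PLATFORMS=(linux/amd64 linux/arm64 linux/ppc64le ...)
#   KUBE_NODE_PLATFORMS=(linux/amd64 linux/arm64 linux/ppc64le ...)
# Deleting the platform's entries excludes it from all build targets.
```
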
Please note that actively supported release branches are not affected by the
removal. This ensures compatibility with existing artifact consumers.