docs/book/src/developer/core/e2e.md (+6 −6)

@@ -50,10 +50,10 @@ Using the config file it is possible to:
 
 - Define the list of providers to be installed in the management cluster. Most notably,
   for each provider it is possible to define:
-  - One or more versions of the providers manifest (built from the sources, or pulled from a
-    remote location).
-  - A list of additional files to be added to the provider repository, to be used e.g.
-    to provide `cluster-templates.yaml` files.
+  - One or more versions of the providers manifest (built from the sources, or pulled from a
+    remote location).
+  - A list of additional files to be added to the provider repository, to be used e.g.
+    to provide `cluster-templates.yaml` files.
 - Define the list of variables to be used when doing `clusterctl init` or
   `clusterctl generate cluster`.
 - Define a list of intervals to be used in the test specs for defining timeouts for the
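The config file documented in the hunk above is consumed by the E2E suite at startup. Below is a minimal sketch of how a suite typically loads it and reads intervals back, assuming the `clusterctl` helpers from the Cluster API test framework (`LoadE2EConfig`, `GetIntervals`); the `--e2e.config` flag name and the `"quick-start"` spec name are illustrative, and exact helper names can differ between releases.

```go
package e2e

import (
	"context"
	"flag"

	. "github.com/onsi/ginkgo/v2"
	"sigs.k8s.io/cluster-api/test/framework/clusterctl"
)

var (
	// configPath points at the e2e config file described above (flag name is illustrative).
	configPath = flag.String("e2e.config", "", "path to the e2e config file")

	// e2eConfig holds providers, variables and intervals parsed from the config file.
	e2eConfig *clusterctl.E2EConfig
)

var _ = BeforeSuite(func() {
	e2eConfig = clusterctl.LoadE2EConfig(context.TODO(), clusterctl.LoadE2EConfigInput{
		ConfigPath: *configPath,
	})
})

var _ = Describe("Quick start", func() {
	It("Should create a workload cluster", func() {
		// Intervals defined in the config file become the timeouts passed to the
		// framework's wait helpers, e.g. the "wait-cluster" interval for this spec.
		waitForClusterIntervals := e2eConfig.GetIntervals("quick-start", "wait-cluster")
		_ = waitForClusterIntervals // consumed by framework wait helpers in a real spec
	})
})
```
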
@@ -135,7 +135,7 @@ defined in the [Cluster API test framework] to check if the operation completed
 
 ### Naming the test spec
 
-You can categorize the test with a custom label that can be used to filter a category of E2E tests to be run. Currently, the cluster-api codebase has [these labels](./testing.md#running-specific-tests) which are used to run a focused subset of tests.
+You can categorize the test with a custom label that can be used to filter a category of E2E tests to be run. Currently, the cluster-api codebase has [these labels](testing.md#running-specific-tests) which are used to run a focused subset of tests.
 
 ## Tear down
 
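The custom labels mentioned in the hunk above are plain Ginkgo v2 labels attached to the spec's container node; a minimal sketch (the spec text and label name are illustrative):

```go
package e2e

import (
	. "github.com/onsi/ginkgo/v2"
)

// The label attached to the Describe node lets CI select or exclude this category,
// e.g. with `ginkgo --label-filter="PR-Blocking" ...`.
var _ = Describe("When following the Cluster API quick-start", Label("PR-Blocking"), func() {
	It("Should create a workload cluster", func() {
		// ...
	})
})
```
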
@@ -189,7 +189,7 @@ The [test E2E package] provides examples of how this can be achieved by implemen
 test specs for the most common Cluster API use cases.
 
 <!-- links -->
-[Cluster API quick start]: ../user/quick-start.md
+[Cluster API quick start]: ../../user/quick-start.md
 [Cluster API test framework]: https://pkg.go.dev/sigs.k8s.io/cluster-api/test/framework?tab=doc

docs/book/src/developer/core/logging.md (+14 −15)

@@ -16,7 +16,7 @@ In Cluster API we strive to follow three principles while implementing logging:
 
 ## Upstream Alignment
 
-Kubernetes defines a set of [logging conventions](https://git.k8s.io/community/contributors/devel/sig-instrumentation/logging.md),
+Kubernetes defines a set of [logging conventions](https://git.k8s.io/community/contributors/devel/sig-instrumentation/logging.md),
 as well as tools and libraries for logging.
 
 ## Continuous improvement
@@ -28,16 +28,16 @@ The foundational items of Cluster API logging are:
 - Adding a minimal set of key/value pairs in the logger at the beginning of each reconcile loop, so all the subsequent
   log entries will inherit them (see [key value pairs](#keyvalue-pairs)).
 
-Starting from the above foundations, then the long tail of small improvements will consist of following activities:
-
+Starting from the above foundations, then the long tail of small improvements will consist of following activities:
+
 - Improve consistency of additional key/value pairs added by single log entries (see [key value pairs](#keyvalue-pairs)).
 - Improve log messages (see [log messages](#log-messages)).
 - Improve consistency of log levels (see [log levels](#log-levels)).
 
 ## Log Format
 
 Controllers MUST provide support for [structured logging](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1602-structured-logging)
-and for the [JSON output format](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1602-structured-logging#json-output-format);
+and for the [JSON output format](https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/1602-structured-logging#json-output-format);
 quoting the Kubernetes documentation, these are the key elements of this approach:
 
 - Separate a log message from its arguments.
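The structured-logging requirement in this hunk boils down to keeping the message constant and passing everything variable as key/value pairs. A minimal sketch using the logr-based logger from controller runtime; the object, function and key names are illustrative, and the import path shown is cluster-api's `v1beta1` API (adjust to the release you build against):

```go
package controllers

import (
	"context"
	"fmt"

	"k8s.io/klog/v2"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
)

func logScaleUp(ctx context.Context, ms *clusterv1.MachineSet, newReplicas int32) {
	log := ctrl.LoggerFrom(ctx)

	// Structured: constant message, variable data as key/value pairs.
	log.Info("Scaling up MachineSet", "MachineSet", klog.KObj(ms), "newReplicas", newReplicas)

	// Unstructured (what the convention is trying to avoid): data baked into the message,
	// which defeats both querying by key and the JSON output format.
	log.Info(fmt.Sprintf("Scaling up MachineSet %s/%s to %d replicas", ms.Namespace, ms.Name, newReplicas))
}
```
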
@@ -61,7 +61,7 @@ beginning of the chain are then inherited by all the subsequent log entries crea
 Contextual logging is also embedded in controller runtime; In Cluster API we use contextual logging via controller runtime's
 `LoggerFrom(ctx)` and `LoggerInto(ctx, log)` primitives and this ensures that:
 
-- The logger passed to each reconcile call has a unique `reconcileID`, so all the logs being written during a single
+- The logger passed to each reconcile call has a unique `reconcileID`, so all the logs being written during a single
   reconcile call can be easily identified (note: controller runtime also adds other useful key value pairs by default).
 - The logger has a key value pair identifying the objects being reconciled,e.g. a Machine Deployment, so all the logs
   impacting this object can be easily identified.
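A sketch of how the `LoggerFrom(ctx)` / `LoggerInto(ctx, log)` flow described above usually looks at the top of a reconciler; the reconciler type and field names are illustrative, and controller runtime is assumed to have stored the `reconcileID`-carrying logger in the context before `Reconcile` is called:

```go
package controllers

import (
	"context"

	"k8s.io/klog/v2"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

type MachineDeploymentReconciler struct {
	Client client.Client
}

func (r *MachineDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// The logger stored in ctx by controller runtime already carries reconcileID,
	// the controller name, and the name/namespace of the object being reconciled.
	log := ctrl.LoggerFrom(ctx)

	md := &clusterv1.MachineDeployment{}
	if err := r.Client.Get(ctx, req.NamespacedName, md); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Add the hierarchy (the owning Cluster) once, store the enriched logger back into
	// the context, and every function down the call chain inherits the same key/value pairs.
	log = log.WithValues("Cluster", klog.KRef(md.Namespace, md.Spec.ClusterName))
	ctx = ctrl.LoggerInto(ctx, log)

	return r.reconcileNormal(ctx, md)
}

func (r *MachineDeploymentReconciler) reconcileNormal(ctx context.Context, md *clusterv1.MachineDeployment) (ctrl.Result, error) {
	// Inherits the reconcileID and Cluster key/value pairs from ctx.
	ctrl.LoggerFrom(ctx).Info("Reconciling")
	return ctrl.Result{}, nil
}
```
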
@@ -85,18 +85,18 @@ one of the above practices is really important for Cluster API developers
 - Developers MUST use `klog.KObj` or `klog.KRef` functions when logging key value pairs for Kubernetes objects, thus
   ensuring a key value pair representing a Kubernetes object is formatted consistently in all the logs.
 - Developers MUST use consistent log keys:
-  - kinds should be written in upper camel case, e.g. `MachineDeployment`, `MachineSet`
-  - Note: we cannot use lower camel case for kinds consistently because there is no way to
-    automatically calculate the correct log key for provider CRDs like `AWSCluster`
-  - all other keys should use lower camel case, e.g. `resourceVersion`, `oldReplicas` to align to Kubernetes log conventions
+  - kinds should be written in upper camel case, e.g. `MachineDeployment`, `MachineSet`
+  - Note: we cannot use lower camel case for kinds consistently because there is no way to
+    automatically calculate the correct log key for provider CRDs like `AWSCluster`
+  - all other keys should use lower camel case, e.g. `resourceVersion`, `oldReplicas` to align to Kubernetes log conventions
 
 Please note that, in order to ensure logs can be easily searched it is important to ensure consistency for the following
 key value pairs (in order of importance):
 
 - Key value pairs identifying the object being reconciled, e.g. a MachineDeployment.
 - Key value pairs identifying the hierarchy of objects being reconciled, e.g. the Cluster a MachineDeployment belongs
   to.
-- Key value pairs identifying side effects on other objects, e.g. while reconciling a MachineDeployment, the controller
+- Key value pairs identifying side effects on other objects, e.g. while reconciling a MachineDeployment, the controller
   creates a MachineSet.
 - Other Key value pairs.
 
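A sketch of the key conventions above applied to a side effect: kind-named keys in upper camel case with values from `klog.KObj`, all other keys in lower camel case, and the created object getting its own key value pair. The helper and variable names are illustrative:

```go
package controllers

import (
	"context"

	"k8s.io/klog/v2"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func createMachineSet(ctx context.Context, c client.Client, md *clusterv1.MachineDeployment, ms *clusterv1.MachineSet) error {
	// "MachineDeployment" and "MachineSet" are kinds, so the keys are upper camel case
	// and the values come from klog.KObj.
	log := ctrl.LoggerFrom(ctx).WithValues("MachineDeployment", klog.KObj(md))

	if err := c.Create(ctx, ms); err != nil {
		return err
	}

	replicas := int32(0)
	if ms.Spec.Replicas != nil {
		replicas = *ms.Spec.Replicas
	}

	// Side effect on another object: the created MachineSet gets its own key value pair;
	// "replicas" is not a kind, so it stays lower camel case.
	log.Info("Created MachineSet", "MachineSet", klog.KObj(ms), "replicas", replicas)
	return nil
}
```
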
@@ -117,9 +117,9 @@ for log levels; as a small integration on the above guidelines we would like to
 - Logs at the lower levels of verbosity (<=3) are meant to document “what happened” by describing how an object status
   is being changed by controller/reconcilers across subsequent reconciliations; as a rule of thumb, it is reasonable
   to assume that a person reading those logs has a deep knowledge of how the system works, but it should not be required
-  for those persons to have knowledge of the codebase.
+  for those persons to have knowledge of the codebase.
 - Logs at higher levels of verbosity (>=4) are meant to document “how it happened”, providing insight on thorny parts of
-  the code; a person reading those logs usually has deep knowledge of the codebase.
+  the code; a person reading those logs usually has deep knowledge of the codebase.
 - Don’t use verbosity higher than 5.
 
 We are using log level 2 as a default verbosity for all core Cluster API
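A sketch of how those verbosity bands map onto the logr API used in controller-runtime based controllers (`Info` logs at level 0; the messages, keys and parameters are illustrative):

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

func logAtDifferentLevels(ctx context.Context, newReplicas int32, scaleDownDetails string) {
	log := ctrl.LoggerFrom(ctx)

	// <=3: "what happened": object/state changes a user can follow without reading the code.
	log.Info("Scaling up MachineDeployment", "newReplicas", newReplicas)

	// >=4: "how it happened": detail aimed at people who know the codebase.
	log.V(4).Info("Computed scale down candidates", "details", scaleDownDetails)

	// Per the guideline above, never go beyond verbosity 5.
	log.V(5).Info("Sorted machines for deletion priority", "details", scaleDownDetails)
}
```

With the default verbosity of 2 mentioned just above, only the first line is emitted; the `V(4)` and `V(5)` entries show up when the verbosity flag is raised, e.g. with the `v=5` Tilt setting later in this diff.
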
@@ -140,7 +140,7 @@ Our [Tilt](tilt.md) setup offers a batteries-included log suite based on [Promta
 We are working to continuously improving this experience, allowing Cluster API developers to use logs and improve them as part of their development process.
 
 For the best experience exploring the logs using Tilt:
-1. Set `--logging-format=json`.
+1. Set `--logging-format=json`.
 2. Set a high log verbosity, e.g. `v=5`.
 3. Enable Promtail, Loki, and Grafana under `deploy_observability`.
 
@@ -168,7 +168,7 @@ extra_args:
 - "--v=5"
 - "--logging-format=json"
 ```
-The above options can be combined with other settings from our [Tilt](tilt.md) setup. Once Tilt is up and running with these settings users will be able to browse logs using the Grafana Explore UI.
+The above options can be combined with other settings from our [Tilt](tilt.md) setup. Once Tilt is up and running with these settings users will be able to browse logs using the Grafana Explore UI.
 
 This will normally be available on `localhost:3001`. To explore logs from Loki, open the Explore interface for the DataSource 'Loki'. [This link](http://localhost:3001/explore?datasource%22:%22Loki%22) should work as a shortcut with the default Tilt settings.
 
@@ -220,4 +220,3 @@ we encourage providers to adopt and contribute to the guidelines defined in this
 
 It is also worth noting that the foundational elements of the approach described in this document are easy to achieve
 by leveraging default Kubernetes tooling for logging.
-