
Commit 6a15c9b

Merge pull request honojs#416 from calvin-puram/fix-typos
fix typos
2 parents a184c20 + 4117b69 commit 6a15c9b

File tree

12 files changed: +67 / -67 lines changed


docs/administrator/backup/working-with-velero.md

Lines changed: 16 additions & 16 deletions
@@ -2,7 +2,7 @@
 title: Integrate Velero to back up and restore Karmada resources
 ---
 
-[Velero](https://github.com/vmware-tanzu/velero) gives you tools to back up and restore
+[Velero](https://github.com/vmware-tanzu/velero) gives you tools to back up and restore
 your Kubernetes cluster resources and persistent volumes. You can run Velero with a public
 cloud platform or on-premises.
 
@@ -48,9 +48,9 @@ Run this command to start `MinIO`:
 ./minio server /data --console-address="0.0.0.0:20001" --address="0.0.0.0:9000"
 ```
 
-Replace `/data` with the path to the drive or directory in which you want `MinIO` to store data. And now we can visit
+Replace `/data` with the path to the drive or directory in which you want `MinIO` to store data. And now we can visit
 `http://{SERVER_EXTERNAL_IP}/20001` in the browser to visit `MinIO` console UI. And `Velero` can use
-`http://{SERVER_EXTERNAL_IP}/9000` to connect `MinIO`. The two configuration will make our follow-up work easier and more convenient.
+`http://{SERVER_EXTERNAL_IP}/9000` to connect `MinIO`. The two configurations will make our follow-up work easier and more convenient.
 
 Please visit `MinIO` console to create region `minio` and bucket `velero`, these will be used by `Velero`.
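As an aside (not part of this commit), the same bucket could also be created from the command line with the MinIO client instead of the console UI; the `mc` binary being available and the alias name below are assumptions:

```shell
# Point an alias at the MinIO endpoint, using the credentials chosen at MinIO startup.
mc alias set karmada-minio http://{SERVER_EXTERNAL_IP}:9000 minio minio123

# Create the "velero" bucket in region "minio", matching what the doc expects.
mc mb --region minio karmada-minio/velero
```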
@@ -64,32 +64,32 @@ Velero consists of two components:
 ```shell
 wget https://github.com/vmware-tanzu/velero/releases/download/v1.7.0/velero-v1.7.0-linux-amd64.tar.gz
 ```
-
+
 2. Extract the tarball:
 ```shell
 tar -zxvf velero-v1.7.0-linux-amd64.tar.gz
 ```
-
+
 3. Move the extracted velero binary to somewhere in your $PATH (/usr/local/bin for most users).
 ```shell
 cp velero-v1.7.0-linux-amd64/velero /usr/local/bin/
 ```
 
 - ### A server that runs on your cluster
 We will use `velero install` to set up server components.
-
+
 For more details about how to use `MinIO` and `Velero` to backup resources, please ref: https://velero.io/docs/v1.7/contributions/minio/
-
+
 1. Create a Velero-specific credentials file (credentials-velero) in your local directory:
 ```shell
 [default]
 aws_access_key_id = minio
 aws_secret_access_key = minio123
 ```
 The two values should keep the same with `MinIO` username and password that we set when we install `MinIO`
-
+
 2. Start the server.
-
+
 We need to install `Velero` in both `member1` and `member2`, so we should run the below command in shell for both two clusters,
 this will start Velero server. Please run `kubectl config use-context member1` and `kubectl config use-context member2`
 to switch to the different member clusters: `member1` or `member2`.
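The actual `velero install` command sits in the unchanged lines between these two hunks and is not shown in the diff. Purely as a hedged sketch (the plugin version and flags below are assumptions, not taken from this commit), an installation pointing at the MinIO endpoint configured above might look like:

```shell
# Run once per member cluster (member1 and member2), after switching the kubectl context.
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.3.0 \
  --bucket velero \
  --secret-file ./credentials-velero \
  --use-volume-snapshots=false \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://{SERVER_EXTERNAL_IP}:9000
```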
@@ -104,13 +104,13 @@ Velero consists of two components:
 ```
 Replace `{SERVER_EXTERNAL_IP}` with your own server external IP.
 
-3. Deploy the nginx application to cluster `member1`:
-
+3. Deploy the nginx application to cluster `member1`:
+
 Run the below command in the Karmada directory.
 ```shell
 kubectl apply -f samples/nginx/deployment.yaml
 ```
-
+
 And then you will find nginx is deployed successfully.
 ```shell
 # kubectl get deployment.apps
@@ -152,7 +152,7 @@ NAME BACKUP STATUS STARTED
 nginx-backup-20211210151807 nginx-backup Completed 2021-12-10 15:18:07 +0800 CST 2021-12-10 15:18:07 +0800 CST 0 0 2021-12-10 15:18:07 +0800 CST <none>
 ```
 
-And then you can find deployment nginx will be restored successfully.
+And then you can find deployment nginx will be restored successfully.
 ```shell
 # kubectl get deployment.apps/nginx
 NAME READY UP-TO-DATE AVAILABLE AGE
@@ -161,9 +161,9 @@ nginx 2/2 2 2 21s
 
 ### Backup and restore of kubernetes resources through Velero combined with karmada
 
-In Karmada control plane, we need to install velero crds but do not need controllers to reconcile them. They are treated as resource templates, not specific resource instances.Based on work API here, they will be encapsulated as a work object deliverd to member clusters and reconciled by velero controllers in member clusters finally.
+In Karmada control plane, we need to install velero crds but do not need controllers to reconcile them. They are treated as resource templates, not specific resource instances. Based on work API here, they will be encapsulated as a work object delivered to member clusters and reconciled by velero controllers in member clusters finally.
 
-Create velero crds in Karmada control plane:
+Create velero crds in Karmada control plane:
 remote velero crd directory: `https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero/crds/`
 
 Create a backup in `karmada-apiserver` and Distributed to `member1` cluster through PropagationPolicy
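The backup manifest and policy referred to here live in the unchanged part of the file and are not shown in this diff. As an illustrative sketch only (the policy name and fields below are assumptions, not taken from the commit), propagating a velero `Backup` to `member1` through the Karmada control plane could look like:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: velero-backup-policy        # assumed name, for illustration only
  namespace: velero                 # velero Backup objects live in the velero namespace
spec:
  resourceSelectors:
    - apiVersion: velero.io/v1
      kind: Backup
      name: nginx-backup
  placement:
    clusterAffinity:
      clusterNames:
        - member1
EOF
```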
@@ -245,7 +245,7 @@ EOF
 
 ```
 
-And then you can find deployment nginx will be restored on member2 successfully.
+And then you can find deployment nginx will be restored on member2 successfully.
 ```shell
 # kubectl get deployment.apps/nginx
 NAME READY UP-TO-DATE AVAILABLE AGE
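Outside the scope of this commit, the member-cluster-level equivalents of the backup and restore performed above can also be driven directly with the velero CLI; the resource names are reused from the nginx example and the namespace is an assumption:

```shell
# Create a backup of the default namespace on the current member cluster.
velero backup create nginx-backup --include-namespaces default

# Restore that backup on another cluster that shares the same object storage.
velero restore create --from-backup nginx-backup
```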

docs/administrator/migration/migration-from-kubefed.md

Lines changed: 14 additions & 14 deletions
@@ -2,26 +2,26 @@
 title: Migration From Kubefed
 ---
 
-Karmada is developed in continuation of Kubernetes [Federation v1](https://github.com/kubernetes-retired/federation)
+Karmada is developed in continuation of Kubernetes [Federation v1](https://github.com/kubernetes-retired/federation)
 and [Federation v2(aka Kubefed)](https://github.com/kubernetes-sigs/kubefed). Karmada inherited a lot of concepts
 from these two versions. For example:
 
-- **Resource template**: Karmada uses Kubernetes Native API definition for federated resource template,
+- **Resource template**: Karmada uses Kubernetes Native API definition for federated resource template,
 to make it easy to integrate with existing tools that already adopt Kubernetes.
-- **Propagation Policy**: Karmada offers a standalone Propagation(placement) Policy API to define multi-cluster
+- **Propagation Policy**: Karmada offers a standalone Propagation(placement) Policy API to define multi-cluster
 scheduling and spreading requirements.
 - **Override Policy**: Karmada provides a standalone Override Policy API for specializing cluster relevant
-configuration automation.
+configuration automation.
 
 Most of the features in Kubefed have been reformed in Karmada, so Karmada would be the natural successor.
 
 Generally speaking, migrating from Kubefed to Karmada would be pretty easy.
-This document outlines the basic migrate path for Kubefed users.
+This document outlines the basic migration path for Kubefed users.
 **Note:** This document is a work in progress, any feedback would be welcome.
 
 ## Cluster Registration
 
-Kubefed provides `join` and `unjoin` commands in `kubefedctl` command line tool, Karmada also implemented the
+Kubefed provides `join` and `unjoin` commands in `kubefedctl` command line tool, Karmada also implemented the
 two commands in `karmadactl`.
 
 Refer to [Kubefed Cluster Registration](https://github.com/kubernetes-sigs/kubefed/blob/master/docs/cluster-registration.md),
@@ -38,7 +38,7 @@ kubefedctl join cluster1 --cluster-context cluster1 --host-cluster-context clust
 
 Now with Karmada, you can use `karmadactl` tool to do the same thing:
 ```
-karmadactl join cluster1 --cluster-context cluster1 --karmada-context karmada
+karmadactl join cluster1 --cluster-context cluster1 --karmada-context karmada
 ```
 
 The behavior behind the `join` command is similar between Kubefed and Karmada. For Kubefed, it will create a
@@ -68,7 +68,7 @@ member1 v1.20.7 Push True 66s
 ```
 
 Kubefed manages clusters with `Push` mode, however Karmada supports both `Push` and `Pull` modes.
-Refer to [Overview of cluster mode](https://karmada.io/docs/userguide/clustermanager/cluster-registration) for
+Refer to [Overview of cluster mode](https://karmada.io/docs/userguide/clustermanager/cluster-registration) for
 more details.
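Not part of this commit, but as a quick sanity check after `karmadactl join`: the joined member should appear as a Cluster object in the Karmada control plane, producing output like the `member1 ... Push True` line quoted in the hunk above. The kubeconfig path and context name below are assumptions based on a local setup:

```shell
# List registered member clusters from the Karmada control plane.
kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get clusters
```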
 
 ### Unjoining clusters
@@ -82,7 +82,7 @@ kubefedctl unjoin cluster2 --cluster-context cluster2 --host-cluster-context clu
 Now with Karmada, you can use `karmadactl` tool to do the same thing:
 
 ```
-karmadactl unjoin cluster2 --cluster-context cluster2 --karmada-context karmada
+karmadactl unjoin cluster2 --cluster-context cluster2 --karmada-context karmada
 ```
 
 The behavior behind the `unjoin` command is similar between Kubefed and Karmada, they both remove the cluster
@@ -178,8 +178,8 @@ spec:
 - cluster2
 ```
 
-The `PropagationPolicy` defines the rules of which resources(`resourceSelectors`) should be propagated to
-where (`placement`).
+The `PropagationPolicy` defines the rules of which resources(`resourceSelectors`) should be propagated to
+where (`placement`).
 See [Resource Propagating](https://karmada.io/docs/userguide/scheduling/resource-propagating) for more details.
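As an illustrative aside (not from this commit, object and cluster names assumed), once a `PropagationPolicy` has matched a template you can see where Karmada scheduled it by listing the derived binding and work objects in the control plane:

```shell
# The ResourceBinding records the scheduling result for the matched template.
kubectl get resourcebindings.work.karmada.io

# One Work object per target cluster is created in that cluster's execution namespace.
kubectl get works.work.karmada.io -n karmada-es-cluster1
```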
 
 For the `override` part, Karmada provides `OverridePolicy` API to hold the rules for differentiation:
@@ -215,7 +215,7 @@ spec:
 value: 1.17.0-alpine
 ```
 
-The `OverridePolicy` defines the rules of which resources(`resourceSelectors`) should be overwritten when
+The `OverridePolicy` defines the rules of which resources(`resourceSelectors`) should be overwritten when
 propagating to where(`targetCluster`).
 
 In addition to Kubefed, Karmada offers various alternatives to declare the override rules, see
@@ -225,9 +225,9 @@ In addition to Kubefed, Karmada offers various alternatives to declare the overr
 
 ### Will Karmada provide tools to smooth the migration?
 
-We don't have the plan yet, as we reached some Kubefed users and found that they're usually not using vanilla
+We don't have the plan yet, as we reached some Kubefed users and found that they're usually not using vanilla
 Kubefed but the forked version, they extended Kubefed a lot to meet their requirements. So, it might be pretty
 hard to maintain a common tool to satisfy most users.
 
-We are also looking forward more feedback now, please feel free to reach us, and we are glad to support you
+We are also looking forward to more feedback now, please feel free to reach us, and we are glad to support you
 finish the migration.

docs/administrator/monitoring/working-with-filebeat.md

Lines changed: 7 additions & 7 deletions
@@ -2,21 +2,21 @@
 title: Use Filebeat to collect logs of Karmada member clusters
 ---
 
-[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to [Elasticsearch](https://www.elastic.co/products/elasticsearch) or [kafka](https://github.com/apache/kafka) for indexing.
+[Filebeat](https://github.com/elastic/beats/tree/master/filebeat) is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to [Elasticsearch](https://www.elastic.co/products/elasticsearch) or [kafka](https://github.com/apache/kafka) for indexing.
 
-This document demonstrates how to use the `Filebeat` to collect logs of Karmada member clusters.
+This document demonstrates how to use the `Filebeat` to collect logs of Karmada member clusters.
 
 ## Start up Karmada clusters
 
-You just need to clone Karmada repo, and run the following script in Karmada directory.
+You just need to clone Karmada repo, and run the following script in Karmada directory.
 
 ```bash
 hack/local-up-karmada.sh
 ```
 
 ## Start Filebeat
 
-1. Create resource objects of Filebeat, the content is as follows. You can specify a list of inputs in the `filebeat.inputs` section of the `filebeat.yml`. Inputs specify how Filebeat locates and processes input data, also you can configure Filebeat to write to a specific output by setting options in the `Outputs` section of the `filebeat.yml` config file. The example will collect the log information of each container and write the collected logs to a file. More detailed information about the input and output configuration, please refer to: https://github.com/elastic/beats/tree/master/filebeat/docs
+1. Create resource objects of Filebeat, the content is as follows. You can specify a list of inputs in the `filebeat.inputs` section of the `filebeat.yml`. Inputs specify how Filebeat locates and processes input data, also you can configure Filebeat to write to a specific output by setting options in the `Outputs` section of the `filebeat.yml` config file. The example will collect the log information of each container and write the collected logs to a file. For more detailed information about the input and output configuration, please refer to: https://github.com/elastic/beats/tree/master/filebeat/docs
 
 ```yaml
 apiVersion: v1
@@ -89,11 +89,11 @@ hack/local-up-karmada.sh
 # type: container
 # paths:
 # - /var/log/containers/*${data.kubernetes.container.id}.log
-
+
 processors:
 - add_cloud_metadata:
 - add_host_metadata:
-
+
 #output.elasticsearch:
 # hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
 # username: ${ELASTICSEARCH_USERNAME}
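For orientation only (this commit merely trims whitespace here), a minimal `filebeat.yml` that matches the behaviour described above — read container logs, add metadata, write events to a local file — might look like the following sketch; the output path and filename are assumptions:

```shell
# Write an illustrative minimal filebeat.yml to the current directory.
cat > filebeat.yml <<EOF
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log

processors:
- add_cloud_metadata:
- add_host_metadata:

# Send collected events to a local file instead of Elasticsearch.
output.file:
  path: "/tmp/filebeat"
  filename: filebeat
EOF
```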
@@ -179,7 +179,7 @@ hack/local-up-karmada.sh
 type: DirectoryOrCreate
 ```
 
-2. Run the below command to execute Karmada PropagationPolicy and ClusterPropagationPolicy.
+2. Run the below command to execute Karmada PropagationPolicy and ClusterPropagationPolicy.
 
 ```
 cat <<EOF | kubectl apply -f -

docs/contributor/cherry-picks.md

Lines changed: 1 addition & 1 deletion
@@ -80,7 +80,7 @@ patch release branches.
 ```
 
 - Be aware the cherry pick script assumes you have a git remote called
-`upstream` that points at the Karmada github org.
+`upstream` that points to the Karmada github org.
 
 - You will need to run the cherry pick script separately for each patch
 release you want to cherry pick to. Cherry picks should be applied to all
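For context only (not part of this change), setting up the `upstream` remote that the cherry pick script expects could be done like this; the repository URL is an assumption:

```shell
# Add a remote named "upstream" pointing at the Karmada org, then fetch it.
git remote add upstream https://github.com/karmada-io/karmada.git
git fetch upstream
```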

docs/contributor/contribute-docs.md

Lines changed: 6 additions & 6 deletions
@@ -13,7 +13,7 @@ the `karmada-io/website` repository.
 - Docs need to be translated into multiple languages for readers from different regions.
 The community now supports both Chinese and English.
 English is the official language of documentation.
-- For our docs we use markdown. If you are unfamiliar with Markdown, please see https://guides.github.com/features/mastering-markdown/ or https://www.markdownguide.org/ if you are looking for something more substantial.
+- For our docs we use markdown. If you are unfamiliar with Markdown, please see https://guides.github.com/features/mastering-markdown/ or https://www.markdownguide.org/ if you are looking for something more substantial.
 - We get some additions through [Docusaurus 2](https://docusaurus.io/), a model static website generator.
 
 ## Setup
@@ -86,7 +86,7 @@ title: A doc with tags
 ```
 
 The top section between two lines of --- is the Front Matter section. Here we define a couple of entries which tell Docusaurus how to handle the article:
-* Title is the equivalent of the `<h1>` in a HTML document or `# <title>` in a Markdown article.
+* Title is the equivalent of the `<h1>` in an HTML document or `# <title>` in a Markdown article.
 * Each document has a unique ID. By default, a document ID is the name of the document (without the extension) related to the root docs directory.
 
 ### Linking to other docs
@@ -102,9 +102,9 @@ You can easily route to other places by adding any of the following links:
 Now we store public pictures about Karmada in `/docs/resources/general`. You can use the following to link the pictures:
 * `![Git workflow](../resources/contributor/git_workflow.png)`
 
-### Directory organization
+### Directory organization
 
-Docusaurus 2 uses a sidebar to manage documents.
+Docusaurus 2 uses a sidebar to manage documents.
 
 Creating a sidebar is useful to:
 * Group multiple related documents
@@ -157,14 +157,14 @@ If you add a document, you must add it to `sidebars.js` to make it display prope
 ### About Chinese docs
 
 If you want to contribute to our Chinese documentation, you can:
-* Translate our existing English docs to Chinese. In this case, you need to modify the corresponding file content from <https://github.com/karmada-io/website/tree/main/i18n/zh/docusaurus-plugin-content-docs/current>.
+* Translate our existing English docs to Chinese. In this case, you need to modify the corresponding file content from <https://github.com/karmada-io/website/tree/main/i18n/zh/docusaurus-plugin-content-docs/current>.
 The organization of this directory is exactly the same as the outer layer. `current.json` holds translations for the documentation directory. You can edit it if you want to translate the name of directory.
 * Submit Chinese docs without the English version. No limits on the topic or category. In this case, you can add an empty article and its title to the main directory first, and complete the rest later.
 Then add the corresponding Chinese content to the Chinese directory.
 
 ## Debugging docs
 
-Now you have already completed docs. After you start a PR to `karmada.io/website`, if you have passed CI, you can get a preview of your document on the website.
+Now you have already completed the docs. After you start a PR to `karmada.io/website`, if you have passed CI, you can get a preview of your document on the website.
 
 Click **Details** marked in red, and you will enter the preview view of the website.
 

docs/contributor/count-contributions.md

Lines changed: 4 additions & 4 deletions
@@ -4,7 +4,7 @@ title: Correct your information for better contribution
 
 After contributing to [karmada-io](https://github.com/karmada-io) through issues, comments, pull requests, etc., you can check your contributions [here](https://karmada.devstats.cncf.io/d/66/developer-activity-counts-by-companies).
 
-If you notice that the information in the company column is either incorrect or blank, we highly recommend that you correct it.
+If you notice that the information in the company column is either incorrect or blank, we highly recommend that you correct it.
 
 For instance, `Huawei Technologies Co. Ltd`should be used instead of `HUAWEI`:
 ![Wrong Information](../resources/contributor/contributions_list.png)
@@ -14,11 +14,11 @@ Here are the steps to fix this issue.
 ## Verify your organization in the CNCF system
 To begin, visit your profile [page](https://openprofile.dev/edit/profile) and ensure that your organization is accurate.
 ![organization-check](../resources/contributor/organization_check.png)
-* If the organization incorrect, please select the right one.
-* If your organization is not in the list, clieck on **Add** to add your organization.
+* If the organization is incorrect, please select the right one.
+* If your organization is not on the list, click on **Add** to add your organization.
 
 ## Update the CNCF repository used for calculating your contributions
-Once you have verified your organization in the CNCF system, you must create a pull request in gitdm with the updated affiliations.
+Once you have verified your organization in the CNCF system, you must create a pull request in gitdm with the updated affiliations.
 To do this, you'll need to modify two files: `company_developers*.txt` and `developers_affiliations*.txt`. For reference, please see this example pull request: [PR Example](https://github.com/cncf/gitdm/pull/1257).
 
 After the pull request has been successfully merged, it may take up to four weeks for the changes to be synced.

0 commit comments
