
Commit c3d4232

Merge pull request kubernetes#1165 from pigmej/typos_englishify_ug
Typos and englishify user-guide
2 parents a541ae6 + 7112d4c commit c3d4232

24 files changed: +40 −41 lines

docs/user-guide/compute-resources.md

Lines changed: 2 additions & 2 deletions
@@ -122,11 +122,11 @@ runner (Docker or rkt).
 When using Docker:
 
 - The `spec.container[].resources.requests.cpu` is converted to its core value (potentially fractional),
-and multipled by 1024, and used as the value of the [`--cpu-shares`](
+and multiplied by 1024, and used as the value of the [`--cpu-shares`](
 https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
 command.
 - The `spec.container[].resources.limits.cpu` is converted to its millicore value,
-multipled by 100000, and then divided by 1000, and used as the value of the [`--cpu-quota`](
+multiplied by 100000, and then divided by 1000, and used as the value of the [`--cpu-quota`](
 https://docs.docker.com/reference/run/#runtime-constraints-on-resources) flag to the `docker run`
 command. The [`--cpu-period`] flag is set to 100000 which represents the default 100ms period
 for measuring quota usage. The kubelet enforces cpu limits if it was started with the
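
As a worked illustration of the conversion this hunk describes (the numbers are ours, not the document's): a container with `requests.cpu: 250m` and `limits.cpu: 500m` would come out roughly as:

```shell
# 250m request -> 0.25 cores * 1024   = 256    (--cpu-shares)
# 500m limit   -> 500 * 100000 / 1000 = 50000  (--cpu-quota, against the 100000us default period)
docker run --cpu-shares=256 --cpu-quota=50000 --cpu-period=100000 nginx
```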

docs/user-guide/configuring-containers.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ assignees:
 
 ## Configuration in Kubernetes
 
-In addition to the imperative-style commands, such as `kubectl run` and `kubectl expose`, described [elsewhere](/docs/user-guide/quick-start), Kubernetes supports declarative configuration. Often times, configuration files are preferable to imperative commands, since they can be checked into version control and changes to the files can be code reviewed, which is especially important for more complex configurations, producing a more robust, reliable and archival system.
+In addition to the imperative-style commands, such as `kubectl run` and `kubectl expose`, described [elsewhere](/docs/user-guide/quick-start), Kubernetes supports declarative configuration. Oftentimes, configuration files are preferable to imperative commands, since they can be checked into version control and changes to the files can be code reviewed, which is especially important for more complex configurations, producing a more robust, reliable and archival system.
 
 In the declarative style, all configuration is stored in YAML or JSON configuration files using Kubernetes's API resource schemas as the configuration schemas. `kubectl` can create, update, delete, and get API resources. The `apiVersion` (currently 'v1'?), resource `kind`, and resource `name` are used by `kubectl` to construct the appropriate API path to invoke for the specified operation.
 
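A minimal sketch of the declarative workflow this paragraph describes, assuming a file `./pod.yaml` defining a `v1` Pod named `nginx` (the file name and comments are ours):

```shell
kubectl create -f ./pod.yaml   # kubectl POSTs to /api/v1/namespaces/default/pods
kubectl get -f ./pod.yaml      # GETs /api/v1/namespaces/default/pods/nginx
kubectl delete -f ./pod.yaml   # DELETEs the same path
```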

docs/user-guide/deployments.md

Lines changed: 2 additions & 2 deletions
@@ -78,7 +78,7 @@ nginx-deployment-2035384211-qqcnn 1/1 Running 0 18s app
 
 The created Replica Set will ensure that there are three nginx Pods at all times.
 
-**Note:** You must specify appropriate selector and pod template labels of a Deployment (in this case, `app = nginx`), i.e. don't overlap with other controllers (including Deployments, Replica Sets, Replication Controllers, etc.) Kubernetes won't stop you from doing that, and if you end up with multiple controllers that have overlapping selectors, those controllers will fight with each others and won't behave correctly.
+**Note:** You must specify appropriate selector and pod template labels of a Deployment (in this case, `app = nginx`), i.e. don't overlap with other controllers (including Deployments, Replica Sets, Replication Controllers, etc.) Kubernetes won't stop you from doing that, and if you end up with multiple controllers that have overlapping selectors, those controllers will fight with each other and won't behave correctly.
 
 ## The Status of a Deployment
 
@@ -503,7 +503,7 @@ number of Pods are less than the desired number.
 
 Note that you should not create other pods whose labels match this selector, either directly, via another Deployment or via another controller such as Replica Sets or Replication Controllers. Otherwise, the Deployment will think that those pods were created by it. Kubernetes will not stop you from doing this.
 
-If you have multiple controllers that have overlapping selectors, the controllers will fight with each others and won't behave correctly.
+If you have multiple controllers that have overlapping selectors, the controllers will fight with each other and won't behave correctly.
 
 ### Strategy
 
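A quick way to check for selector overlap, using only stock `kubectl` (the label comes from the example above):

```shell
# Every pod matched by the Deployment's selector; anything unexpected here
# belongs to another controller with an overlapping selector.
kubectl get pods -l app=nginx --show-labels
```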

docs/user-guide/federation/federated-services.md

Lines changed: 6 additions & 6 deletions
@@ -7,7 +7,7 @@ assignees:
 
 This guide explains how to use Kubernetes Federated Services to deploy
 a common Service across multiple Kubernetes clusters. This makes it
-easy to achieve cross-cluster service discovery and availibility zone
+easy to achieve cross-cluster service discovery and availability zone
 fault tolerance for your Kubernetes applications.
 
 
@@ -42,7 +42,7 @@ Once created, the Federated Service automatically:
 
 1. creates matching Kubernetes Services in every cluster underlying your Cluster Federation,
 2. monitors the health of those service "shards" (and the clusters in which they reside), and
-3. manages a set of DNS records in a public DNS provder (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients
+3. manages a set of DNS records in a public DNS provider (like Google Cloud DNS, or AWS Route 53), thus ensuring that clients
 of your federated service can seamlessly locate an appropriate healthy service endpoint at all times, even in the event of cluster,
 availability zone or regional outages.
 
@@ -200,7 +200,7 @@ nginx.mynamespace.myfederation.svc.asia-east1-b.example.com. CNAME 180 ngin
 nginx.mynamespace.myfederation.svc.asia-east1-c.example.com. A 180 130.211.56.221
 nginx.mynamespace.myfederation.svc.asia-east1.example.com. A 180 130.211.57.243, 130.211.56.221
 nginx.mynamespace.myfederation.svc.europe-west1.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.example.com.
-nginx.mynamespace.myfederation.svc.europe-west1-d.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.europe-west1.example.com.
+nginx.mynamespace.myfederation.svc.europe-west1-d.example.com. CNAME 180 nginx.mynamespace.myfederation.svc.europe-west1.example.com.
 ... etc.
 ```
 
@@ -224,7 +224,7 @@ due to caching by intermediate DNS servers.
 
 ### Some notes about the above example
 
-1. Notice that there is a normal ('A') record for each service shard that has at least one healthy backend endpoint. For example in us-central1-a, 104.197.247.191 is the external IP address of the service shard in that zone, and in asia-east1-a the address is 130.211.56.221.
+1. Notice that there is a normal ('A') record for each service shard that has at least one healthy backend endpoint. For example, in us-central1-a, 104.197.247.191 is the external IP address of the service shard in that zone, and in asia-east1-a the address is 130.211.56.221.
 2. Similarly, there are regional 'A' records which include all healthy shards in that region. For example, 'us-central1'. These regional records are useful for clients which do not have a particular zone preference, and as a building block for the automated locality and failover mechanism described below.
 2. For zones where there are currently no healthy backend endpoints, a CNAME ('Canonical Name') record is used to alias (automatically redirect) those queries to the next closest healthy zone. In the example, the service shard in us-central1-f currently has no healthy backend endpoints (i.e. Pods), so a CNAME record has been created to automatically redirect queries to other shards in that region (us-central1 in this case).
 3. Similarly, if no healthy shards exist in the enclosing region, the search progresses further afield. In the europe-west1-d availability zone, there are no healthy backends, so queries are redirected to the broader europe-west1 region (which also has no healthy backends), and onward to the global set of healthy addresses (' nginx.mynamespace.myfederation.svc.example.com.')
@@ -295,7 +295,7 @@ availability zones and regions other than the ones local to a Pod by
 specifying the appropriate DNS names explicitly, and not relying on
 automatic DNS expansion. For example,
 "nginx.mynamespace.myfederation.svc.europe-west1.example.com" will
-resolve to all of the currently healthy service shards in europe, even
+resolve to all of the currently healthy service shards in Europe, even
 if the Pod issuing the lookup is located in the U.S., and irrespective
 of whether or not there are healthy shards of the service in the U.S.
 This is useful for remote monitoring and other similar applications.
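
For instance (a sketch reusing the example's domain), a Pod in the U.S. could query the European shards directly:

```shell
# Returns A records for all currently healthy shards in that region,
# regardless of where the lookup is issued from.
dig +short nginx.mynamespace.myfederation.svc.europe-west1.example.com
```
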
@@ -366,7 +366,7 @@ Check that:
 1. Your federation name, DNS provider, DNS domain name are configured correctly. Consult the [federation admin guide](/docs/admin/federation/) or [tutorial](https://github.com/kelseyhightower/kubernetes-cluster-federation) to learn
 how to configure your Cluster Federation system's DNS provider (or have your cluster administrator do this for you).
 2. Confirm that the Cluster Federation's service-controller is successfully connecting to and authenticating against your selected DNS provider (look for `service-controller` errors or successes in the output of `kubectl logs federation-controller-manager --namespace federation`)
-3. Confirm that the Cluster Federation's service-controller is successfully creating DNS records in your DNS provider (or outputting errors in it's logs explaining in more detail what's failing).
+3. Confirm that the Cluster Federation's service-controller is successfully creating DNS records in your DNS provider (or outputting errors in its logs explaining in more detail what's failing).
 
 #### Matching DNS records are created in my DNS provider, but clients are unable to resolve against those names
 Check that:

docs/user-guide/jobs.md

Lines changed: 1 addition & 1 deletion
@@ -167,7 +167,7 @@ parallelism, for a variety or reasons:
 A Container in a Pod may fail for a number of reasons, such as because the process in it exited with
 a non-zero exit code, or the Container was killed for exceeding a memory limit, etc. If this
 happens, and the `.spec.template.containers[].restartPolicy = "OnFailure"`, then the Pod stays
-on the node, but the Container is re-run. Therefore, your program needs to handle the the case when it is
+on the node, but the Container is re-run. Therefore, your program needs to handle the case when it is
 restarted locally, or else specify `.spec.template.containers[].restartPolicy = "Never"`.
 See [pods-states](/docs/user-guide/pod-states) for more information on `restartPolicy`.
 
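What "handle the case when it is restarted locally" can mean in practice, as a hedged sketch (the `/work` volume and `process-item` command are our inventions, not from the docs):

```shell
#!/bin/sh
# Idempotent entrypoint: safe to re-run after an OnFailure restart.
if [ -f /work/done ]; then
  exit 0                          # a previous run already finished
fi
process-item && touch /work/done  # record completion only on success
```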

docs/user-guide/jobs/expansions/index.md

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ job-banana.yaml
 job-cherry.yaml
 ```
 
-Here, we used `sed` to replace the string `$ITEM` with the the loop variable.
+Here, we used `sed` to replace the string `$ITEM` with the loop variable.
 You could use any type of template language (jinja2, erb) or write a program
 to generate the Job objects.
 
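The expansion loop being described looks roughly like this (a sketch; the template file name is assumed):

```shell
# Substitute $ITEM once per list entry, producing one Job manifest each.
for i in apple banana cherry; do
  sed "s/\$ITEM/$i/" job-tmpl.yaml > job-$i.yaml
done
```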

docs/user-guide/jobs/work-queue-1/index.md

Lines changed: 3 additions & 4 deletions
@@ -122,8 +122,7 @@ root@temp-loe07:/#
 ```
 
 In the last command, the `amqp-consume` tool takes one message (`-c 1`)
-from the queue, and passes that message to the standard input of an
-an arbitrary command. In this case, the program `cat` is just printing
+from the queue, and passes that message to the standard input of an arbitrary command. In this case, the program `cat` is just printing
 out what it gets on the standard input, and the echo is just to add a carriage
 return so the example is readable.
 
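Reconstructed, the command this paragraph explains looks something like the following (a sketch; the queue name is assumed, while `-c 1` and `--url` come from the surrounding text):

```shell
# Consume exactly one message and hand it to cat on stdin;
# the trailing echo adds a newline so the output is readable.
amqp-consume --url=$BROKER_URL -q foo -c 1 cat && echo
```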

@@ -169,7 +168,7 @@ example program:
 
 {% include code.html language="python" file="worker.py" ghlink="/docs/user-guide/job/work-queue-1/worker.py" %}
 
-Now, build an an image. If you are working in the source
+Now, build an image. If you are working in the source
 tree, then change directory to `examples/job/work-queue-1`.
 Otherwise, make a temporary directory, change to it,
 download the [Dockerfile](Dockerfile?raw=true),
@@ -275,7 +274,7 @@ not all items will be processed.
 If the number of completions is set to more than the number of items in the queue,
 then the Job will not appear to be completed, even though all items in the queue
 have been processed. It will start additional pods which will block waiting
-for a mesage.
+for a message.
 
 There is an unlikely race with this pattern. If the container is killed in between the time
 that the message is acknowledged by the amqp-consume command and the time that the container

docs/user-guide/jobs/work-queue-2/index.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ Here is an overview of the steps in this example:
 
 For this example, for simplicitly, we will start a single instance of Redis.
 See the [Redis Example](https://github.com/kubernetes/kubernetes/tree/{{page.githubbranch}}/examples/redis/README.md) for an example
-of deploying Redis scaleably and redundantly.
+of deploying Redis scalably and redundantly.
 
 Start a temporary Pod running Redis and a service so we can find it.
 
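One hedged way to do that with plain `kubectl` (not the manifests the example actually ships):

```shell
# A throwaway single-instance Redis, plus a Service so workers can find it by name.
kubectl run redis --image=redis --port=6379
kubectl expose deployment redis --port=6379
```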

docs/user-guide/kubeconfig-file.md

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ So in order to easily switch between multiple clusters, for multiple users, a ku
 
 This file contains a series of authentication mechanisms and cluster connection information associated with nicknames. It also introduces the concept of a tuple of authentication information (user) and cluster connection information called a context that is also associated with a nickname.
 
-Multiple kubeconfig files are allowed, if specified explicitly. At runtime they are loaded and merged together along with override options specified from the command line (see [rules](#loading-and-merging) below).
+Multiple kubeconfig files are allowed, if specified explicitly. At runtime they are loaded and merged along with override options specified from the command line (see [rules](#loading-and-merging) below).
 
 ## Related discussion
 
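For example (paths illustrative), the merge behavior can be exercised directly:

```shell
# A colon-separated KUBECONFIG merges these files for this invocation;
# command-line overrides still take precedence over the merged result.
KUBECONFIG=$HOME/.kube/config:/tmp/other-config kubectl config view
```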

docs/user-guide/kubectl-overview.md

Lines changed: 1 addition & 1 deletion
@@ -266,7 +266,7 @@ $ kubectl exec -ti <pod-name> /bin/bash
 // Return a snapshot of the logs from pod <pod-name>.
 $ kubectl logs <pod-name>
 
-// Start streaming the logs from pod <pod-name>. This is similiar to the 'tail -f' Linux command.
+// Start streaming the logs from pod <pod-name>. This is similar to the 'tail -f' Linux command.
 $ kubectl logs -f <pod-name>
 ```
 
