
Commit eee8dcc

etcd: feedback on etcd-member doc.

1 parent 65f8ff7

2 files changed: +37 −28 lines changed

etcd/getting-started-with-etcd-manually.md: +36 −27

@@ -1,10 +1,12 @@
-# Setting up etcd v3 on Container Linux "by hand"
+# Manual Configuration of etcd3 on Container Linux
 
-The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering, how do I run the newest etcd on my Container Linux node? The short answer: systemd and rkt!
+The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering: how do I run the newest etcd on my Container Linux node? The short answer: systemd and rkt.
 
-**Before we begin** if you are able to use Container Linux Configs or ignition configs [to provision your Container Linux nodes][easier-setup], you should go that route. Only follow this guide if you *have* to setup etcd the 'hard' way.
+**Before we begin:** if you are able to use Container Linux Configs [to provision your Container Linux nodes][easier-setup], you should go that route. Use this guide only if you must set up etcd the *hard* way.
 
-This tutorial outlines how to setup the newest version of etcd on a Container Linux cluster using the `etcd-member` systemd service. This service spawns a rkt container which houses the etcd process.
+This tutorial outlines how to set up the newest version of etcd on a Container Linux cluster using the `etcd-member` systemd service. This service spawns a rkt container which houses the etcd process.
+
+It is expected that you have some familiarity with etcd operations before entering this guide and have at least skimmed the [etcd clustering guide][etcd-clustering] first.
 
 We will deploy a simple 2 node etcd v3 cluster on two local Virtual Machines. This tutorial does not cover setting up TLS, however principles and commands in the [etcd clustering guide][etcd-clustering] carry over into this workflow.
 
@@ -13,7 +15,9 @@ We will deploy a simple 2 node etcd v3 cluster on two local Virtual Machines. Th
 | 0 | 192.168.100.100 | my-etcd-0 |
 | 1 | 192.168.100.101 | my-etcd-1 |
 
-First, run `sudo systemctl edit etcd-member` and paste the following code into the editor:
+These IP addresses are visible from within your two machines as well as on the host machine. Once the VMs are set up, you should be able to run `ping 192.168.100.100` and `ping 192.168.100.101` successfully.
+
+SSH into your first node, run `systemctl edit etcd-member`, and paste the following code into the editor:
 
 ```ini
 [Service]
@@ -29,6 +33,8 @@ Environment="ETCD_OPTS=\
 --initial-cluster-state=\"new\""
 ```
 
+This will create a systemd unit *override* and open the new, empty file in `vi`. Paste the code above into the editor and type `:wq` to save it.
+
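
The override itself lands in a drop-in directory rather than in the unit file. To confirm what `systemctl edit` wrote, you can print the drop-in back out; the path below assumes systemd's default drop-in location and file name for this unit (the directory also appears later in this guide's `systemctl status` output):

```sh
# Print the override created by `systemctl edit etcd-member`.
# Path assumes systemd's default drop-in location and file name.
cat /etc/systemd/system/etcd-member.service.d/override.conf
```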
 Replace:
 
 | Variable | value |
@@ -39,10 +45,12 @@ Replace:
 | `my-etcd-1` | The other node's name. |
 | `f7b787ea26e0c8d44033de08c2f80632` | The discovery token obtained from https://discovery.etcd.io/new?size=2 (generate your own!). |
 
-*If you want a cluster of more than 2 nodes, make sure `size=#` where # is the number of nodes you want. Otherwise the extra ndoes will become proxies.*
+> To create a cluster of more than 2 nodes, set `size=#`, where `#` is the number of nodes you wish to create. Otherwise, any extra nodes will become proxies.
 
-1. Edit the file appropriately and save it. Run `systemctl daemon-reload`.
-2. Do the same on the other node, swapping the names and ip-addresses appropriately. It should look like this:
+1. Edit the service override.
+2. Save the changes.
+3. Run `systemctl daemon-reload`.
+4. Do the same on the other node, swapping the names and IP addresses appropriately. It should look something like this:
 
 
 ```ini
@@ -59,15 +67,17 @@ Environment="ETCD_OPTS=\
 --initial-cluster-state=\"new\""
 ```
 
-*If at any point you get confused about this configuration file, keep in mind that these arguments are the same as those passed to the etcd binary when starting a cluster. With that in mind, reference the [etcd clustering guide][etcd-clustering] for help and sanity-checks.*
+Note that the arguments used in this configuration file are the same as those passed to the etcd binary when starting a cluster. For help and sanity checks, see the [etcd clustering guide][etcd-clustering].
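
For orientation, a fully assembled override for `my-etcd-0` might look like the sketch below. Only the fragments shown in this diff (`[Service]`, the `ETCD_OPTS` wrapper, the node names, IPs, discovery token, and `--initial-cluster-state`) are confirmed; the flags in between are assumptions based on standard etcd v3 options, not the literal file from this commit:

```ini
# Hypothetical complete override.conf for node my-etcd-0.
# The middle flags are assumed standard etcd v3 options;
# only the fragments shown in the diff are confirmed.
[Service]
Environment="ETCD_OPTS=\
  --name=\"my-etcd-0\" \
  --listen-client-urls=\"http://192.168.100.100:2379\" \
  --advertise-client-urls=\"http://192.168.100.100:2379\" \
  --listen-peer-urls=\"http://192.168.100.100:2380\" \
  --initial-advertise-peer-urls=\"http://192.168.100.100:2380\" \
  --discovery=\"https://discovery.etcd.io/f7b787ea26e0c8d44033de08c2f80632\" \
  --initial-cluster-state=\"new\""
```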
 
 ## Verification
 
-You can verify that the services have been configured by running `systemctl cat etcd-member`. This will print the service and it's override conf to the screen. You should see your changes on both nodes.
+1. To verify that the services have been configured, run `systemctl cat etcd-member` on the manually configured nodes. This will print the service and its override conf to the screen. You should see the overrides on both nodes.
 
-On both nodes run `systemctl enable etcd-member && systemctl start etcd-member`.
+2. To enable the service on boot, run `systemctl enable etcd-member` on all nodes.
 
-If this command hangs for a very long time, <Ctrl>+c to exit out and run `journalctl -xef`. If this outputs something like `rafthttp: request cluster ID mismatch (got 7db8ba5f405afa8d want 5030a2a4c52d7b21)` this means there is existing data on the nodes. Since we are starting completely new nodes we will wipe away the existing data and re-start the service. Run the following on both nodes:
+3. To start the service, run `systemctl start etcd-member`. This command may take a while to complete because it is downloading a rkt container and setting up etcd.
+
+If the last command hangs for a very long time (10+ minutes), press <Ctrl>+c to exit the command and run `journalctl -xef`. If this outputs something like `rafthttp: request cluster ID mismatch (got 7db8ba5f405afa8d want 5030a2a4c52d7b21)`, this means there is existing data on the nodes. Since we are starting completely new nodes, we will wipe away the existing data and restart the service. Run the following on both nodes:
 
 ```sh
 $ rm -rf /var/lib/etcd
@@ -87,22 +97,24 @@ $ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379"
 true
 ```
 
-There you have it! You have now setup etcd v3 by hand. Pat yourself on the back. Take five.
+There you have it! You have now set up etcd v3 by hand. Pat yourself on the back. Take five.
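
If you would like to exercise the cluster beyond the health check above, a quick read/write round-trip works too. This is a sketch assuming the v3 API is selected via `ETCDCTL_API=3`; `mykey` and `hello` are placeholder values:

```sh
# Write a key to the cluster, then read it back (etcd v3 API).
export ETCDCTL_API=3
etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" put mykey "hello"
etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" get mykey
```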
 
 ## Troubleshooting
 
-In the process of setting up your etcd cluster you got it into a non-working state, you have a few options:
+If, in the process of setting up your etcd cluster, you got it into a non-working state, you have a few options:
 
-1. Reference the [runtime configuration guide][runtime-guide].
-2. Reset your environment.
+* Reference the [runtime configuration guide][runtime-guide].
+* Reset your environment.
 
 Since etcd is running in a container, the second option is very easy.
 
-Start by stopping the `etcd-member` service (run these commands *on* the Container Linux nodes).
+Run the following commands on the Container Linux nodes:
+
+1. `systemctl stop etcd-member` to stop the service.
+2. `systemctl status etcd-member` to verify the service has exited. The output should look like:
 
 ```sh
-$ systemctl stop etcd-member
-$ systemctl status etcd-member
 ● etcd-member.service - etcd (System Application Container)
    Loaded: loaded (/usr/lib/systemd/system/etcd-member.service; disabled; vendor preset: disabled)
   Drop-In: /etc/systemd/system/etcd-member.service.d
@@ -111,16 +123,13 @@ $ systemctl status etcd-member
    Docs: https://github.com/coreos/etcd
 ```
 
-Next, delete the etcd data (again, run on the Container Linux nodes):
-
-```sh
-$ rm /var/lib/etcd2
-$ rm /var/lib/etcd
-```
+3. `rm /var/lib/etcd2` to remove the etcd v2 data.
+4. `rm /var/lib/etcd` to remove the etcd v3 data.
 
-*If you set the etcd-member to have a custom data directory, you will need to run a different `rm` command.*
+> If you set a custom data directory for the etcd-member service, you will need to run a modified `rm` command.
 
-Edit the etcd-member service, restart the `etcd-member` service, and basically start this guide again from the top.
+5. Edit the etcd-member service with `systemctl edit etcd-member`.
+6. Restart the etcd-member service with `systemctl start etcd-member`.
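
If the override itself is fine and only the data needs wiping, the reset condenses to a few commands. A sketch, assuming the default data directories used throughout this guide (note the `-rf`, since these paths are directories):

```sh
# Stop the service, wipe the v2 and v3 data, then start fresh.
systemctl stop etcd-member
rm -rf /var/lib/etcd2 /var/lib/etcd
systemctl start etcd-member
```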
 
 [runtime-guide]: https://coreos.com/etcd/docs/latest/op-guide/runtime-configuration.html
 [etcd-clustering]: https://coreos.com/etcd/docs/latest/op-guide/clustering.html

etcd/getting-started-with-etcd.md: +1 −1

@@ -30,7 +30,7 @@ etcd:
   initial_cluster_state: new
 ```
 
-If you are unable to provision your machine using Container Linux configs, check out the [Setting up etcd v3 on Container Linux "by hand"][by-hand]
+If you are unable to provision your machine using Container Linux configs, refer to [Setting up etcd v3 on Container Linux "by hand"][by-hand].
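
For comparison with the manual route, a Container Linux Config `etcd` section for `my-etcd-0` might look like the sketch below. The key names follow the Config Transpiler's snake_case convention for etcd flags; only the `initial_cluster_state: new` line shown above is confirmed by this diff, so treat the rest as assumptions:

```yaml
# Hypothetical CLC etcd section for node my-etcd-0; only
# initial_cluster_state is confirmed by the diff above.
etcd:
  name: my-etcd-0
  listen_client_urls: http://192.168.100.100:2379
  advertise_client_urls: http://192.168.100.100:2379
  listen_peer_urls: http://192.168.100.100:2380
  initial_advertise_peer_urls: http://192.168.100.100:2380
  discovery: https://discovery.etcd.io/f7b787ea26e0c8d44033de08c2f80632
  initial_cluster_state: new
```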

 ## Reading and writing to etcd