This repository was archived by the owner on Feb 12, 2021. It is now read-only.

Commit 65f8ff7 (1 parent b79621a)

etcd: configuring etcd-member by hand.

2 files changed: +130 -0
# Setting up etcd v3 on Container Linux "by hand"

The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering how to run the newest etcd on a Container Linux node. The short answer: systemd and rkt!

**Before we begin:** if you are able to use Container Linux Configs or Ignition configs [to provision your Container Linux nodes][easier-setup], you should go that route. Only follow this guide if you *have* to set up etcd the "hard" way.

This tutorial outlines how to set up the newest version of etcd on a Container Linux cluster using the `etcd-member` systemd service. This service spawns a rkt container which houses the etcd process.

We will deploy a simple two-node etcd v3 cluster on two local virtual machines. This tutorial does not cover setting up TLS; however, the principles and commands in the [etcd clustering guide][etcd-clustering] carry over into this workflow.

| Node # | IP              | etcd member name |
| ------ | --------------- | ---------------- |
| 0      | 192.168.100.100 | my-etcd-0        |
| 1      | 192.168.100.101 | my-etcd-1        |

First, run `sudo systemctl edit etcd-member` and paste the following code into the editor:

```ini
[Service]
Environment="ETCD_IMAGE_TAG=v3.1.7"
Environment="ETCD_OPTS=\
  --name=\"my-etcd-0\" \
  --listen-client-urls=\"http://192.168.100.100:2379\" \
  --advertise-client-urls=\"http://192.168.100.100:2379\" \
  --listen-peer-urls=\"http://192.168.100.100:2380\" \
  --initial-advertise-peer-urls=\"http://192.168.100.100:2380\" \
  --initial-cluster=\"my-etcd-0=http://192.168.100.100:2380,my-etcd-1=http://192.168.100.101:2380\" \
  --initial-cluster-token=\"f7b787ea26e0c8d44033de08c2f80632\" \
  --initial-cluster-state=\"new\""
```

Replace:

| Variable                           | Value                                                                                        |
| ---------------------------------- | -------------------------------------------------------------------------------------------- |
| `http://192.168.100.100`           | Your first node's IP address. Found easily by running `ifconfig`.                            |
| `http://192.168.100.101`           | The second node's IP address.                                                                |
| `my-etcd-0`                        | The first node's name (can be whatever you want).                                            |
| `my-etcd-1`                        | The other node's name.                                                                       |
| `f7b787ea26e0c8d44033de08c2f80632` | The discovery token obtained from https://discovery.etcd.io/new?size=2 (generate your own!). |

*If you want a cluster of more than two nodes, make sure to use `size=#`, where `#` is the number of nodes you want; otherwise, the extra nodes will become proxies.*
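You can request a fresh discovery token from the public discovery service mentioned in the table above. As a sketch (this assumes the machine you run it on has internet access and `curl` installed), the last path component of the returned URL is the token string:

```sh
# Ask the public discovery service for a new cluster entry sized for two nodes.
# The response is a URL of the form https://discovery.etcd.io/<token>;
# the trailing <token> is what the override file above uses.
$ curl -s 'https://discovery.etcd.io/new?size=2'
```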

1. Edit the file appropriately and save it, then run `systemctl daemon-reload`.
2. Do the same on the other node, swapping the names and IP addresses appropriately. It should look like this:

```ini
[Service]
Environment="ETCD_IMAGE_TAG=v3.1.7"
Environment="ETCD_OPTS=\
  --name=\"my-etcd-1\" \
  --listen-client-urls=\"http://192.168.100.101:2379\" \
  --advertise-client-urls=\"http://192.168.100.101:2379\" \
  --listen-peer-urls=\"http://192.168.100.101:2380\" \
  --initial-advertise-peer-urls=\"http://192.168.100.101:2380\" \
  --initial-cluster=\"my-etcd-0=http://192.168.100.100:2380,my-etcd-1=http://192.168.100.101:2380\" \
  --initial-cluster-token=\"f7b787ea26e0c8d44033de08c2f80632\" \
  --initial-cluster-state=\"new\""
```

*If at any point you get confused by this configuration file, keep in mind that these arguments are the same as those passed to the etcd binary when starting a cluster. With that in mind, reference the [etcd clustering guide][etcd-clustering] for help and sanity checks.*

## Verification

You can verify that the service has been configured by running `systemctl cat etcd-member`. This will print the service and its override configuration to the screen. You should see your changes on both nodes.

On both nodes, run `systemctl enable etcd-member && systemctl start etcd-member`.

If this command hangs for a very long time, press Ctrl+C to exit and run `journalctl -xef`. If this outputs something like `rafthttp: request cluster ID mismatch (got 7db8ba5f405afa8d want 5030a2a4c52d7b21)`, it means there is existing data on the nodes. Since we are starting completely new nodes, we will wipe away the existing data and restart the service. Run the following on both nodes:

```sh
$ rm -rf /var/lib/etcd
$ systemctl restart etcd-member
```

On your local machine, you should be able to run `etcdctl` commands that talk to this etcd cluster.

```sh
$ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" cluster-health
member fccad8b3e5be5a7 is healthy: got healthy result from http://192.168.100.100:2379
member c337d56ffee02e40 is healthy: got healthy result from http://192.168.100.101:2379
cluster is healthy
$ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" set it-works true
true
$ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" get it-works
true
```

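Note that `cluster-health`, `set`, and `get` are etcdctl's v2 API commands. Since the cluster runs etcd v3, you can also exercise the v3 API by setting `ETCDCTL_API=3`, assuming your local `etcdctl` binary is itself a 3.x release (the key `it-works-v3` here is just an illustrative name):

```sh
# Health check and a key write/read through the v3 API.
$ ETCDCTL_API=3 etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" endpoint health
$ ETCDCTL_API=3 etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" put it-works-v3 true
$ ETCDCTL_API=3 etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" get it-works-v3
```

Keys written through the v2 API are not visible through the v3 API (and vice versa); the two keyspaces are separate.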
There you have it! You have now set up etcd v3 by hand. Pat yourself on the back. Take five.

## Troubleshooting

If, in the process of setting up your etcd cluster, you got it into a non-working state, you have a few options:

1. Reference the [runtime configuration guide][runtime-guide].
2. Reset your environment.

Since etcd is running in a container, the second option is very easy.

Start by stopping the `etcd-member` service (run these commands *on* the Container Linux nodes):

```sh
$ systemctl stop etcd-member
$ systemctl status etcd-member
● etcd-member.service - etcd (System Application Container)
   Loaded: loaded (/usr/lib/systemd/system/etcd-member.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/etcd-member.service.d
           └─override.conf
   Active: inactive (dead)
     Docs: https://github.com/coreos/etcd
```

Next, delete the etcd data (again, run on the Container Linux nodes):

```sh
$ rm -rf /var/lib/etcd2
$ rm -rf /var/lib/etcd
```

*If you configured etcd-member to use a custom data directory, you will need to run a different `rm` command.*

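For illustration, a custom data directory is typically set in the same override file through etcd's `ETCD_DATA_DIR` environment variable (the path below is hypothetical); the cleanup `rm` must then target that path instead of `/var/lib/etcd`:

```ini
# Hypothetical override fragment: point etcd at a non-default data directory.
[Service]
Environment="ETCD_DATA_DIR=/media/etcd-data"
```

In that case, the reset step would be `rm -rf /media/etcd-data` on each node.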
Edit the `etcd-member` override, restart the `etcd-member` service, and start this guide again from the top.

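Condensed into the commands already used in this guide, a full reset on each node looks like this:

```sh
# Stop the service, wipe the default data directory, fix the override,
# reload systemd's view of the unit, and start fresh.
$ systemctl stop etcd-member
$ rm -rf /var/lib/etcd
$ sudo systemctl edit etcd-member
$ systemctl daemon-reload
$ systemctl restart etcd-member
```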
[runtime-guide]: https://coreos.com/etcd/docs/latest/op-guide/runtime-configuration.html
[etcd-clustering]: https://coreos.com/etcd/docs/latest/op-guide/clustering.html
[easier-setup]: getting-started-with-etcd.md

etcd/getting-started-with-etcd.md (+3 lines)

Added after the Container Linux Config example, along with the matching link definition:

If you are unable to provision your machine using Container Linux Configs, check out the [Setting up etcd v3 on Container Linux "by hand"][by-hand] guide.

[by-hand]: getting-started-with-etcd-manually.md
