Commit ffc11c7

Merge pull request #3911 from clumens/docs
Update documentation
2 parents 2eb045c + 4fa2b8b commit ffc11c7

39 files changed (+1847, -1831 lines)

doc/sphinx/Clusters_from_Scratch/active-active.rst

Lines changed: 1 addition & 1 deletion
@@ -258,7 +258,7 @@ being fenced every time quorum is lost.
 
 To address this situation, set ``no-quorum-policy`` to ``freeze`` when GFS2 is
 in use. This means that when quorum is lost, the remaining partition will do
-nothing until quorum is regained.
+nothing until quorum is regained.
 
 .. code-block:: console
 

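The hunk above is the guide's instruction to relax the quorum policy for GFS2. For reference, a minimal sketch of how that property is set with ``pcs`` (the truncated console block that follows this paragraph in the guide is where the command appears; exact output is omitted here):

    [root@pcmk-1 ~]# pcs property set no-quorum-policy=freeze
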
doc/sphinx/Clusters_from_Scratch/ap-configuration.rst

Lines changed: 8 additions & 8 deletions
@@ -85,7 +85,7 @@ Final Cluster Configuration
 pcmk-1 pcmk-2
 Pacemaker Nodes:
 pcmk-1 pcmk-2
-
+
 Resources:
 Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
 Attributes: cidr_netmask=24 ip=192.168.122.120
@@ -121,13 +121,13 @@ Final Cluster Configuration
 Operations: monitor interval=20s timeout=40s (WebFS-monitor-interval-20s)
 start interval=0s timeout=60s (WebFS-start-interval-0s)
 stop interval=0s timeout=60s (WebFS-stop-interval-0s)
-
+
 Stonith Devices:
 Resource: fence_dev (class=stonith type=some_fence_agent)
 Attributes: pcmk_delay_base=pcmk-1:5s;pcmk-2:0s pcmk_host_map=pcmk-1:almalinux9-1;pcmk-2:almalinux9-2
 Operations: monitor interval=60s (fence_dev-monitor-interval-60s)
 Fencing Levels:
-
+
 Location Constraints:
 Resource: WebSite
 Enabled on:
@@ -143,17 +143,17 @@ Final Cluster Configuration
 WebSite with WebFS-clone (score:INFINITY) (id:colocation-WebSite-WebFS-INFINITY)
 WebFS-clone with dlm-clone (score:INFINITY) (id:colocation-WebFS-dlm-clone-INFINITY)
 Ticket Constraints:
-
+
 Alerts:
 No alerts defined
-
+
 Resources Defaults:
 Meta Attrs: build-resource-defaults
 resource-stickiness=100
 Operations Defaults:
 Meta Attrs: op_defaults-meta_attributes
 timeout=240s
-
+
 Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: mycluster
@@ -162,10 +162,10 @@ Final Cluster Configuration
 last-lrm-refresh: 1658896047
 no-quorum-policy: freeze
 stonith-enabled: true
-
+
 Tags:
 No tags defined
-
+
 Quorum:
 Options:
 

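The ``fence_dev`` stonith resource listed in this configuration uses a placeholder agent name (``some_fence_agent``). As an illustrative sketch only, a device with the attributes shown above would be created along these lines with ``pcs stonith create`` (the real agent name and options depend on the fencing hardware):

    [root@pcmk-1 ~]# pcs stonith create fence_dev some_fence_agent \
          pcmk_host_map="pcmk-1:almalinux9-1;pcmk-2:almalinux9-2" \
          pcmk_delay_base="pcmk-1:5s;pcmk-2:0s" \
          op monitor interval=60s
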
doc/sphinx/Clusters_from_Scratch/cluster-setup.rst

Lines changed: 3 additions & 3 deletions
@@ -50,7 +50,7 @@ that will make our lives easier:
 .. code-block:: console
 
 # dnf install -y pacemaker pcs psmisc policycoreutils-python3
-
+
 .. NOTE::
 
 This document uses ``pcs`` for cluster management. Other alternatives,
@@ -206,10 +206,10 @@ Start by taking some time to familiarize yourself with what ``pcs`` can do.
 .. code-block:: console
 
 [root@pcmk-1 ~]# pcs
-
+
 Usage: pcs [-f file] [-h] [commands]...
 Control and configure pacemaker and corosync.
-
+
 Options:
 -h, --help Display usage and exit.
 -f file Perform actions on file instead of active CIB.

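The ``pcs`` usage text above is the starting point; later in the same chapter the guide uses ``pcs`` to authenticate the nodes and create the cluster. A minimal sketch of those later steps, assuming the node names and cluster name used throughout the guide (``pcmk-1``, ``pcmk-2``, ``mycluster``); the guide's exact invocations may differ:

    [root@pcmk-1 ~]# pcs host auth pcmk-1 pcmk-2 -u hacluster
    [root@pcmk-1 ~]# pcs cluster setup mycluster pcmk-1 pcmk-2
    [root@pcmk-1 ~]# pcs cluster start --all
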
doc/sphinx/Clusters_from_Scratch/index.rst

Lines changed: 2 additions & 11 deletions
@@ -4,8 +4,6 @@ Clusters from Scratch
 *Step-by-Step Instructions for Building Your First High-Availability Cluster*
 
 
-Abstract
---------
 This document provides a step-by-step guide to building a simple high-availability
 cluster using Pacemaker.
 
@@ -22,9 +20,6 @@ included. However, the guide is primarily composed of commands, the reasons for
 executing them, and their expected outputs.
 
 
-Table of Contents
------------------
-
 .. toctree::
    :maxdepth: 3
    :numbered:
@@ -41,9 +36,5 @@ Table of Contents
    ap-configuration
    ap-corosync-conf
    ap-reading
-
-Index
------
-
-* :ref:`genindex`
-* :ref:`search`
+:ref:`genindex`
+:ref:`search`

doc/sphinx/Clusters_from_Scratch/installation.rst

Lines changed: 15 additions & 15 deletions
@@ -7,7 +7,7 @@ Install |CFS_DISTRO| |CFS_DISTRO_VER|
 Boot the Install Image
 ______________________
 
-Download the latest |CFS_DISTRO| |CFS_DISTRO_VER| DVD ISO by navigating to
+Download the latest |CFS_DISTRO| |CFS_DISTRO_VER| DVD ISO by navigating to
 the |CFS_DISTRO| `mirrors list <https://mirrors.almalinux.org/isos.html>`_,
 selecting the latest 9.x version for your machine's architecture, selecting a
 download mirror that's close to you, and finally selecting the latest .iso file
@@ -192,13 +192,13 @@ Ensure that the machine has the static IP address you configured earlier.
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
-inet6 ::1/128 scope host
+inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
 link/ether 52:54:00:32:cf:a9 brd ff:ff:ff:ff:ff:ff
 inet 192.168.122.101/24 brd 192.168.122.255 scope global noprefixroute enp1s0
 valid_lft forever preferred_lft forever
-inet6 fe80::c3e1:3ba:959:fa96/64 scope link noprefixroute
+inet6 fe80::c3e1:3ba:959:fa96/64 scope link noprefixroute
 valid_lft forever preferred_lft forever
 
 .. NOTE::
@@ -219,7 +219,7 @@ Next, ensure that the routes are as expected:
 .. code-block:: console
 
 [root@pcmk-1 ~]# ip route
-default via 192.168.122.1 dev enp1s0 proto static metric 100
+default via 192.168.122.1 dev enp1s0 proto static metric 100
 192.168.122.0/24 dev enp1s0 proto kernel scope link src 192.168.122.101 metric 100
 
 If there is no line beginning with ``default via``, then use ``nmcli`` to add a
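The sentence above continues in the guide with instructions to add a default route using ``nmcli``. A minimal sketch of what that could look like, assuming the NetworkManager connection is named after the ``enp1s0`` interface and uses the gateway from the example network:

    [root@pcmk-1 ~]# nmcli connection modify enp1s0 ipv4.gateway 192.168.122.1
    [root@pcmk-1 ~]# nmcli connection up enp1s0
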
@@ -238,7 +238,7 @@ testing whether we can reach the gateway we configured.
 [root@pcmk-1 ~]# ping -c 1 192.168.122.1
 PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
 64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.492 ms
-
+
 --- 192.168.122.1 ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms
@@ -250,7 +250,7 @@ Now try something external; choose a location you know should be available.
 [root@pcmk-1 ~]# ping -c 1 www.clusterlabs.org
 PING mx1.clusterlabs.org (95.217.104.78) 56(84) bytes of data.
 64 bytes from mx1.clusterlabs.org (95.217.104.78): icmp_seq=1 ttl=54 time=134 ms
-
+
 --- mx1.clusterlabs.org ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 133.987/133.987/133.987/0.000 ms
@@ -269,11 +269,11 @@ From another host, check whether we can see the new host at all:
 [gchin@gchin ~]$ ping -c 1 192.168.122.101
 PING 192.168.122.101 (192.168.122.101) 56(84) bytes of data.
 64 bytes from 192.168.122.101: icmp_seq=1 ttl=64 time=0.344 ms
-
+
 --- 192.168.122.101 ping statistics ---
 1 packets transmitted, 1 received, 0% packet loss, time 0ms
 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms
-
+
 Next, login as ``root`` via SSH.
 
 .. code-block:: console
@@ -283,9 +283,9 @@ Next, login as ``root`` via SSH.
 ECDSA key fingerprint is SHA256:NBvcRrPDLIt39Rf0Tz4/f2Rd/FA5wUiDOd9bZ9QWWjo.
 Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
 Warning: Permanently added '192.168.122.101' (ECDSA) to the list of known hosts.
-root@192.168.122.101's password:
+root@192.168.122.101's password:
 Last login: Tue Jan 10 20:46:30 2021
-[root@pcmk-1 ~]#
+[root@pcmk-1 ~]#
 
 Apply Updates
 _____________
@@ -351,7 +351,7 @@ Confirm that you can communicate between the two new nodes:
 64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=1.22 ms
 64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.795 ms
 64 bytes from 192.168.122.102: icmp_seq=3 ttl=64 time=0.751 ms
-
+
 --- 192.168.122.102 ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 2054ms
 rtt min/avg/max/mdev = 0.751/0.923/1.224/0.214 ms
@@ -378,7 +378,7 @@ We can now verify the setup by again using ``ping``:
 64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=1 ttl=64 time=0.295 ms
 64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=2 ttl=64 time=0.616 ms
 64 bytes from pcmk-2.localdomain (192.168.122.102): icmp_seq=3 ttl=64 time=0.809 ms
-
+
 --- pcmk-2.localdomain ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 2043ms
 rtt min/avg/max/mdev = 0.295/0.573/0.809/0.212 ms
@@ -444,10 +444,10 @@ Install the key on the other node:
 Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
 /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
 /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
-root@pcmk-2's password:
-
+root@pcmk-2's password:
+
 Number of key(s) added: 1
-
+
 Now try logging into the machine, with: "ssh 'pcmk-2'"
 and check to make sure that only the key(s) you wanted were added.
 

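The ``ssh-copy-id`` output in the last hunk comes from distributing a root SSH key from pcmk-1 to pcmk-2. A minimal sketch of the two commands involved, assuming an ed25519 key with no passphrase (the guide may use a different key type or path):

    [root@pcmk-1 ~]# ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
    [root@pcmk-1 ~]# ssh-copy-id root@pcmk-2
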
doc/sphinx/Clusters_from_Scratch/shared-storage.rst

Lines changed: 8 additions & 8 deletions
@@ -90,16 +90,16 @@ which is more than sufficient for a single HTML file and (later) GFS2 metadata.
 .. code-block:: console
 
 [root@pcmk-1 ~]# vgs
-VG #PV #LV #SN Attr VSize VFree
+VG #PV #LV #SN Attr VSize VFree
 almalinux_pcmk-1 1 2 0 wz--n- <19.00g <13.00g
 
 [root@pcmk-1 ~]# lvcreate --name drbd-demo --size 512M almalinux_pcmk-1
 Logical volume "drbd-demo" created.
 [root@pcmk-1 ~]# lvs
 LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
-drbd-demo almalinux_pcmk-1 -wi-a----- 512.00m
-root almalinux_pcmk-1 -wi-ao---- 4.00g
-swap almalinux_pcmk-1 -wi-ao---- 2.00g
+drbd-demo almalinux_pcmk-1 -wi-a----- 512.00m
+root almalinux_pcmk-1 -wi-ao---- 4.00g
+swap almalinux_pcmk-1 -wi-ao---- 2.00g
 
 Repeat for the second node, making sure to use the same size:
 
@@ -210,9 +210,9 @@ Run them on one node:
 The server's response is:
 
 you are the 25212th user to install this version
-
+
 We can confirm DRBD's status on this node:
-
+
 .. code-block:: console
 
 [root@pcmk-1 ~]# drbdadm status
@@ -596,7 +596,7 @@ it can no longer host resources, and eventually all the resources will move.
 * Promoted: [ pcmk-1 ]
 * Stopped: [ pcmk-2 ]
 * WebFS (ocf:heartbeat:Filesystem): Started pcmk-1
-
+
 Daemon Status:
 corosync: active/disabled
 pacemaker: active/disabled
@@ -630,7 +630,7 @@ eligible to host resources again.
 * Promoted: [ pcmk-1 ]
 * Unpromoted: [ pcmk-2 ]
 * WebFS (ocf:heartbeat:Filesystem): Started pcmk-1
-
+
 Daemon Status:
 corosync: active/disabled
 pacemaker: active/disabled

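The two cluster status excerpts above show a node being put into standby and later made eligible to host resources again. A minimal sketch of the commands that produce this kind of transition, with the node name as an assumption (the status above suggests ``pcmk-2`` is the node taken out of service; the guide's exact invocation may differ):

    [root@pcmk-1 ~]# pcs node standby pcmk-2
    [root@pcmk-1 ~]# pcs node unstandby pcmk-2
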
doc/sphinx/Clusters_from_Scratch/verification.rst

Lines changed: 4 additions & 4 deletions
@@ -68,17 +68,17 @@ Next, check the membership and quorum APIs:
 
 .. code-block:: console
 
-[root@pcmk-1 ~]# corosync-cmapctl | grep members
+[root@pcmk-1 ~]# corosync-cmapctl | grep members
 runtime.members.1.config_version (u64) = 0
-runtime.members.1.ip (str) = r(0) ip(192.168.122.101)
+runtime.members.1.ip (str) = r(0) ip(192.168.122.101)
 runtime.members.1.join_count (u32) = 1
 runtime.members.1.status (str) = joined
 runtime.members.2.config_version (u64) = 0
-runtime.members.2.ip (str) = r(0) ip(192.168.122.102)
+runtime.members.2.ip (str) = r(0) ip(192.168.122.102)
 runtime.members.2.join_count (u32) = 1
 runtime.members.2.status (str) = joined
 
-[root@pcmk-1 ~]# pcs status corosync
+[root@pcmk-1 ~]# pcs status corosync
 
 Membership information
 ----------------------

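Both commands in this hunk read membership data from corosync. As a related aside not shown in this hunk, the quorum state itself can also be inspected directly with corosync's quorum tool:

    [root@pcmk-1 ~]# corosync-quorumtool -s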