diff --git a/xml/ha_installation_overview.xml b/xml/ha_installation_overview.xml
index f921b6dd..73b98d81 100644
--- a/xml/ha_installation_overview.xml
+++ b/xml/ha_installation_overview.xml
@@ -16,7 +16,7 @@
-You can also use a combination of both setup methods, for example: set up one node with YaST cluster and then use one of the bootstrap scripts to integrate more nodes (or vice versa).
+ This chapter gives an overview of the steps required to set up a working and supported &ha; cluster, and describes all of the options available for a full cluster setup. If you want to start with a basic two-node cluster with only the default options, see .
@@ -24,27 +24,128 @@ You can also use a combination of both setup methods, for example: set up one no
yes
-
- If you are setting up a &ha; cluster with &productnamereg; for the first time, the
- easiest way is to start with a basic two-node cluster. You can also use the
- two-node cluster to run some tests. Afterward, you can add more
- nodes by cloning existing cluster nodes with &ay;. The cloned nodes will
- have the same packages installed and the same system configuration as the
- original ones.
-
-
-
- Workflow options
-
-
-
-
-
-
- Preconfiguration options
-
-
-
-
-
+
+ Overview of installing a &ha; cluster
+
+
+ Review to make sure your nodes and other infrastructure meet the requirements for a &ha; cluster.
+
+
+
+
+ You can set up the cluster with either the &rootuser; user or a user with sudo privileges. Review to determine the appropriate user for your requirements.
+
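+
+ For example, a minimal sudoers rule for a setup user might look like the following sketch. The user name tux is a placeholder, and the assumption that the setup user needs passwordless sudo depends on how you run the setup tools; adjust the rule to your security policy.
+
+ # /etc/sudoers.d/ha-setup -- hypothetical example; edit with visudo
+ tux ALL=(ALL) NOPASSWD:ALL
+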
+
+
+
+ Install the &ha; extension and packages on the nodes as described in .
+
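+
+ As a sketch, the installation on each node might look like the following. The product version string and registration key are placeholders.
+
+ # Register the High Availability extension
+ SUSEConnect -p sle-ha/15.6/x86_64 -r REGISTRATION_KEY
+ # Install the High Availability pattern
+ zypper install -t pattern ha_sles
+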
+
+
+
+ Set up the cluster on the nodes. You can use either of the following methods:
+
+
+
+ The bootstrap scripts
+
+
+ The &yast; cluster module
+
+
+ You can also use a combination of both methods. For example, you could set up one node with the &yast; cluster module, then use the bootstrap scripts to add more nodes.
+
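+
+ As a minimal sketch of the bootstrap method, run the initialization script on the first node; it prompts for the details it needs.
+
+ # On the first node: initialize a new cluster
+ crm cluster init
+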
+
+ The following table shows the components that are configured by each method:
+
+
+
+
+
+
+
+
+ Component
+ Bootstrap scripts
+ &yast; cluster module
+
+
+
+
+ Cluster name
+ Yes
+ Yes
+
+
+ &corosync;
+ Yes
+ Yes
+
+
+ &csync;
+ Yes
+ Yes
+
+
+ Firewall ports
+ Yes
+ Yes
+
+
+ Passwordless SSH
+ Yes
+ No; configure before setup
+
+
+ &qdevice;
+ To configure &qdevice; with either setup method, you must set up the &qnet; server before you initialize the cluster. See . A setup sketch follows the table.
+ Optional
+ Optional
+
+
+ &stonith; (node fencing)
+ Optional (SBD only)
+ To configure SBD with the bootstrap scripts, you must set up shared storage and the watchdog before you initialize the cluster. See . A setup sketch follows the table.
+ No; configure after setup
+
+
+ Virtual IP address for &hawk2;
+ Yes
+ No; configure after setup
+
+
+
+
+
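+
+ For the &qdevice; footnote above, the preparation might look like the following sketch. The host name is a placeholder, and the &qnet; server must be a machine outside the cluster.
+
+ # On the &qnet; server (outside the cluster):
+ zypper install corosync-qnetd
+ systemctl enable --now corosync-qnetd
+ # On the first cluster node, initialize with QDevice support:
+ crm cluster init qdevice --qnetd-hostname=qnetd-server.example.com
+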
+
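+
+ Similarly, for the SBD footnote, preparing the watchdog and shared storage before initialization might look like this sketch. The device path is a placeholder; prefer a hardware watchdog over the softdog module where available.
+
+ # Load a watchdog module (softdog is a software fallback)
+ modprobe softdog
+ # Initialize the cluster with an SBD device on shared storage
+ crm cluster init -s /dev/disk/by-id/SHARED-DISK-ID
+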
+
+ Add more nodes. You can use the bootstrap scripts or the &yast; cluster module, or you can clone nodes for mass deployment as described in .
+
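+
+ For example, with the bootstrap scripts, joining might look like the following sketch. The node name is a placeholder for an existing cluster node.
+
+ # On each new node: join the existing cluster
+ crm cluster join -c node1
+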
+
+
+
+ To be supported, a &sleha; cluster must have &stonith; (node fencing) enabled. A node fencing mechanism can be one of the following:
+
+
+
+
+ A physical device (a power switch). To configure the cluster to use physical &stonith; devices, see .
+
+
+
+
+ SBD (&stonith; Block Device) in combination with a watchdog. To configure SBD devices and the watchdog, see . A configuration sketch follows this list.
+
+
+
+
+
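+
+ For the SBD option, a fencing resource can then be configured with the crm shell, as in the following sketch. Parameters depend on your environment; the bootstrap scripts create a similar resource automatically.
+
+ # Create the SBD fencing resource and enable fencing cluster-wide
+ crm configure primitive stonith-sbd stonith:external/sbd
+ crm configure property stonith-enabled=true
+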
+
+ Continue to and to set up cluster resources, data replication, and other components as needed.
+
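+
+ For example, adding a simple resource such as a virtual IP address with the crm shell might look like this sketch. The resource name and address are placeholders.
+
+ # Add a virtual IP address as a cluster resource
+ crm configure primitive admin-ip IPaddr2 params ip=192.168.1.10
+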
+
+
diff --git a/xml/ha_log_in.xml b/xml/ha_log_in.xml
index fdc04895..34d0d7f7 100644
--- a/xml/ha_log_in.xml
+++ b/xml/ha_log_in.xml
@@ -69,7 +69,7 @@
A user with sudo privileges (with SSH agent forwarding)
- You can use SSH forwarding to pass your local SSH keys to the cluster nodes.
+ You can use SSH agent forwarding to pass your local SSH keys to the cluster nodes.
This can be useful if you need to avoid storing SSH keys on the nodes, but requires
additional configuration on your local machine and on the cluster nodes.
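+ As a sketch of that configuration, enable agent forwarding for the cluster nodes on your local machine. The host names are placeholders; if you log in as a sudo user, the sudoers policy on the nodes may also need to keep SSH_AUTH_SOCK, which is an assumption that depends on your setup.
+ # ~/.ssh/config on the local machine
+ Host alice bob
+     ForwardAgent yes
+ # /etc/sudoers.d/ssh-agent -- on each node; edit with visudo
+ Defaults env_keep += "SSH_AUTH_SOCK"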