diff --git a/DC-SLE-HA-deployment b/DC-SLE-HA-deployment
deleted file mode 100644
index 0f0b4c94..00000000
--- a/DC-SLE-HA-deployment
+++ /dev/null
@@ -1,25 +0,0 @@
-## ----------------------------
-## Doc Config File for SUSE Linux Enterprise High Availability Extension
-## Full installation guide
-## ----------------------------
-##
-## Basics
-MAIN="MAIN.SLEHA.xml"
-ROOTID=book-deployment
-
-## Profiling
-PROFOS="sles"
-PROFCONDITION="suse-product"
-
-## stylesheet location
-STYLEROOT="/usr/share/xml/docbook/stylesheet/suse2022-ns"
-FALLBACK_STYLEROOT="/usr/share/xml/docbook/stylesheet/suse-ns"
-
-## enable sourcing
-export DOCCONF=$BASH_SOURCE
-
-## Do not show remarks directly in the (PDF) text
-#XSLTPARAM="--param use.xep.annotate.pdf=0"
-
-## Sort the glossary
-XSLTPARAM="--param glossary.sort=1"
diff --git a/xml/MAIN.SLEHA.xml b/xml/MAIN.SLEHA.xml
index 1aefb5c2..812a9156 100644
--- a/xml/MAIN.SLEHA.xml
+++ b/xml/MAIN.SLEHA.xml
@@ -42,9 +42,6 @@
-
-
-
diff --git a/xml/book_administration.xml b/xml/book_administration.xml
index 819f3fd7..6eb0abd6 100644
--- a/xml/book_administration.xml
+++ b/xml/book_administration.xml
@@ -55,6 +55,22 @@
+
+
+
+
+ Installation and setup
+
+
+
+
+
+
+
+
+
+
+
diff --git a/xml/ha_bootstrap_install.xml b/xml/ha_bootstrap_install.xml
index c1a5cc8b..f409e7da 100644
--- a/xml/ha_bootstrap_install.xml
+++ b/xml/ha_bootstrap_install.xml
@@ -16,7 +16,9 @@
-
+ &productname; includes bootstrap scripts to simplify the installation of a cluster.
+ You can use these scripts to set up the cluster on the first node, add more nodes to the
+ cluster, remove nodes from the cluster, and adjust certain settings in an existing cluster.
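The bootstrap workflow described above can be sketched with the crm shell; the node names alice and bob are placeholders, and the available options vary by product version:

```shell
# Set up the cluster on the first node (interactive bootstrap)
crm cluster init

# On a second machine, join the cluster that node alice belongs to
crm cluster join -c alice

# Remove node bob from the cluster again
crm cluster remove bob
```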
diff --git a/xml/ha_install_intro.xml b/xml/ha_install_intro.xml
deleted file mode 100644
index dd655a3d..00000000
--- a/xml/ha_install_intro.xml
+++ /dev/null
@@ -1,30 +0,0 @@
-
-
- %entities;
-]>
-
-
- Preface
-
-
-
- editing
-
-
- yes
-
-
-
-
-
-
-
-
-
-
-
diff --git a/xml/ha_sbd_watchdog.xml b/xml/ha_sbd_watchdog.xml
deleted file mode 100644
index df3848d7..00000000
--- a/xml/ha_sbd_watchdog.xml
+++ /dev/null
@@ -1,216 +0,0 @@
-
-
-
- %entities;
-]>
-
-
- Setting up a watchdog for SBD
-
-
-
- If you are using SBD as your &stonith; device, you must enable a watchdog on each
- cluster node. If you are using a different &stonith; device, you can skip this chapter.
-
-
-
-
- yes
-
-
-
-
-
-
- &productname; ships with several kernel modules that provide hardware-specific watchdog drivers.
- For clusters in production environments, we recommend using a hardware watchdog.
- However, if no watchdog matches your hardware, the software watchdog
- (softdog) can be used instead.
-
-
- &productname; uses the SBD daemon as the software component that feeds the watchdog.
-
-
-
- Using a hardware watchdog
-
- Finding the right watchdog kernel module for a given system is not
- trivial. Automatic probing often fails, so the wrong modules
- may already be loaded before the right one gets a chance.
-
- The following table lists some commonly used watchdog drivers. However, this is
- not a complete list of supported drivers. If your hardware is not listed here,
- you can also find a list of choices in the following directories:
-
-
-
-
- /lib/modules/KERNEL_VERSION/kernel/drivers/watchdog
-
-
-
-
- /lib/modules/KERNEL_VERSION/kernel/drivers/ipmi
-
-
-
-
- Alternatively, ask your hardware or
- system vendor for details on system-specific watchdog configuration.
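The driver directories mentioned above can also be listed directly, as a quick way to see which modules ship with the running kernel (a sketch; the output depends on the installed kernel, and either directory may be absent):

```shell
# Kernel version of the running system
kver="$(uname -r)"

# Watchdog and IPMI driver modules shipped with this kernel
ls "/lib/modules/$kver/kernel/drivers/watchdog" 2>/dev/null || true
ls "/lib/modules/$kver/kernel/drivers/ipmi" 2>/dev/null || true
```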
-
-
- Commonly used watchdog drivers
-
-
-
- Hardware
- Driver
-
-
-
-
- HP
- hpwdt
-
-
- Dell, Lenovo (Intel TCO)
- iTCO_wdt
-
-
- Fujitsu
- ipmi_watchdog
-
-
- LPAR on IBM Power
- pseries-wdt
-
-
- VM on IBM z/VM
- vmwatchdog
-
-
- Xen VM (DomU)
- xen_wdt
-
-
- VM on VMware vSphere
- wdat_wdt
-
-
- Generic
- softdog
-
-
-
-
-
- Accessing the watchdog timer
-
- Some hardware vendors ship systems management software that uses the
- watchdog for system resets (for example, HP ASR daemon). If the watchdog is
- used by SBD, disable such software. No other software may access the
- watchdog timer.
-
-
-
- Loading the correct kernel module
-
-
- List the drivers that are installed with your kernel version:
-
-&prompt.root;rpm -ql kernel-VERSION | grep watchdog
-
-
-
- List any watchdog modules that are currently loaded in the kernel:
-
-&prompt.root;lsmod | grep -E "(wd|dog)"
-
-
-
- If you get a result, unload the wrong module:
-
-&prompt.root;rmmod WRONG_MODULE
-
-
-
- Enable the watchdog module that matches your hardware:
-
-&prompt.root;echo WATCHDOG_MODULE > /etc/modules-load.d/watchdog.conf
-&prompt.root;systemctl restart systemd-modules-load
-
-
-
- Test whether the watchdog module is loaded correctly:
-
-&prompt.root;lsmod | grep dog
-
-
-
- Verify whether the watchdog device is available:
-
-&prompt.root;ls -l /dev/watchdog*
-&prompt.root;sbd query-watchdog
-
- If the watchdog device is not available, check the module name and options,
- or try another driver.
-
-
-
-
- Verify whether the watchdog device works:
-
-&prompt.root;sbd -w WATCHDOG_DEVICE test-watchdog
-
-
-
- Reboot your machine to make sure there are no conflicting kernel modules. For example,
- the message cannot register ... in the system log indicates
- such a conflict. To ignore such modules, refer to
- .
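After the reboot, one way to look for such conflicts in the kernel log (a sketch; the exact message text varies by driver, and reading the kernel log may require root):

```shell
# Search the kernel ring buffer for failed watchdog registrations;
# no output means no conflict was logged
dmesg 2>/dev/null | grep -i "cannot register" || true
```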
-
-
-
-
-
-
- Using the software watchdog (softdog)
-
- For clusters in production environments, we recommend using a hardware-specific watchdog
- driver. However, if no watchdog matches your hardware,
- softdog can be used instead.
-
-
- Softdog limitations
-
- The softdog driver assumes that at least one CPU is still running. If all CPUs are stuck,
- the code in the softdog driver that should reboot the system is never executed.
- In contrast, hardware watchdogs keep working even if all CPUs are stuck.
-
-
-
- Loading the softdog kernel module
-
-
- Enable the softdog watchdog:
-
-&prompt.root;echo softdog > /etc/modules-load.d/watchdog.conf
-&prompt.root;systemctl restart systemd-modules-load
-
-
-
- Check whether the softdog watchdog module is loaded correctly:
-
-&prompt.root;lsmod | grep softdog
-
-
-
-
-