diff --git a/xml/ha_cluster_md.xml b/xml/ha_cluster_md.xml
index d34c4c51..26dc904b 100644
--- a/xml/ha_cluster_md.xml
+++ b/xml/ha_cluster_md.xml
@@ -155,7 +155,7 @@ ARRAY /dev/md0 UUID=1d70f103:49740ef1:af2afce5:fcf6a489
 Configure a CRM resource as follows:
-Create a Raid1 primitive:
+Create a Raid1 primitive for the Cluster MD device:
 &prompt.crm.conf;primitive raider Raid1 \
   params raidconf="/etc/mdadm.conf" raiddev=/dev/md0 \
   force_clones=true \
@@ -164,17 +164,32 @@ ARRAY /dev/md0 UUID=1d70f103:49740ef1:af2afce5:fcf6a489
   op stop timeout=20s interval=0
-Add the raider resource to the base group for storage that you have created for
-DLM:
-&prompt.crm.conf;modgroup g-storage add raider
-The add sub-command appends the new group
-member by default.
-
-If not already done, clone the g-storage group so that it runs on all nodes:
-
-&prompt.crm.conf;clone cl-storage g-storage \
-  meta interleave=true target-role=Started
-
+
+Make sure the Raid1 primitive can only run on nodes where the
+DLM resource is already running:
+
+
+
+
+You can add a single Raid1 primitive to the cloned
+g-storage group, which already has internal colocation
+and order constraints:
+
+&prompt.crm.conf;modgroup g-storage add raider
+
+
+
+Do not add multiple Raid1 primitives to
+the group, because this creates a dependency between the Cluster MD devices.
+For multiple devices, clone the primitives and add constraints to the clones:
+
+&prompt.crm.conf;clone cl-raider1 raider1 meta interleave=true
+&prompt.crm.conf;clone cl-raider2 raider2 meta interleave=true
+&prompt.crm.conf;colocation col-cmd-with-dlm inf: ( cl-raider1 cl-raider2 ) cl-storage
+&prompt.crm.conf;order o-dlm-before-cmd Mandatory: cl-storage ( cl-raider1 cl-raider2 )
+
+
+Review your changes with show.
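
For comparison, a single Cluster MD device kept outside g-storage could be constrained explicitly, mirroring the multi-device commands above. This is only a minimal sketch, not part of the change: the clone name cl-raider and the constraint names col-raider-with-dlm and o-dlm-before-raider are hypothetical, and it assumes cl-storage is the existing clone of the g-storage group containing DLM:

&prompt.crm.conf;clone cl-raider raider meta interleave=true
&prompt.crm.conf;colocation col-raider-with-dlm inf: cl-raider cl-storage
&prompt.crm.conf;order o-dlm-before-raider Mandatory: cl-storage cl-raider
&prompt.crm.conf;show

Adding the single primitive to g-storage with modgroup achieves the same colocation and ordering through the group's internal constraints, so the explicit form is mainly useful when the Cluster MD device should not share the group's lifecycle.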