From 958a53a9452bee87a1a0ff2625cf60ca038ccce0 Mon Sep 17 00:00:00 2001
From: lpinne
Date: Mon, 18 Dec 2023 16:42:43 +0100
Subject: [PATCH 001/123] ocf_suse_SAPHanaFilesystem.7: typo

---
 man/ocf_suse_SAPHanaFilesystem.7 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/man/ocf_suse_SAPHanaFilesystem.7 b/man/ocf_suse_SAPHanaFilesystem.7
index 00284517..5a57a476 100644
--- a/man/ocf_suse_SAPHanaFilesystem.7
+++ b/man/ocf_suse_SAPHanaFilesystem.7
@@ -48,7 +48,7 @@ In case of NFS failure, HANA might stop working but the Linux cluster might not
 take action in reasonable time. Due to obligatory NFS for the directory
 /hana/shared/$SID/, scale-out systems are affected more often than scale-up
 systems.
-The SAPHanaFilsystem RA can be used on local filesystems as well. This migth be
+The SAPHanaFilsystem RA can be used on local filesystems as well. This might be
 useful for scale-up systems. Even if SAPHanaFilesystem improves Linux cluster
 reaction on failed filesystems, reliable filesystems are cornerstones for SAP
 HANA database availability.

From 4f99fe7d3467fe98f1535a5b3db49f0be15bd6b4 Mon Sep 17 00:00:00 2001
From: lpinne
Date: Mon, 18 Dec 2023 16:45:01 +0100
Subject: [PATCH 002/123] ocf_suse_SAPHanaFilesystem.7: details

---
 man/ocf_suse_SAPHanaFilesystem.7 | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/man/ocf_suse_SAPHanaFilesystem.7 b/man/ocf_suse_SAPHanaFilesystem.7
index 5a57a476..6eef86a1 100644
--- a/man/ocf_suse_SAPHanaFilesystem.7
+++ b/man/ocf_suse_SAPHanaFilesystem.7
@@ -53,9 +53,9 @@ useful for scale-up systems. Even if SAPHanaFilesystem improves Linux cluster
 reaction on failed filesystems, reliable filesystems are cornerstones for SAP
 HANA database availability.
 .PP
-SAPHanaFilesystem relies on cluster attributes set by SAPHanaTopology,
-particularly hana_$SID_site_srHook_$SITE. See manual pages
-ocf_suse_SAPHanaTopology(7) and SAPHanaSR-showAttr(8).
+SAPHanaFilesystem relies on cluster attributes set by SAPHanaTopology and +susHanaSR.py, particularly hana_$SID_site_srHook_$SITE. See manual pages +ocf_suse_SAPHanaTopology(7), susHanaSR.py(7) and SAPHanaSR-showAttr(8). .PP Please see also the REQUIREMENTS section below. .PP From 5abac4bf816088df3f54e8e96b10e0f3b0bdca88 Mon Sep 17 00:00:00 2001 From: lpinne Date: Mon, 18 Dec 2023 16:46:39 +0100 Subject: [PATCH 003/123] ocf_suse_SAPHanaFilesystem.7: formatting --- man/ocf_suse_SAPHanaFilesystem.7 | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/man/ocf_suse_SAPHanaFilesystem.7 b/man/ocf_suse_SAPHanaFilesystem.7 index 6eef86a1..d9b9f297 100644 --- a/man/ocf_suse_SAPHanaFilesystem.7 +++ b/man/ocf_suse_SAPHanaFilesystem.7 @@ -7,7 +7,7 @@ SAPHanaFilesystem \- Monitors mounted SAP HANA filesystems. .PP .\" .SH SYNOPSIS -\fBSAPHanaFilesystem\fP [start | stop | status | monitor | meta\-data | validate\-all | reload | methods | usage ] +\fBSAPHanaFilesystem\fP [ start | stop | status | monitor | meta\-data | validate\-all | reload | methods | usage ] .PP .\" .SH DESCRIPTION @@ -90,7 +90,7 @@ Optional. Default: "DIRECTORY=/hana/shared/$SID/". .PP \fBON_FAIL_ACTION\fR .RS 4 -Internal RA decision in case of monitor failure. Values: [ ignore | fence]. +Internal RA decision in case of monitor failure. Values: [ ignore | fence ]. .br - ignore: do nothing, just report failure into logs. 
.br From 09fc194a3789d2788fb8de169a82816f7e3941a5 Mon Sep 17 00:00:00 2001 From: lpinne Date: Mon, 18 Dec 2023 17:12:13 +0100 Subject: [PATCH 004/123] SAPHanaSR-ScaleOut.7 SAPHanaSR.7: wording --- man/SAPHanaSR-ScaleOut.7 | 5 ++--- man/SAPHanaSR.7 | 4 ++-- 2 files changed, 4 insertions(+), 5 deletions(-) diff --git a/man/SAPHanaSR-ScaleOut.7 b/man/SAPHanaSR-ScaleOut.7 index 8ff6cb09..a0c0d81c 100644 --- a/man/SAPHanaSR-ScaleOut.7 +++ b/man/SAPHanaSR-ScaleOut.7 @@ -1,10 +1,9 @@ .\" Version: 1.001 .\" -.TH SAPHanaSR-ScaleOut 7 "31 Oct 2023" "" "SAPHanaSR-angi" +.TH SAPHanaSR-ScaleOut 7 "18 Dec 2023" "" "SAPHanaSR-angi" .\" .SH NAME -SAPHanaSR-ScaleOut \- Tools for automating SAP HANA system replication in -scale-out setups. +SAPHanaSR-ScaleOut \- Automating SAP HANA system replication in scale-out setups. .PP .\" .SH DESCRIPTION diff --git a/man/SAPHanaSR.7 b/man/SAPHanaSR.7 index 66295f9c..aed5fcd4 100644 --- a/man/SAPHanaSR.7 +++ b/man/SAPHanaSR.7 @@ -1,9 +1,9 @@ .\" Version: 1.001 .\" -.TH SAPHanaSR 7 "18 Sep 2023" "" "SAPHanaSR-angi" +.TH SAPHanaSR 7 "18 Dec 2023" "" "SAPHanaSR-angi" .\" .SH NAME -SAPHanaSR \- Tools for automating SAP HANA system replication in scale-up setups. +SAPHanaSR \- Automating SAP HANA system replication in scale-up setups. 
 .PP
 .\"
 .SH DESCRIPTION

From 1776a0317b7956d7e0c7738f880f26603d979f49 Mon Sep 17 00:00:00 2001
From: lpinne
Date: Mon, 18 Dec 2023 17:17:11 +0100
Subject: [PATCH 005/123] SAPHanaSR-ScaleOut.7 SAPHanaSR.7: wording

---
 man/SAPHanaSR-ScaleOut.7 | 2 +-
 man/SAPHanaSR.7          | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/man/SAPHanaSR-ScaleOut.7 b/man/SAPHanaSR-ScaleOut.7
index a0c0d81c..99006ab5 100644
--- a/man/SAPHanaSR-ScaleOut.7
+++ b/man/SAPHanaSR-ScaleOut.7
@@ -12,7 +12,7 @@ SAPHanaSR-ScaleOut \- Automating SAP HANA system replication in scale-out setups
 .PP
 This manual page SAPHanaSR-ScaleOut provides information for setting up and
 managing automation of SAP HANA system replication (SR) in scale-out setups.
-For scale-up, please refer to SAPHanaSR(7).
+For scale-up, please refer to SAPHanaSR(7), see also SAPHanaSR-angi(7).
 .PP
 System replication will help to replicate the database data from one site to
 another site in order to compensate for database failures. With this mode of
diff --git a/man/SAPHanaSR.7 b/man/SAPHanaSR.7
index aed5fcd4..e871df30 100644
--- a/man/SAPHanaSR.7
+++ b/man/SAPHanaSR.7
@@ -12,7 +12,7 @@ SAPHanaSR \- Automating SAP HANA system replication in scale-up setups.
 .PP
 This manual page SAPHanaSR provides information for setting up and managing
 automation of SAP HANA system replication (SR) in scale-up setups.
-For scale-out, please refer to SAPHanaSR-ScaleOut(7).
+For scale-out, please refer to SAPHanaSR-ScaleOut(7), see also SAPhanaSR-angi(7).
 .PP
 System replication will help to replicate the database data from one site to
 another site in order to compensate for database failures. With this mode of

From e3b620a90f99a3aec48c1a7f11eb24a2a8e5d68b Mon Sep 17 00:00:00 2001
From: lpinne
Date: Mon, 18 Dec 2023 17:30:39 +0100
Subject: [PATCH 006/123] SAPHanaSR.7 ocf_suse_SAPHana.7 susHanaSR.py.7: aligned dates in copyright

---
 man/SAPHanaSR.7        | 4 ++--
 man/ocf_suse_SAPHana.7 | 4 ++--
 man/susHanaSR.py.7     | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/man/SAPHanaSR.7 b/man/SAPHanaSR.7
index e871df30..802a5c91 100644
--- a/man/SAPHanaSR.7
+++ b/man/SAPHanaSR.7
@@ -343,9 +343,9 @@ A.Briel, F.Herschel, L.Pinne.
 .PP
 .\"
 .SH COPYRIGHT
-(c) 2015-2018 SUSE Linux GmbH, Germany.
+(c) 2015-2017 SUSE Linux GmbH, Germany.
 .br
-(c) 2019-2023 SUSE LLC
+(c) 2018-2023 SUSE LLC
 .br
 The package SAPHanaSR-angi comes with ABSOLUTELY NO WARRANTY.
 .br
diff --git a/man/ocf_suse_SAPHana.7 b/man/ocf_suse_SAPHana.7
index bad5e97a..5c57b6c6 100644
--- a/man/ocf_suse_SAPHana.7
+++ b/man/ocf_suse_SAPHana.7
@@ -601,9 +601,9 @@ F.Herschel, L.Pinne.
 .SH COPYRIGHT
 (c) 2014 SUSE Linux Products GmbH, Germany.
 .br
-(c) 2015-2018 SUSE Linux GmbH, Germany.
+(c) 2015-2017 SUSE Linux GmbH, Germany.
 .br
-(c) 2019-2023 SUSE LLC
+(c) 2018-2023 SUSE LLC
 .br
 The resource agent SAPHana comes with ABSOLUTELY NO WARRANTY.
 .br
diff --git a/man/susHanaSR.py.7 b/man/susHanaSR.py.7
index 3ed2c1c3..b6958a30 100644
--- a/man/susHanaSR.py.7
+++ b/man/susHanaSR.py.7
@@ -309,9 +309,9 @@ A.Briel, F.Herschel, L.Pinne.
 .PP
 .\"
 .SH COPYRIGHT
-(c) 2015-2018 SUSE Linux GmbH, Germany.
+(c) 2015-2017 SUSE Linux GmbH, Germany.
 .br
-(c) 2019-2023 SUSE LLC
+(c) 2018-2023 SUSE LLC
 .br
 susHanaSR.py comes with ABSOLUTELY NO WARRANTY.
 .br

From b194d26627a42ca590ae99c80e75b54524c3b916 Mon Sep 17 00:00:00 2001
From: Fabian Herschel
Date: Tue, 19 Dec 2023 12:27:13 +0100
Subject: [PATCH 007/123] sct_test_restart_cluster_turn_hana: wait for SR before starting the cluster

---
 test/sct_test_restart_cluster_turn_hana | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/test/sct_test_restart_cluster_turn_hana b/test/sct_test_restart_cluster_turn_hana
index 6e208738..3cff5c64 100755
--- a/test/sct_test_restart_cluster_turn_hana
+++ b/test/sct_test_restart_cluster_turn_hana
@@ -80,6 +80,15 @@ ssh "$currSecondary" 'su - '"$sidadm"' -c "hdbnsutil -sr_takeover --suspendPrimary"'
 logger --id -t "sct_test_restart_cluster_turn_hana" -s "currPrimary=$currPrimary: sr_register online remoteHost=$vhostSecn"
 ssh "$currPrimary" 'su - '"$sidadm"' -c "hdbnsutil -sr_register --remoteHost='"$vhostSecn"' --remoteInstance='"$instNr"' --name='"$sitePrimary"' --replicationMode='"$srMode"' --operationMode='"$opMode"' --online"'
 
+while true; do
+    ssh "$currSecondary" 'su - '"$sidadm"' -c "python3 exe/python_support/systemReplicationStatus.py 1>/dev/null"'; rc=$?
+    if [[ "$rc" != 15 ]]; then
+        sleep 60
+    else
+        break
+    fi
+done
+
 logger --id -t "sct_test_restart_cluster_turn_hana" -s "Start Cluster"
 ssh "$node01" 'crm cluster run "crm cluster start"'

From 2d08a615bc4f5f2d8d6c740570b08b1624e0ef89 Mon Sep 17 00:00:00 2001
From: Fabian Herschel
Date: Tue, 19 Dec 2023 14:15:17 +0100
Subject: [PATCH 008/123] saphana-common-lib: mark some parts for future support of suspended primary detection

---
 ra/saphana-common-lib | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/ra/saphana-common-lib b/ra/saphana-common-lib
index 15f026e9..93f3ed6b 100755
--- a/ra/saphana-common-lib
+++ b/ra/saphana-common-lib
@@ -581,12 +581,16 @@ function saphana_init_attribute_definitions() {
 # get system replication mode and local site name
 #
 # globals: gSrMode(w), gSite(w), gSrr(w)
+#          gSrSuspended
 #
 function get_local_sr_config() {
     # called by: TODO
     # * get sr_mode (primary|sync|synmem|async|none)
     # * get local site name (MAINZ, KOELN, DUESSELDORF, AACHEN)
     # TODO PRIO3: NG - move SAP commands to command-init function
+    # TODO: PRIO2: Check for suspended primary and set grSrSuspended either to false, empty(init), unknown(gP does not cntain this info) or true
+    grSrSuspended=""
+    # TODO: PRIO2: for suspended primary detection (lss 3 or 4) we would need hdbnsutil -sr_state)
     hdbState="hdbnsutil -sr_stateConfiguration"
     # hdbMap="hdbnsutil -sr_stateHostMapping"
     #### SAP-CALL
@@ -605,6 +609,7 @@ function get_local_sr_config() {
             # if 'actual_mode' is not available, fallback to 'mode'
             if [ -z "$gSrMode" ]; then
                 [[ "$hdbANSWER" =~ "/mode="([^$'\n']+) ]] && gSrMode=${BASH_REMATCH[1]}
+                grSrSuspended="unknown"
             fi
             super_ocf_log info "ACT: hdbnsutil not answering - using global.ini as fallback - srmode=$gSrMode"
             ;;
@@ -688,6 +693,9 @@ function get_local_virtual_name() {
 # output: state like P,S,N,D
 # rc: 0, if P, S, N
 #     1, if D
+# TODO PRIO2: suspended primary handling:
+# hdbnsutil -sr_state --sapcontrol=1 includes the following output, if the primary is
suspended: +# isPrimarySuspended=true # function check_for_primary() { # called by: TODO From 9de48fad8a04fa53903cc1c759fda1cdc55a533e Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 19 Dec 2023 14:24:32 +0100 Subject: [PATCH 009/123] saphana-controller-lib: set global SEC attribute during probe --- ra/saphana-controller-lib | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/ra/saphana-controller-lib b/ra/saphana-controller-lib index 3af7b983..cb4554fd 100755 --- a/ra/saphana-controller-lib +++ b/ra/saphana-controller-lib @@ -1664,6 +1664,10 @@ function saphana_monitor_secondary() { promoted=1; ;; DEMOTED ) # This is the status we expect + if ocf_is_probe; then + super_ocf_log info "ACT: saphana_monitor_secondary: set global_sec attribute to $gSite" + set_hana_attribute "$NODENAME" "$gSite" "${ATTR_NAME_HANA_SEC[@]}" + fi promoted=0; ;; WAITING4PRIM ) # We are WAITING for PRIMARY so not testing the HANA engine now but check for a new start From abdc9038c6150c1d68c67d2c3ac6bdd5919ad857 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 19 Dec 2023 16:24:35 +0100 Subject: [PATCH 010/123] maintenance_with_standby_nodes.json, standby_secn_node.json: remove sid and mst-resource definition from test description. These values are coming from the properties file with current releases. 
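The json change in this patch relaxes the expected srHook value from exactly SWAIT to the alternative "srHook ~ (SWAIT|SFAIL)". Assuming the tester's "~" operator denotes an anchored POSIX extended regular expression match (an assumption here, not stated in this patch series; see SAPHanaSR-tests-syntax(5) for the authoritative definition), the condition can be reproduced in shell like this:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: evaluate a tester-style "attr ~ regex" condition.
# attr_matches is an illustrative helper, not part of SAPHanaSR-tester.
attr_matches() {
    local value="$1" pattern="$2"
    # Anchor the pattern so the whole attribute value must match.
    [[ "$value" =~ ^(${pattern})$ ]]
}

# "srHook ~ (SWAIT|SFAIL)" accepts either transient replication state:
attr_matches "SWAIT" "(SWAIT|SFAIL)" && echo "SWAIT accepted"
attr_matches "SFAIL" "(SWAIT|SFAIL)" && echo "SFAIL accepted"
attr_matches "SOK"   "(SWAIT|SFAIL)" || echo "SOK rejected"
```

This illustrates why the test description was changed: after the takeover the hook attribute may legitimately pass through either SWAIT or SFAIL, so an exact comparison against one value made the test flaky.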
---
 test/json/angi-ScaleUp/maintenance_with_standby_nodes.json | 4 +---
 test/json/angi-ScaleUp/standby_secn_node.json              | 2 --
 2 files changed, 1 insertion(+), 5 deletions(-)

diff --git a/test/json/angi-ScaleUp/maintenance_with_standby_nodes.json b/test/json/angi-ScaleUp/maintenance_with_standby_nodes.json
index fb2abb34..32788213 100644
--- a/test/json/angi-ScaleUp/maintenance_with_standby_nodes.json
+++ b/test/json/angi-ScaleUp/maintenance_with_standby_nodes.json
@@ -2,8 +2,6 @@
     "test": "maintenance_with_standby_nodes",
     "name": "standby+online secondary then standby+online primary",
     "start": "prereq10",
-    "sid": "HA1",
-    "mstResource": "ms_SAPHanaCon_HA1_HDB00",
     "steps": [
         {
             "step": "prereq10",
@@ -106,7 +104,7 @@
                 "lss == 1" ,
                 "srr == P" ,
                 "lpt == 10" ,
-                "srHook == SWAIT" ,
+                "srHook ~ (SWAIT|SFAIL)" ,
                 "srPoll == SFAIL"
             ],
             "sSite": "pSiteUp",
diff --git a/test/json/angi-ScaleUp/standby_secn_node.json b/test/json/angi-ScaleUp/standby_secn_node.json
index 71dbb1da..6c50bbf7 100644
--- a/test/json/angi-ScaleUp/standby_secn_node.json
+++ b/test/json/angi-ScaleUp/standby_secn_node.json
@@ -2,8 +2,6 @@
     "test": "standby_secondary_node",
     "name": "standby secondary node (and online again)",
     "start": "prereq10",
-    "sid": "HA1",
-    "mstResource": "ms_SAPHanaCon_HA1_HDB00",
     "steps": [
         {
             "step": "prereq10",

From c453a7fce760f58a94b07ee32ec2a7de168bfbd8 Mon Sep 17 00:00:00 2001
From: lpinne
Date: Tue, 19 Dec 2023 17:27:19 +0100
Subject: [PATCH 011/123] sct_test_maintenance_cluster_turn_hana: wait for SR in sync before ending resource maintenance, fix similar srHook issue as in restart_cluster_turn_hana

---
 test/sct_test_maintenance_cluster_turn_hana | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/test/sct_test_maintenance_cluster_turn_hana b/test/sct_test_maintenance_cluster_turn_hana
index 25b20d26..7b0ba2ef 100755
--- a/test/sct_test_maintenance_cluster_turn_hana
+++ b/test/sct_test_maintenance_cluster_turn_hana
@@ -55,7 +55,15 @@ ssh "${node02}" "crm resource maintenance $mstResource on"
 ssh "$currSecondary" 'su - '"$sidadm"' -c "hdbnsutil -sr_takeover --suspendPrimary"'
 ssh "$currPrimary" 'su - '"$sidadm"' -c "hdbnsutil -sr_register --remoteHost='"$vhostSecn"' --remoteInstance='"$instNr"' --name='"$sitePrimary"' --replicationMode='"$srMode"' --operationMode='"$opMode"' --online"'
-sleep 30
+while true; do
+    ssh "$currSecondary" 'su - '"$sidadm"' -c "python3 exe/python_support/systemReplicationStatus.py 1>/dev/null"'; rc=$?
+    if [[ "$rc" != 15 ]]; then
+        sleep 60
+    else
+        break
+    fi
+done
+
 ssh "${node02}" 'cs_wait_for_idle -s 2'
 # shellcheck disable=SC2029
 ssh "${node02}" "crm resource refresh $mstResource"

From 7dbf3fe95c95b83a670ef01814f5ded0c4e8a0dc Mon Sep 17 00:00:00 2001
From: lpinne
Date: Wed, 20 Dec 2023 11:20:56 +0100
Subject: [PATCH 012/123] SAPHanaSR-testCluster.8 SAPHanaSR-tester.7 SAPHanaSR-tests-angi-ScaleOut.7 SAPHanaSR-tests-angi-ScaleUp.7 SAPHanaSR-tests-description.7: formattig, typos, tests description

---
 man-tester/SAPHanaSR-testCluster.8         |   5 +-
 man-tester/SAPHanaSR-tester.7              |   1 +
 man-tester/SAPHanaSR-tests-angi-ScaleOut.7 | 127 +++---
 man-tester/SAPHanaSR-tests-angi-ScaleUp.7  | 322 ++-----
 man-tester/SAPHanaSR-tests-description.7   | 496 +++++++++++++++++++++
 5 files changed, 617 insertions(+), 334 deletions(-)
 create mode 100644 man-tester/SAPHanaSR-tests-description.7

diff --git a/man-tester/SAPHanaSR-testCluster.8 b/man-tester/SAPHanaSR-testCluster.8
index 553a2bb7..473c759d 100644
--- a/man-tester/SAPHanaSR-testCluster.8
+++ b/man-tester/SAPHanaSR-testCluster.8
@@ -148,7 +148,9 @@ TODO
 .\"
 .SH REQUIREMENTS
 .\"
-See the REQUIREMENTS section in SAPHanaSR-tester(7).
+See the REQUIREMENTS section in SAPHanaSR-tester(7) and SAPHanaSR-angi(7).
+Of course, HANA database and Linux cluster need to match certain requirements.
+Please refer to the product documentation.
 .PP
 .\"
 .SH BUGS
@@ -159,6 +161,7 @@ Please report any other feedback and suggestions to feedback@suse.com.
.\" .SH SEE ALSO \fBSAPHanaSR-tester\fP(7) , \fBSAPHanaSR-tests-syntax\fP(7) , +\fBSAPHanaSR-tests-description\fP(7) , \fBSAPHanaSR-tests-angi-ScaleUp\fP(7) , \fBSAPHanaSR-tests-angi-ScaleOut\fP(7) , \fBSAPHanaSR-angi\fP(7) , \fBSAPHanaSR-showAttr\fP(8) , \fBcrm_mon\fP(8) , \fBcrm\fP(8) , \fBcs_clusterstate\fP(8) diff --git a/man-tester/SAPHanaSR-tester.7 b/man-tester/SAPHanaSR-tester.7 index dd0fbdb7..11e86f77 100644 --- a/man-tester/SAPHanaSR-tester.7 +++ b/man-tester/SAPHanaSR-tester.7 @@ -236,6 +236,7 @@ Please report any other feedback and suggestions to feedback@suse.com. .\" .SH SEE ALSO \fBSAPHanaSR-testCluster\fP(8) , \fBSAPHanaSR-tests-syntax\fP(5) , +\fBSAPHanaSR-tests-description\fP(7) , \fBSAPHanaSR-tests-angi-ScaleUp\fP(7) , \fBSAPHanaSR-tests-angi-ScaleOut\fP(7) , \fBSAPHanaSR-angi\fP(7) , \fBSAPHanaSR-showAttr\fP(8) , \fBcrm_mon\fP(8) , \fBssh-keygen\fP(1) , \fBssh-copy-id\fP(1) , diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 index fea9965b..be48b4c1 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 @@ -1,6 +1,6 @@ .\" Version: 1.001 .\" -.TH SAPHanaSR-tests-angi-ScaleOut 7 "20 Nov 2023" "" "SAPHanaSR-angi" +.TH SAPHanaSR-tests-angi-ScaleOut 7 "20 Dec 2023" "" "SAPHanaSR-angi" .\" .SH NAME SAPHanaSR-tests-angi-ScaleOut \- Functional tests for SAPHanaSR Scale-Out. @@ -8,101 +8,107 @@ SAPHanaSR-tests-angi-ScaleOut \- Functional tests for SAPHanaSR Scale-Out. .\" .SH DESCRIPTION .PP -Functional test are shipped for scale-out scenarios. This tests could be run -out-of-the-box. The test cases are defined in dedicated files. -See manual page SAPHanaSR-tests-syntax(5) for syntax details. +Functional test are shipped for the scale-out ERP scenario. This tests could +be run out-of-the-box. The test cases are defined in dedicated files. +See manual page SAPHanaSR-tests-syntax(5) for syntax details. 
Details like +performed steps or expected behaviour of cluster and HANA are explained in +SAPHanaSR-tests-description(7). Entry point for all predefined tests is a clean and idle Linux cluster and a clean HANA pair in sync. Same is true for the final state. See manual page SAPHanaSR_maintenance_examples(7) for detecting the correct status and watching changes near real-time. + +Each test can be executed by running the command SAPHanaSR-testCluster with +appropriate parameters. See manual page SAPHanaSR-testCluster(8). .PP -Predefined functional tests for scale-out overwiev +Predefined functional tests for scale-out ERP overwiev: .TP -block_manual_takeover -blocked manual takeover, for susTkOver.py +\fBblock_manual_takeover\fP +Blocked manual takeover, for susTkOver.py. .TP -block_sr_and_freeze_prim_master_nfs -block HANA SR and freeze HANA NFS on primary master node +\fBblock_sr_and_freeze_prim_master_nfs\fP +Block HANA SR and freeze HANA NFS on primary master node. .TP -block_sr_and_freeze_prim_site_nfs -block HANA SR and freeze HANA NFS on primary site +\fBblock_sr_and_freeze_prim_site_nfs\fP +Block HANA SR and freeze HANA NFS on primary site. .TP -free_log_area -free HANA log area on primary site +\fBfree_log_area\fP +Free HANA log area on primary site. .TP -freeze_prim_master_nfs -freeze HANA NFS on primary master node +\fBfreeze_prim_master_nfs\fP +Freeze HANA NFS on primary master node. .TP -freeze_prim_site_nfs -freeze HANA NFS on primary site +\fBfreeze_prim_site_nfs\fP +Freeze HANA NFS on primary site. .TP -kill_prim_indexserver -Kill primary master indexserver, for susChkSrv.py +\fBkill_prim_indexserver\fP +Kill primary master indexserver, for susChkSrv.py. .TP -kill_prim_inst -Kill primary master instance +\fBkill_prim_inst\fP +Kill primary master instance. .TP -kill_prim_node -Kill primary master node +\fBkill_prim_node\fP +Kill primary master node. 
.TP -kill_prim_worker_indexserver -Kill primary worker indexserver, for susChkSrv.py +\fBkill_prim_worker_indexserver\fP +Kill primary worker indexserver, for susChkSrv.py. .TP -kill_prim_worker_inst -Kill primary worker instance +\fBkill_prim_worker_inst\fP +Kill primary worker instance. .TP -kill_prim_worker_node -Kill primary worker node +\fBkill_prim_worker_node\fP +Kill primary worker node. .TP -kill_secn_indexserver -Kill secondary master indexserver, for susChkSrv.py +\fBkill_secn_indexserver\fP +Kill secondary master indexserver, for susChkSrv.py. .TP -kill_secn_inst -Kill secondary master instance +\fBkill_secn_inst\fP +Kill secondary master instance. .TP -kill_secn_node -Kill secondary master node +\fBkill_secn_node\fP +Kill secondary master node. .TP -kill_secn_worker_inst -Kill secondary worker instance +\fBkill_secn_worker_inst\fP +Kill secondary worker instance. .TP -kill_secn_worker_node -Kill secondary worker node +\fBkill_secn_worker_node\fP +Kill secondary worker node. .TP -maintenance_cluster_turn_hana -maintenance procedure, manually turning HANA sites +\fBmaintenance_cluster_turn_hana\fP +Maintenance procedure, manually turning HANA sites. .TP -maintenance_with_standby_nodes -standby+online secondary then standby+online primary +\fBmaintenance_with_standby_nodes\fP +Maintenance procedure, standby+online secondary then standby+online primary. .TP -nop -no operation - check, wait and check again (stability check) +\fBnop\fP +No operation - check, wait and check again (stability check). .TP -restart_cluster - +\fBrestart_cluster\fP +Stop and restart cluster and HANA .TP -restart_cluster_hana_running - +\fBrestart_cluster_hana_running\fP +Stop and restart cluster, keep HANA running. .TP -restart_cluster_turn_hana - +\fBrestart_cluster_turn_hana\fP +Stop cluster and HANA, takeover HANA, start cluster. 
.TP -standby_primary_node TODO -standby primary node (and online again) +\fBstandby_primary_node\fP TODO +Standby primary master node and online again. .TP -standby_secondary_node TODO -standby secondary node (and online again) +\fBstandby_secondary_node\fP TODO +Standby secondary master node and online again. .RE .PP .\" .SH EXAMPLES .PP +.\" TODO .\" .SH FILES .TP -/usr/share/SAPHanaSR-tester/json/angi-ScaleOu/ -functional tests for SAPHanaSR-angi scale-out scenarios. +/usr/share/SAPHanaSR-tester/json/angi-ScaleOut/ +functional tests for SAPHanaSR-angi scale-out ERP scenarios. .TP /usr/bin/sct_test_* shell scripts for un-easy tasks on the cluster nodes. @@ -110,7 +116,9 @@ shell scripts for un-easy tasks on the cluster nodes. .\" .SH REQUIREMENTS .PP -See the REQUIREMENTS section in SAPHanaSR-tester(7). +See the REQUIREMENTS section in SAPHanaSR-tester(7) and SAPHanaSR-angi(7). +Of course, HANA database and Linux cluster need to match certain requirements. +Please refer to the product documentation. .\" .SH BUGS In case of any problem, please use your favourite SAP support process to open @@ -120,7 +128,8 @@ Please report any other feedback and suggestions to feedback@suse.com. 
.\" .SH SEE ALSO \fBSAPHanaSR-tester\fP(7) , \fBSAPHanaSR-testCluster\fP(8) , -\fBSAPHanaSR-tests-angi-ScaleUp\fP(7) , \fBSAPHanaSR-tests-syntax\fP(5) , +\fBSAPHanaSR-tests-description\fP(7) , \fBSAPHanaSR-tests-syntax\fP(5) , +\fBSAPHanaSR-tests-angi-ScaleUp\fP(7) , \fBSAPHanaSR-angi\fP(7) , \fBSAPHanaSR-showAttr\fP(8) .PP .\" diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 index 2f608571..a2510e45 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 @@ -1,6 +1,6 @@ .\" Version: 1.001 .\" -.TH SAPHanaSR-tests-angi-ScaleUp 7 "20 Nov 2023" "" "SAPHanaSR-angi" +.TH SAPHanaSR-tests-angi-ScaleUp 7 "20 Dec 2023" "" "SAPHanaSR-angi" .\" .SH NAME SAPHanaSR-tests-angi-ScaleUp \- Functional tests for SAPHanaSR Scale-Up. @@ -10,7 +10,9 @@ SAPHanaSR-tests-angi-ScaleUp \- Functional tests for SAPHanaSR Scale-Up. .PP Functional test are shipped for scale-up scenarios. This tests could be run out-of-the-box. The test cases are defined in dedicated files. -See manual page SAPHanaSR-tests-syntax(5) for syntax details. +See manual page SAPHanaSR-tests-syntax(5) for syntax details. Details like +performed steps or expected behaviour of cluster and HANA are explained in +SAPHanaSR-tests-description(7). Entry point for all predefined tests is a clean and idle Linux cluster and a clean HANA pair in sync. Same is true for the final state. @@ -18,302 +20,71 @@ See manual page SAPHanaSR_maintenance_examples(7) for detecting the correct status and watching changes near real-time. .PP Predefined functional tests for scale-up overview: -.PP +.TP \fBblock_manual_takeover\fP -.RS 2 -Descr: blocked manual takeover, for susTkOver.py -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: See susTkOver.py(7). -.br -Expect: Both nodes stay online. -Both HANA stay online. -Failed manual takeover attempt is logged in syslog and HANA tracefile. -SR stays SOK. -No takeover. No fencing. 
-.RE -.PP +Blocked manual takeover, for susTkOver.py. +.TP \fBblock_sr\fP -.RS 2 -Descr: block HANA SR and check SFAIL attribute; unblock to recover -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: See susHanaSR.py.(7). -.br -Expect: Both nodes stay online. -Both HANA stay online. -SR SFAIL and finally SOK. -No takeover. No fencing. -.RE -.PP +Block HANA SR and check SFAIL attribute; unblock to recover. +.TP \fBblock_sr_and_freeze_prim_fs\fP -.RS 2 -Descr: block HANA SR and freeze HANA FS on primary master node -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: See ocf_suse_SAPHanaFilesystem(7), susHanaSR.py.(7). -.br -Expect: Both nodes stay online. -HANA primary is stopped and finally back online. -SR SFAIL and finally SOK. -No takeover. No fencing. -.RE -.PP +Block HANA SR and freeze HANA FS on primary master node. +.TP \fBflup\fP -.RS 2 -Descr: like nop but very short sleep only - only for checking the test engine -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: Wait and see. -.br -Expect: Cluster and HANA are up and running, all good. -.RE -.PP +Like nop but very short sleep, just checking the test engine. +.TP \fBfree_log_area\fP -.RS 2 -Descr: free HANA log area on primary site -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: Free up HANA transaction log space and log backups. -.br -Expect: Cluster and HANA are up and running, all good. -.RE -.PP +Free HANA log area on primary site. +.TP \fBfreeze_prim_fs\fP -.RS 2 -Descr: freeze HANA FS on primary master node -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: See ocf_suse_SAPHanaFilesystem(7). -.br -Expect: Primary node fenced and finally started as secondary. -HANA primary stopped and finally started as secondary. -HANA secondary becomes finally primary. -SR SFAIL and finally SOK. -One takeover. One fence. -.RE -.PP +Freeze HANA FS on primary master node. 
+.TP \fBkill_prim_indexserver\fP -.RS 2 -Descr: Kill primary indexserver, for susChkSrv.py -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: See susChkSrv.py(7). -.br -Expect: Primary node stays online. -HANA primary stopped and finally started as secondary. -HANA secondary becomes finally primary. -SR SFAIL and finally SOK. -One takeover. No fencing (for action_on_lost=kill). -.RE -.PP +Kill primary indexserver, for susChkSrv.py. +.TP \fBkill_prim_inst\fP -.RS 2 -Descr: Kill primary instance -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: HDB kill -.br -Expect: Primary node stays online. -HANA primary stopped and finally started as secondary. -HANA secondary becomes finally primary. -SR SFAIL and finally SOK. -One takeover. No fencing. -.RE -.PP +Kill primary instance. +.TP \fBkill_prim_node\fP -.RS 2 -Descr: Kill primary node -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: systemctl reboot --force -.br -Expect: Primary node fenced and finally started as secondary. -HANA primary stopped and finally started as secondary. -HANA secondary becomes finally primary. -SR SFAIL and finally SOK. -One takeover. One fencing. -.RE -.PP +Kill primary node. +.TP \fBkill_secn_indexserver\fP -.RS 2 -Descr: Kill secondary indexserver, for susChkSrv.py -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: See susChkSrv.py(7). -.br -Expect: HANA secondary stopped and finally online. -HANA primary stays online. -SR SFAIL and finally SOK. -No takeover. No fencing (for action_on_lost=kill). -.RE -.PP -\fBkill_secn_inst\fP -.RS 2 -Descr: Kill secondary instance -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: HDB kill -.br -Expect: HANA secondary stopped and finally online. -HANA primary stays online. -SR SFAIL and finally SOK. -No takeover. No fencing. -.RE -.PP +Kill secondary indexserver, for susChkSrv.py. 
+.TP \fBkill_secn_node\fP -.RS 2 -Descr: Kill secondary node -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: systemctl reboot --force -.br -Expect: Secondary node fenced and finally online. -HANA primary stays online. -SR SFAIL and finally SOK. -No takeover. One fencing. -.RE -.PP +Kill secondary node. +.TP \fBmaintenance_cluster_turn_hana\fP -.RS 2 -Descr: maintenance procedure, manually turning HANA sites -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: See SAPHanaSR_maintenance_examples(7), https://www.suse.com/c/sap-hana-maintenance-suse-clusters/ . -.br -Expect: Both nodes stay online. -HANA primary stopped and finally started as secondary. -HANA secondary becomes finally primary by manual takeover. -SR SFAIL and finally SOK. -One takeover. No fencing. -.RE -.PP +Maintenance procedure, manually turning HANA sites. +.TP \fBmaintenance_with_standby_nodes\fP -.RS 2 -Descr: standby+online secondary then standby+online primary -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: See SAPHanaSR_maintenance_examples(7. -.br -Expect: Both nodes stay online. -HANA primary stopped and finally started as secondary. -HANA secondary becomes finally primary. -SR SFAIL and finally SOK. -One takeover. No fencing. -.RE -.PP +Maintenance procedure, standby+online secondary then standby+online primary. +.TP \fBnop\fP -.RS 2 -Descr: no operation - check, wait and check again (stability check) -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: Wait and see. -.br -Expect: Cluster and HANA are up and running, all good. -.RE -.PP +No operation - check, wait and check again (stability check). +.TP \fBrestart_cluster_hana_running\fP -.RS 2 -Descr: -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: -.br -Expect: Both nodes stay online. - -No takeover. No fencing. -.RE -.PP +Stop and restart cluster, keep HANA running. 
+.TP \fBrestart_cluster\fP -.RS 2 -Descr: -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: -.br -Expect: Both nodes stay online. - -No takeover. No fencing. -.RE -.PP +Stop and restart cluster and HANA. +.TP \fBrestart_cluster_turn_hana\fP -.RS 2 -Descr: -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: -.br -Expect: Both nodes stay online. - -One takeover. No fencing. -.RE +Stop cluster and HANA, manually start and takeover HANA, start cluster. .PP \fBsplit_brain_prio\fP -.RS 2 -Descr: Network split-brain with priority fencing -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: -.br -Expect: Secondary node fenced and finally online. -Primary node stays online. -HANA primary stays online. -SR SFAIL and finally SOK. -No takeover. One fencing. -.RE -.PP +Network split-brain with priority fencing. +.TP \fBstandby_primary_node\fP -.RS 2 -Descr: Set primary node standby and online again -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: crm node standby ; crm node online -.br -Expect: Both nodes aty online. -Primary node standby and finally back online. -HANA primary stopped and finally started as secondary. -HANA secondary finally primary by takeover. -SR SFAIL and finally SOK. -One takeover. No fencing. -.RE -.PP +Set primary node standby and online again. +.TP \fBstandby_secondary_node\fP -.RS 2 -Descr: Set secondary node standby and online again -.br -Prereq: Cluster and HANA are up and running, all good. -.br -Test: crm node standby ; crm node online -.br -Expect: Secondary node standby and finally online. -HANA primary stays online. -HANA secondary stopped and finally started. -SR SFAIL and finally SOK. No takeover. No fencing. -.RE +Set secondary node standby and online again. .PP .\" .SH EXAMPLES .PP +.\" TODO list all tests from json directories .\" .SH FILES .\" @@ -328,6 +99,8 @@ shell scripts for un-easy tasks on the cluster nodes. 
.SH REQUIREMENTS .\" See the REQUIREMENTS section in SAPHanaSR-tester(7) and SAPHanaSR-angi(7). +Of course, HANA database and Linux cluster need to match certain requirements. +Please refer to the product documentation. .PP .\" .SH BUGS @@ -338,7 +111,8 @@ Please report any other feedback and suggestions to feedback@suse.com. .\" .SH SEE ALSO \fBSAPHanaSR-tester\fP(7) , \fBSAPHanaSR-testCluster\fP(8) , -\fBSAPHanaSR-tests-angi-ScaleOut\fP(7) , \fBSAPHanaSR-tests-syntax\fP(5) , +\fBSAPHanaSR-tests-description\fP(7) , \fBSAPHanaSR-tests-syntax\fP(5) , +\fBSAPHanaSR-tests-angi-ScaleOut\fP(7) , \fBSAPHanaSR-angi\fP(7) , \fBSAPHanaSR-showAttr\fP(8) .PP .\" diff --git a/man-tester/SAPHanaSR-tests-description.7 b/man-tester/SAPHanaSR-tests-description.7 new file mode 100644 index 00000000..71cc735f --- /dev/null +++ b/man-tester/SAPHanaSR-tests-description.7 @@ -0,0 +1,496 @@ +.\" Version: 1.001 +.\" +.TH SAPHanaSR-tests-description 7 "20 Dec 2023" "" "SAPHanaSR-angi" +.\" +.SH NAME +SAPHanaSR-tests-description \- Functional tests for SAPHanaSR. +.PP +.\" +.SH DESCRIPTION +.PP +Functional tests are shipped for different scenarios. These tests can be run +out-of-the-box. The test cases are defined in dedicated files. +See manual page SAPHanaSR-tests-syntax(5) for syntax details. Tests for +SAPHanaSR-angi scale-up scenarios are listed in SAPHanaSR-tests-angi-ScaleUp(7), +for SAPHanaSR-angi scale-out ERP scenarios in SAPHanaSR-tests-angi-ScaleOut(7). + +Entry point for all predefined tests is a clean and idle Linux cluster and a +clean HANA pair in sync. Same is true for the final state. +See manual page SAPHanaSR_maintenance_examples(7) for detecting the correct +status and watching changes near real-time. + +Each test can be executed by running the command SAPHanaSR-testCluster with +appropriate parameters. See manual page SAPHanaSR-testCluster(8). +.PP +Predefined functional tests: +.PP +\fBblock_manual_takeover\fP +.RS 2 +Descr: Blocked manual takeover, for susTkOver.py. 
+.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See susTkOver.py(7). +.br +Expect: Both nodes stay online. +Both HANA stay online. +Failed manual takeover attempt is logged in syslog and HANA tracefile. +SR stays SOK. +No takeover. No fencing. +.RE +.PP +\fBblock_sr\fP +.RS 2 +Descr: Block HANA SR and check SFAIL attribute; unblock to recover. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See susHanaSR.py(7). +.br +Expect: Both nodes stay online. +Both HANA stay online. +SR SFAIL and finally SOK. +No takeover. No fencing. +.RE +.PP +\fBblock_sr_and_freeze_prim_fs\fP +.RS 2 +Descr: Block HANA SR and freeze HANA FS on primary master node. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See ocf_suse_SAPHanaFilesystem(7), susHanaSR.py(7). +.br +Expect: Both nodes stay online. +HANA primary is stopped and finally back online. +SR SFAIL and finally SOK. +No takeover. No fencing. +.RE +.PP +\fBblock_sr_and_freeze_prim_master_nfs\fP +.RS 2 +Descr: Block HANA SR and freeze HANA NFS on primary master node. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See ocf_suse_SAPHanaFilesystem(7), susHanaSR.py(7). +.br +Expect: Both nodes stay online. +HANA primary is stopped and finally back online. +SR SFAIL and finally SOK. +No takeover. No fencing. +.RE +.PP +\fBblock_sr_and_freeze_prim_site_nfs\fP +.RS 2 +Descr: Block HANA SR and freeze HANA NFS on primary site. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See ocf_suse_SAPHanaFilesystem(7), susHanaSR.py(7). +.br +Expect: Both nodes stay online. +HANA primary is stopped and finally back online. +SR SFAIL and finally SOK. +No takeover. No fencing. +.RE +.PP +\fBflup\fP +.RS 2 +Descr: Like nop but very short sleep, just checking the test engine. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: Wait and see. +.br +Expect: Cluster and HANA are up and running, all good. 
+.RE +.PP +\fBfree_log_area\fP +.RS 2 +Descr: Free HANA log area on primary site. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: Free up HANA transaction log space and log backups. +.br +Expect: Cluster and HANA are up and running, all good. +.RE +.PP +\fBfreeze_prim_fs\fP +.RS 2 +Descr: Freeze HANA FS on primary master node. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See ocf_suse_SAPHanaFilesystem(7). +.br +Expect: Primary node fenced and finally started as secondary. +HANA primary stopped and finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. One fencing. +.RE +.PP +\fBfreeze_prim_master_nfs\fP +.RS 2 +Descr: Freeze HANA NFS on primary master node, scale-out. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: +.RE +.PP +\fBfreeze_prim_site_nfs\fP +.RS 2 +Descr: Freeze HANA NFS on primary site, scale-out. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: +.RE +.PP +\fBkill_prim_indexserver\fP +.RS 2 +Descr: Kill primary indexserver, for susChkSrv.py. +On scale-out, kill primary master indexserver. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See susChkSrv.py(7). +.br +Expect: Primary node stays online. +HANA primary (master) stopped and finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. No fencing (for action_on_lost=kill). +.RE +.PP +\fBkill_prim_inst\fP +.RS 2 +Descr: Kill primary instance. +On scale-out, kill primary master instance. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: HDB kill +.br +Expect: Primary (master) node stays online. +HANA primary (master) stopped and finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. No fencing. +.RE +.PP +\fBkill_prim_node\fP +.RS 2 +Descr: Kill primary node. 
+On scale-out, kill primary master node. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: systemctl reboot --force +.br +Expect: Primary (master) node fenced and finally started as secondary. +HANA primary stopped and finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. One fencing. +.RE +.PP +\fBkill_prim_worker_indexserver\fP +.RS 2 +Descr: Kill primary worker indexserver, scale-out, for susChkSrv.py. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: +.RE +.PP +\fBkill_prim_worker_inst\fP +.RS 2 +Descr: Kill primary worker instance, scale-out. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: +.RE +.PP +\fBkill_prim_worker_node\fP +.RS 2 +Descr: Kill primary worker node, scale-out. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: +.RE +.PP +\fBkill_secn_indexserver\fP +.RS 2 +Descr: Kill secondary indexserver, for susChkSrv.py. +On scale-out, kill secondary master indexserver. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See susChkSrv.py(7). +.br +Expect: HANA secondary stopped and finally online. +HANA primary stays online. +SR SFAIL and finally SOK. +No takeover. No fencing (for action_on_lost=kill). +.RE +.PP +\fBkill_secn_inst\fP +.RS 2 +Descr: Kill secondary instance. +On scale-out, kill secondary master instance. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: HDB kill +.br +Expect: HANA secondary stopped and finally online. +HANA primary stays online. +SR SFAIL and finally SOK. +No takeover. No fencing. +.RE +.PP +\fBkill_secn_node\fP +.RS 2 +Descr: Kill secondary node. +On scale-out, kill secondary master node. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: systemctl reboot --force +.br +Expect: Secondary (master) node fenced and finally online. +HANA primary stays online. 
+SR SFAIL and finally SOK. +No takeover. One fencing. +.RE +.PP +\fBkill_secn_worker_inst\fP +.RS 2 +Descr: Kill secondary worker instance, scale-out. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: + +HANA primary stays online. +SR SFAIL and finally SOK. +No takeover. No fencing. +.RE +.PP +\fBkill_secn_worker_node\fP +.RS 2 +Descr: Kill secondary worker node, scale-out. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: systemctl reboot --force +.br +Expect: Secondary worker node fenced and finally online. +HANA primary stays online. +SR SFAIL and finally SOK. +No takeover. One fencing. +.RE +.PP +\fBmaintenance_cluster_turn_hana\fP +.RS 2 +Descr: Maintenance procedure, manually turning HANA sites. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See SAPHanaSR_maintenance_examples(7), https://www.suse.com/c/sap-hana-maintenance-suse-clusters/ . +.br +Expect: Both nodes stay online. +HANA primary stopped and finally started as secondary. +HANA secondary becomes finally primary by manual takeover. +SR SFAIL and finally SOK. +One takeover. No fencing. +.RE +.PP +\fBmaintenance_with_standby_nodes\fP +.RS 2 +Descr: Maintenance procedure, standby+online secondary then standby+online primary. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See SAPHanaSR_maintenance_examples(7). +.br +Expect: Both nodes stay online. +HANA primary stopped and finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. No fencing. +.RE +.PP +\fBnop\fP +.RS 2 +Descr: No operation - check, wait and check again (stability check). +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: Wait and see. +.br +Expect: Cluster and HANA are up and running, all good. +.RE +.PP +\fBrestart_cluster_hana_running\fP +.RS 2 +Descr: Stop and restart cluster, keep HANA running. +.br +Prereq: Cluster and HANA are up and running, all good. 
+.br +Test: +.br +Expect: Both nodes stay online. + +No takeover. No fencing. +.RE +.PP +\fBrestart_cluster\fP +.RS 2 +Descr: Stop and restart cluster and HANA. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: Both nodes stay online. + +No takeover. No fencing. +.RE +.PP +\fBrestart_cluster_turn_hana\fP +.RS 2 +Descr: Stop cluster and HANA, manually start and takeover HANA, start cluster. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: Both nodes stay online. +Both HANA stopped. +HANA primary finally started as secondary. +HANA secondary becomes finally primary by manual takeover. +SR SFAIL and finally SOK. +One takeover. No fencing. +.RE +.PP +\fBsplit_brain_prio\fP +.RS 2 +Descr: Network split-brain with priority fencing. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: Secondary node fenced and finally online. +Primary node stays online. +HANA primary stays online. +SR SFAIL and finally SOK. +No takeover. One fencing. +.RE +.PP +\fBstandby_primary_node\fP +.RS 2 +Descr: Set primary node standby and online again. +On scale-out, standby primary master node and online again. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: crm node standby ; crm node online +.br +Expect: All nodes stay online. +Primary (master) node standby and finally back online. +HANA primary stopped and finally started as secondary. +HANA secondary finally primary by takeover. +SR SFAIL and finally SOK. +One takeover. No fencing. +.RE +.PP +\fBstandby_secondary_node\fP +.RS 2 +Descr: Set secondary node standby and online again. +On scale-out, standby secondary master node and online again. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: crm node standby ; crm node online +.br +Expect: Secondary (master) node standby and finally online. +HANA primary stays online. +HANA secondary stopped and finally started. +SR SFAIL and finally SOK. 
No takeover. No fencing. +.RE +.PP +.\" +.SH EXAMPLES +.PP +* List all shipped tests +.PP +.RS 2 +# find /usr/share/SAPHanaSR-tester/json/ -name "*.json" -exec basename {} \; | sort -u +.RE +.PP +.\" +.SH FILES +.\" +.TP +/usr/share/SAPHanaSR-tester/json/angi-ScaleUp/ +functional tests for SAPHanaSR-angi scale-up scenarios. +.TP +/usr/share/SAPHanaSR-tester/json/angi-ScaleOut/ +functional tests for SAPHanaSR-angi scale-out ERP scenarios. +.TP +/usr/bin/sct_test_* +shell scripts for un-easy tasks on the cluster nodes. +.PP +.\" +.SH REQUIREMENTS +.\" +See the REQUIREMENTS section in SAPHanaSR-tester(7) and SAPHanaSR-angi(7). +Of course, HANA database and Linux cluster need to match certain requirements. +Please refer to the product documentation. +.PP +.\" +.SH BUGS +In case of any problem, please use your favourite SAP support process to open +a request for the component BC-OP-LNX-SUSE. +Please report any other feedback and suggestions to feedback@suse.com. +.PP +.\" +.SH SEE ALSO +\fBSAPHanaSR-tester\fP(7) , \fBSAPHanaSR-testCluster\fP(8) , +\fBSAPHanaSR-tests-syntax\fP(5) , \fBSAPHanaSR-tests-angi-ScaleUp\fP(7) , +\fBSAPHanaSR-tests-angi-ScaleOut\fP(7) , \fBSAPHanaSR-angi\fP(7) , +\fBSAPHanaSR-showAttr\fP(8) +.PP +.\" +.SH AUTHORS +F.Herschel, L.Pinne. +.PP +.\" +.SH COPYRIGHT +(c) 2023 SUSE Linux GmbH, Germany. +.br +The package SAPHanaSR-tester comes with ABSOLUTELY NO WARRANTY. 
+.br +For details see the GNU General Public License at +http://www.gnu.org/licenses/gpl.html +.\" From ec49911f87d3cd663d2fb178455f9aee3fba3bb8 Mon Sep 17 00:00:00 2001 From: lpinne Date: Wed, 20 Dec 2023 11:37:52 +0100 Subject: [PATCH 013/123] SAPHanaSR-testCluster.8 SAPHanaSR-tester.7 SAPHanaSR-tests-angi-ScaleOut.7 SAPHanaSR-tests-angi-ScaleUp.7 SAPHanaSR-tests-description.7: formattig, typos, tests description --- man-tester/SAPHanaSR-testCluster.8 | 2 +- man-tester/SAPHanaSR-tests-angi-ScaleOut.7 | 14 +++++++------- man-tester/SAPHanaSR-tests-angi-ScaleUp.7 | 14 +++++++------- man-tester/SAPHanaSR-tests-description.7 | 2 +- 4 files changed, 16 insertions(+), 16 deletions(-) diff --git a/man-tester/SAPHanaSR-testCluster.8 b/man-tester/SAPHanaSR-testCluster.8 index 473c759d..cf9c789e 100644 --- a/man-tester/SAPHanaSR-testCluster.8 +++ b/man-tester/SAPHanaSR-testCluster.8 @@ -149,7 +149,7 @@ TODO .SH REQUIREMENTS .\" See the REQUIREMENTS section in SAPHanaSR-tester(7) and SAPHanaSR-angi(7). -Of course, HANA database and Linux cluster need to match certain requirements. +Of course, HANA database and Linux cluster have certain requirements. Please refer to the product documentation. .PP .\" diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 index be48b4c1..b6e58415 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 @@ -14,11 +14,6 @@ See manual page SAPHanaSR-tests-syntax(5) for syntax details. Details like performed steps or expected behaviour of cluster and HANA are explained in SAPHanaSR-tests-description(7). -Entry point for all predefined tests is a clean and idle Linux cluster and a -clean HANA pair in sync. Same is true for the final state. -See manual page SAPHanaSR_maintenance_examples(7) for detecting the correct -status and watching changes near real-time. 
- Each test can be executed by running the command SAPHanaSR-testCluster with appropriate parameters. See manual page SAPHanaSR-testCluster(8). .PP @@ -103,7 +98,12 @@ Standby secondary master node and online again. .\" .SH EXAMPLES .PP -.\" TODO +* List tests for SAPHanaSR-angi scale-out ERP scenarios +.PP +.RS 2 +# ls /usr/share/SAPHanaSR-tester/json/angi-ScaleOut/ +.RE +.PP .\" .SH FILES .TP @@ -117,7 +117,7 @@ shell scripts for un-easy tasks on the cluster nodes. .SH REQUIREMENTS .PP See the REQUIREMENTS section in SAPHanaSR-tester(7) and SAPHanaSR-angi(7). -Of course, HANA database and Linux cluster need to match certain requirements. +Of course, HANA database and Linux cluster have certain requirements. Please refer to the product documentation. .\" .SH BUGS diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 index a2510e45..131b882a 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 @@ -13,11 +13,6 @@ out-of-the-box. The test cases are defined in dedicated files. See manual page SAPHanaSR-tests-syntax(5) for syntax details. Details like performed steps or expected behaviour of cluster and HANA are explained in SAPHanaSR-tests-description(7). - -Entry point for all predefined tests is a clean and idle Linux cluster and a -clean HANA pair in sync. Same is true for the final state. -See manual page SAPHanaSR_maintenance_examples(7) for detecting the correct -status and watching changes near real-time. .PP Predefined functional tests for scale-up overview: .TP @@ -84,7 +79,12 @@ Set secondary node standby and online again. .\" .SH EXAMPLES .PP -.\" TODO list all tests from json directories +* List tests for SAPHanaSR-angi scale-up scenarios +.PP +.RS 2 +# ls /usr/share/SAPHanaSR-tester/json/angi-ScaleUp/ +.RE +.PP .\" .SH FILES .\" @@ -99,7 +99,7 @@ shell scripts for un-easy tasks on the cluster nodes. 
.SH REQUIREMENTS .\" See the REQUIREMENTS section in SAPHanaSR-tester(7) and SAPHanaSR-angi(7). -Of course, HANA database and Linux cluster need to match certain requirements. +Of course, HANA database and Linux cluster have certain requirements. Please refer to the product documentation. .PP .\" diff --git a/man-tester/SAPHanaSR-tests-description.7 b/man-tester/SAPHanaSR-tests-description.7 index 71cc735f..a3f69d64 100644 --- a/man-tester/SAPHanaSR-tests-description.7 +++ b/man-tester/SAPHanaSR-tests-description.7 @@ -465,7 +465,7 @@ shell scripts for un-easy tasks on the cluster nodes. .SH REQUIREMENTS .\" See the REQUIREMENTS section in SAPHanaSR-tester(7) and SAPHanaSR-angi(7). -Of course, HANA database and Linux cluster need to match certain requirements. +Of course, HANA database and Linux cluster also have certain requirements. Please refer to the product documentation. .PP .\" From dd59df3002031276147a7957ad957084b2bb2bce Mon Sep 17 00:00:00 2001 From: lpinne Date: Wed, 20 Dec 2023 11:44:51 +0100 Subject: [PATCH 014/123] SAPHanaSR-tests-basic-cluster.7 --- man-tester/SAPHanaSR-tests-basic-cluster.7 | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/man-tester/SAPHanaSR-tests-basic-cluster.7 b/man-tester/SAPHanaSR-tests-basic-cluster.7 index 14c3f91a..60bf21f0 100644 --- a/man-tester/SAPHanaSR-tests-basic-cluster.7 +++ b/man-tester/SAPHanaSR-tests-basic-cluster.7 @@ -125,7 +125,8 @@ The Linux sudo rules are checked for needed permissions. The HANA global.ini is checked for SUSE HA/DR provider hook scripts. The HANA nameserver tracefiles are checked whether the hook scripts have been loaded. SID is HA1, instance number is 85. Do this on all Linux cluster nodes. See -manual page susHanaSR.py(7), susTkOver.py(7) and susChkSrv.py(7). +manual page susHanaSR.py(7), susTkOver.py(7), susChkSrv.py(7) and +SAPHanaSR-hookHelper(8). 
.PP .RS 2 # sudo -U ha1adm -l | \\ @@ -181,6 +182,8 @@ path to HANA tracefiles .SH REQUIREMENTS .\" See the REQUIREMENTS section in SAPHanaSR-tester(7) and SAPHanaSR-angi(7). +Of course, the Linux cluster has certain requirements. +Please refer to the product documentation. .PP .\" .SH BUGS From ce74a02ae535133e90c3654fa113f6d05974dc57 Mon Sep 17 00:00:00 2001 From: lpinne Date: Wed, 20 Dec 2023 11:53:51 +0100 Subject: [PATCH 015/123] SAPHanaSR-testCluster.8: wording --- man-tester/SAPHanaSR-testCluster.8 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/man-tester/SAPHanaSR-testCluster.8 b/man-tester/SAPHanaSR-testCluster.8 index cf9c789e..74f88764 100644 --- a/man-tester/SAPHanaSR-testCluster.8 +++ b/man-tester/SAPHanaSR-testCluster.8 @@ -3,7 +3,7 @@ .TH SAPHanaSR-testCluster 8 "20 Nov 2023" "" "SAPHanaSR-angi" .\" .SH NAME -SAPHanaSR-testCluster \- Run functional testing for SAPHanaSR clusters. +SAPHanaSR-testCluster \- Run functional tests for SAPHanaSR clusters. .PP .\" .SH SYNOPSIS From ecde04fbdbc794c6972f0cec6ef53c8b4df7de13 Mon Sep 17 00:00:00 2001 From: lpinne Date: Wed, 20 Dec 2023 11:56:27 +0100 Subject: [PATCH 016/123] SAPHanaSR-testCluster.8: example --- man-tester/SAPHanaSR-testCluster.8 | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/man-tester/SAPHanaSR-testCluster.8 b/man-tester/SAPHanaSR-testCluster.8 index 74f88764..84755e56 100644 --- a/man-tester/SAPHanaSR-testCluster.8 +++ b/man-tester/SAPHanaSR-testCluster.8 @@ -78,7 +78,8 @@ Usage, syntax or execution errors. The functional test "nop" is performed on the Linux cluster defined in properties_q42.json, nodes are node1 and node2. A dedicated working directory and logfile for this test is used. -See also manual page SAPHanaSR-tester(7) and SAPHanaSR-tests-syntax(7). +See also manual page SAPHanaSR-tester(7), SAPHanaSR-tests-syntax(5), +SAPHanaSR-tests-angi-ScaleUp(7) and SAPHanaSR-tests-description(7). 
.PP .RS 2 # mkdir ~/test_nop; cd ~/test_nop From d0abe07252343e4a734565085d82bc48f30e5cc6 Mon Sep 17 00:00:00 2001 From: lpinne Date: Wed, 20 Dec 2023 12:08:32 +0100 Subject: [PATCH 017/123] SAPHanaSR-tester.7: details --- man-tester/SAPHanaSR-tester.7 | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/man-tester/SAPHanaSR-tester.7 b/man-tester/SAPHanaSR-tester.7 index 11e86f77..f3844c0e 100644 --- a/man-tester/SAPHanaSR-tester.7 +++ b/man-tester/SAPHanaSR-tester.7 @@ -63,14 +63,14 @@ See manual page SAPHanaSR-tests-syntax(5) for details, see also example below. \fB*\fP Processing: .\" TODO SAPHanaSR-testCluster -.\" TODO shell scripts shiped with SAPhanaSR-tester, e.g. test_ +.\" TODO shell scripts shipped with SAPHanaSR-tester, e.g. sct_test_ .\" TODO custom scripts and test automation frameworks -See manual page SAPHanaSR-testCluster(8). +See manual page SAPHanaSR-testCluster(8) and SAPHanaSR-tests-description(7). \fB*\fP Output: Test results .\" TODO output of SAPHanaSR-testCluster -See manual page SAPHanaSR-testCluster(8). +See manual page SAPHanaSR-testCluster(8) and SAPHanaSR-tests-description(7). .PP .\" .SH EXAMPLES @@ -146,8 +146,9 @@ Note: Never do this on production systems. \fB*\fR Showing predefined functional scale-up test cases. .PP The predefined functional tests for the SAPHanaSR-angi scale-up scenario are -shown. See manual page SAPHanaSR-tests-angi-ScaleUp(7) and -SAPHanaSR-testCluster(8) for how to run this tests. +shown. See also manual page SAPHanaSR-tests-angi-ScaleUp(7). See manual page +SAPHanaSR-tests-description(7) and SAPHanaSR-testCluster(8) for how to run these +tests. 
.PP .RS 2 # ls /usr/share/SAPHanaSR-tester/json/angi-ScaleUp/*.json | \\ From 10ca11572095ecdc5f41b596351b662e0d221dfd Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Thu, 4 Jan 2024 08:45:09 +0100 Subject: [PATCH 018/123] angi srHooks: susHanaSR.py - catch possible IO errors --- srHook/susHanaSR.py | 26 +++++++++++++++++++++++--- 1 file changed, 23 insertions(+), 3 deletions(-) diff --git a/srHook/susHanaSR.py b/srHook/susHanaSR.py index a11ed691..85a9bac5 100755 --- a/srHook/susHanaSR.py +++ b/srHook/susHanaSR.py @@ -107,13 +107,33 @@ def srConnectionChanged(self, ParamDict, **kwargs): # SAP HANA swarm nodes # attribute_name = f"hana_{mysid_lower}_site_srHook_{my_site}" - with open(fallback_stage_file_name, "w", encoding="UTF-8") as fallback_file_obj: - fallback_file_obj.write(f"{attribute_name} = {my_srs}") + try: + with open(fallback_stage_file_name, "w", encoding="UTF-8") as fallback_file_obj: + fallback_file_obj.write(f"{attribute_name} = {my_srs}") + except PermissionError: + my_msg = f"ERROR: Permission denied for file {fallback_stage_file_name}" + self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") + except FileNotFoundError: + my_msg = f"ERROR: File not found error occured during - creating file {fallback_stage_file_name}" + self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") + except OSError as oerr: + my_msg = f"ERROR: OS error occured during creating file {fallback_stage_file_name}: {oerr}" + self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") # # release the stage file to the original name (move is used to be atomic) # .crm_attribute.stage. is renamed to .crm_attribute. 
# - os.rename(fallback_stage_file_name, fallback_file_name) + try: + os.rename(fallback_stage_file_name, fallback_file_name) + except PermissionError: + my_msg = "ERROR: Permission denied to move file {fallback_stage_file_name} to {fallback_file_name}" + self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") + except FileNotFoundError: + my_msg = "ERROR: File not found error occured during moving file {fallback_stage_file_name} to {fallback_file_name}" + self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") + except OSError as oerr: + my_msg = f"ERROR: OS error occured during moving file {fallback_stage_file_name} to {fallback_file_name}: {oerr}" + self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") return 0 except NameError as e: print(f"Could not find base class ({e})") From 3e821c19b948c2b1334bc68b2c210cbde8027b3c Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Thu, 4 Jan 2024 10:38:21 +0100 Subject: [PATCH 019/123] angi srHooks: susHanaSR.py - fixed f-strings --- srHook/susHanaSR.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/srHook/susHanaSR.py b/srHook/susHanaSR.py index 85a9bac5..5f9bb9fa 100755 --- a/srHook/susHanaSR.py +++ b/srHook/susHanaSR.py @@ -114,7 +114,7 @@ def srConnectionChanged(self, ParamDict, **kwargs): my_msg = f"ERROR: Permission denied for file {fallback_stage_file_name}" self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") except FileNotFoundError: - my_msg = f"ERROR: File not found error occured during - creating file {fallback_stage_file_name}" + my_msg = f"ERROR: File not found error occured during creating file {fallback_stage_file_name}" self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") except OSError as oerr: my_msg = f"ERROR: OS error occured during creating file {fallback_stage_file_name}: {oerr}" @@ -126,10 +126,10 @@ def srConnectionChanged(self, ParamDict, **kwargs): try: os.rename(fallback_stage_file_name, 
fallback_file_name) except PermissionError: - my_msg = "ERROR: Permission denied to move file {fallback_stage_file_name} to {fallback_file_name}" + my_msg = f"ERROR: Permission denied to move file {fallback_stage_file_name} to {fallback_file_name}" self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") except FileNotFoundError: - my_msg = "ERROR: File not found error occured during moving file {fallback_stage_file_name} to {fallback_file_name}" + my_msg = f"ERROR: File not found error occured during moving file {fallback_stage_file_name} to {fallback_file_name}" self.tracer.error(f"{self.__class__.__name__}.{method}() {my_msg}\n") except OSError as oerr: my_msg = f"ERROR: OS error occured during moving file {fallback_stage_file_name} to {fallback_file_name}: {oerr}" From e8a72d000c3a7a3debbf3647ac9277302b8856f5 Mon Sep 17 00:00:00 2001 From: lpinne Date: Thu, 4 Jan 2024 11:29:36 +0100 Subject: [PATCH 020/123] susHanaSR.py.7: requirements for cache file --- man/susHanaSR.py.7 | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/man/susHanaSR.py.7 b/man/susHanaSR.py.7 index b6958a30..1c302202 100644 --- a/man/susHanaSR.py.7 +++ b/man/susHanaSR.py.7 @@ -1,6 +1,6 @@ .\" Version: 1.001 .\" -.TH susHanaSR.py 7 "13 Apr 2023" "" "SAPHanaSR" +.TH susHanaSR.py 7 "04 Jan 2024" "" "SAPHanaSR" .\" .SH NAME susHanaSR.py \- Provider for SAP HANA srHook method srConnectionChanged(). @@ -263,14 +263,12 @@ the sudo permissions configuration path to HANA tracefiles .TP /usr/sap/$SID/HDB$nr/.crm_attribute.$SITE -the internal cache for srHook status changes while Linux cluster is down, file is owned by ${SID}adm and must never be touched +the internal cache for srHook status changes while Linux cluster is down, file is owned and r/w by ${sid}adm and must never be touched .PP .\" .SH REQUIREMENTS 1. SAP HANA 2.0 SPS04 or later provides the HA/DR provider hook method srConnectionChanged() with multi-target aware parameters. 
-The multi-target aware parameters are needed for the SAPHanaSR scale-up -package. .PP 2. No other HADR provider hook script should be configured for the srConnectionChanged() method. Hook scripts for other methods, provided in @@ -280,7 +278,10 @@ contradictingly. 3. The user ${sid}adm needs execution permission as user root for the command crm_attribute. .PP -4. The hook provider needs to be added to the HANA global configuration, +4. The user ${sid}adm needs ownership and read/write permission on the internal +cache file /usr/sap/$SID/HDB$nr/.crm_attribute.$SITE . +.PP +5. The hook provider needs to be added to the HANA global configuration, in memory and on disk (in persistence). .\" .SH BUGS @@ -290,7 +291,7 @@ Please report any other feedback and suggestions to feedback@suse.com. .PP .\" .SH SEE ALSO -\fBSAPHanaSR\fP(7) , +\fBSAPHanaSR-angi\fP(7) , \fBocf_suse_SAPHanaTopology\fP(7) , \fBocf_suse_SAPHanaController\fP(7) , \fBocf_heartbeat_IPaddr2\fP(7) , \fBsusCostOpt.py\fP(7) , \fBsusTkOver.py\fP(7) , \fBsusChkSrv.py\fP (7) , @@ -311,7 +312,7 @@ A.Briel, F.Herschel, L.Pinne. .SH COPYRIGHT (c) 2015-2017 SUSE Linux GmbH, Germany. .br -(c) 2018-2023 SUSE LLC +(c) 2018-2024 SUSE LLC .br susHanaSR.py comes with ABSOLUTELY NO WARRANTY. .br From dc3e67563bcd219109f9b72af2a52bb56d51588d Mon Sep 17 00:00:00 2001 From: lpinne Date: Thu, 4 Jan 2024 11:54:33 +0100 Subject: [PATCH 021/123] susHanaSR.py.7: requirements --- man/susHanaSR.py.7 | 3 +++ 1 file changed, 3 insertions(+) diff --git a/man/susHanaSR.py.7 b/man/susHanaSR.py.7 index 1c302202..5e0c94db 100644 --- a/man/susHanaSR.py.7 +++ b/man/susHanaSR.py.7 @@ -283,6 +283,9 @@ cache file /usr/sap/$SID/HDB$nr/.crm_attribute.$SITE . .PP 5. The hook provider needs to be added to the HANA global configuration, in memory and on disk (in persistence). +.PP +6. The srHook script runtime almost completely depends on call-outs to OS and +Linux cluster. 
.\" .SH BUGS In case of any problem, please use your favourite SAP support process to open From 60dd9019cb53802ab495c4e4ea371bf6c10fb339 Mon Sep 17 00:00:00 2001 From: lpinne Date: Thu, 4 Jan 2024 12:01:25 +0100 Subject: [PATCH 022/123] SAPHanaSR.7: requirements --- man/SAPHanaSR.7 | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/man/SAPHanaSR.7 b/man/SAPHanaSR.7 index 802a5c91..8d6c77c9 100644 --- a/man/SAPHanaSR.7 +++ b/man/SAPHanaSR.7 @@ -293,6 +293,8 @@ In opposite to Native Storage Extension, the HANA Extension Nodes are changing the topology and thus currently are not supported. Please refer to SAP documentation for details. .PP +25. The Linux user root's shell is /bin/bash, or completely compatible. +.PP .\" .SH BUGS .\" TODO @@ -345,7 +347,7 @@ A.Briel, F.Herschel, L.Pinne. .SH COPYRIGHT (c) 2015-2017 SUSE Linux GmbH, Germany. .br -(c) 2018-2023 SUSE LLC +(c) 2018-2024 SUSE LLC .br The package SAPHanaSR-angi comes with ABSOLUTELY NO WARRANTY. .br From 6f476cdb27249644bd5ca1121b134db317001cc3 Mon Sep 17 00:00:00 2001 From: lpinne Date: Thu, 4 Jan 2024 12:46:26 +0100 Subject: [PATCH 023/123] susHanaSR.py.7: requirements --- man/susHanaSR.py.7 | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/man/susHanaSR.py.7 b/man/susHanaSR.py.7 index 5e0c94db..ac460627 100644 --- a/man/susHanaSR.py.7 +++ b/man/susHanaSR.py.7 @@ -278,8 +278,8 @@ contradictingly. 3. The user ${sid}adm needs execution permission as user root for the command crm_attribute. .PP -4. The user ${sid}adm needs ownership and read/write permission on the internal -cache file /usr/sap/$SID/HDB$nr/.crm_attribute.$SITE . +4. The user ${sid}adm needs ownership and permission for reading/writing/creating +on the internal cache file /usr/sap/$SID/HDB$nr/.crm_attribute.$SITE . 
From c1dda27a991f3b65da0520ed69ccb27b19bb053e Mon Sep 17 00:00:00 2001 From: lpinne Date: Thu, 4 Jan 2024 12:53:16 +0100 Subject: [PATCH 024/123] susHanaSR.py.7: requirements --- man/susHanaSR.py.7 | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/man/susHanaSR.py.7 b/man/susHanaSR.py.7 index ac460627..8cfd458b 100644 --- a/man/susHanaSR.py.7 +++ b/man/susHanaSR.py.7 @@ -278,8 +278,8 @@ contradictingly. 3. The user ${sid}adm needs execution permission as user root for the command crm_attribute. .PP -4. The user ${sid}adm needs ownership and permission for reading/writing/creating -on the internal cache file /usr/sap/$SID/HDB$nr/.crm_attribute.$SITE . +4. The user ${sid}adm needs ownership and read/write permission on the internal +cache file /usr/sap/$SID/HDB$nr/.crm_attribute.$SITE and its parent directory. .PP 5. The hook provider needs to be added to the HANA global configuration, in memory and on disk (in persistence). From 02894460335b819f842a6f57e26b2c1ffc4e5bf8 Mon Sep 17 00:00:00 2001 From: lpinne Date: Thu, 4 Jan 2024 15:31:38 +0100 Subject: [PATCH 025/123] SAPHanaSR-testCluster.8 SAPHanaSR-tester.7 SAPHanaSR-tests-angi-ScaleOut.7 SAPHanaSR-tests-angi-ScaleUp.7 SAPHanaSR-tests-basic-cluster.7 SAPHanaSR-tests-description.7 SAPHanaSR-tests-syntax.5: copyright, tests description --- man-tester/SAPHanaSR-testCluster.8 | 2 +- man-tester/SAPHanaSR-tester.7 | 2 +- man-tester/SAPHanaSR-tests-angi-ScaleOut.7 | 14 +- man-tester/SAPHanaSR-tests-angi-ScaleUp.7 | 5 +- man-tester/SAPHanaSR-tests-basic-cluster.7 | 2 +- man-tester/SAPHanaSR-tests-description.7 | 177 ++++++++++++++++----- man-tester/SAPHanaSR-tests-syntax.5 | 2 +- 7 files changed, 156 insertions(+), 48 deletions(-) diff --git a/man-tester/SAPHanaSR-testCluster.8 b/man-tester/SAPHanaSR-testCluster.8 index 84755e56..88669723 100644 --- a/man-tester/SAPHanaSR-testCluster.8 +++ b/man-tester/SAPHanaSR-testCluster.8 @@ -173,7 +173,7 @@ F.Herschel, L.Pinne. 
.PP .\" .SH COPYRIGHT -(c) 2023 SUSE Linux GmbH, Germany. +(c) 2023 SUSE LLC .br The package SAPHanaSR-tester comes with ABSOLUTELY NO WARRANTY. .br diff --git a/man-tester/SAPHanaSR-tester.7 b/man-tester/SAPHanaSR-tester.7 index f3844c0e..9cb443d3 100644 --- a/man-tester/SAPHanaSR-tester.7 +++ b/man-tester/SAPHanaSR-tester.7 @@ -254,7 +254,7 @@ F.Herschel, L.Pinne. .PP .\" .SH COPYRIGHT -(c) 2023 SUSE Linux GmbH, Germany. +(c) 2023 SUSE LLC .br The package SAPHanaSR-tester comes with ABSOLUTELY NO WARRANTY. .br diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 index b6e58415..9533fb28 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 @@ -23,10 +23,10 @@ Predefined functional tests for scale-out ERP overview: Blocked manual takeover, for susTkOver.py. .TP \fBblock_sr_and_freeze_prim_master_nfs\fP -Block HANA SR and freeze HANA NFS on primary master node. +Block HANA SR and freeze HANA NFS on primary master node (not yet implemented). .TP \fBblock_sr_and_freeze_prim_site_nfs\fP -Block HANA SR and freeze HANA NFS on primary site. +Block HANA SR and freeze HANA NFS on primary site (not yet implemented). .TP \fBfree_log_area\fP Free HANA log area on primary site. @@ -79,6 +79,10 @@ Maintenance procedure, standby+online secondary then standby+online primary. \fBnop\fP No operation - check, wait and check again (stability check). .TP +\fBregister_prim_cold_hana\fP +Stop cluster, do manual takeover, leave former primary down and unregistered, start cluster +(not yet implemented). +.TP \fBrestart_cluster\fP Stop and restart cluster and HANA .TP @@ -88,10 +92,10 @@ Stop and restart cluster, keep HANA running. \fBrestart_cluster_turn_hana\fP Stop cluster and HANA, takeover HANA, start cluster. .TP -\fBstandby_primary_node\fP TODO +\fBstandby_primary_node\fP Standby primary master node and online again.
.TP -\fBstandby_secondary_node\fP TODO +\fBstandby_secondary_node\fP Standby secondary master node and online again. .RE .PP @@ -138,7 +142,7 @@ F.Herschel, L.Pinne. .PP .\" .SH COPYRIGHT -(c) 2023 SUSE Linux GmbH, Germany. +(c) 2023 SUSE LLC .br The package SAPHanaSR-tester comes with ABSOLUTELY NO WARRANTY. .br diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 index 131b882a..f6843c37 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 @@ -58,6 +58,9 @@ Maintenance procedure, standby+online secondary then standby+online primary. \fBnop\fP No operation - check, wait and check again (stability check). .TP +\fBregister_prim_cold_hana\fP +Stop cluster, do manual takeover, leave former primary down and unregistered, start cluster. +.TP \fBrestart_cluster_hana_running\fP Stop and restart cluster, keep HANA running. .TP @@ -121,7 +124,7 @@ F.Herschel, L.Pinne. .PP .\" .SH COPYRIGHT -(c) 2023 SUSE Linux GmbH, Germany. +(c) 2023 SUSE LLC .br The package SAPHanaSR-tester comes with ABSOLUTELY NO WARRANTY. .br diff --git a/man-tester/SAPHanaSR-tests-basic-cluster.7 b/man-tester/SAPHanaSR-tests-basic-cluster.7 index 60bf21f0..a3413c6b 100644 --- a/man-tester/SAPHanaSR-tests-basic-cluster.7 +++ b/man-tester/SAPHanaSR-tests-basic-cluster.7 @@ -216,7 +216,7 @@ F.Herschel, L.Pinne. .PP .\" .SH COPYRIGHT -(c) 2023 SUSE Linux GmbH, Germany. +(c) 2023 SUSE LLC .br The package SAPHanaSR-tester comes with ABSOLUTELY NO WARRANTY. 
.br diff --git a/man-tester/SAPHanaSR-tests-description.7 b/man-tester/SAPHanaSR-tests-description.7 index a3f69d64..9c8bc448 100644 --- a/man-tester/SAPHanaSR-tests-description.7 +++ b/man-tester/SAPHanaSR-tests-description.7 @@ -1,6 +1,6 @@ .\" Version: 1.001 .\" -.TH SAPHanaSR-tests-description 7 "20 Dec 2023" "" "SAPHanaSR-angi" +.TH SAPHanaSR-tests-description 7 "04 Jan 2024" "" "SAPHanaSR-angi" .\" .SH NAME SAPHanaSR-tests-description \- Functional tests for SAPHanaSR. @@ -28,6 +28,8 @@ Predefined functional tests: .RS 2 Descr: Blocked manual takeover, for susTkOver.py. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: See susTkOver.py(7). @@ -43,11 +45,13 @@ No takeover. No fencing. .RS 2 Descr: Block HANA SR and check SFAIL attribute; unblock to recover. .br +Topology: ScaleUp, ScaleOut (not yet implemented). +.br Prereq: Cluster and HANA are up and running, all good. .br Test: See susHanaSR.py.(7). .br -Expect: Both nodes stay online. +Expect: All nodes stay online. Both HANA stay online. SR SFAIL and finally SOK. No takeover. No fencing. @@ -55,7 +59,9 @@ No takeover. No fencing. .PP \fBblock_sr_and_freeze_prim_fs\fP .RS 2 -Descr: Block HANA SR and freeze HANA FS on primary master node. +Descr: Block HANA SR and freeze HANA FS on primary node. +.br +Topology: ScaleUp. .br Prereq: Cluster and HANA are up and running, all good. .br @@ -69,13 +75,16 @@ No takeover. No fencing. .PP \fBblock_sr_and_freeze_prim_master_nfs\fP .RS 2 -Descr: Block HANA SR and freeze HANA NFS on primary master node. +Descr: Block HANA SR and freeze HANA NFS on primary master node +(not yet implemented). +.br +Topology: ScaleOut. .br Prereq: Cluster and HANA are up and running, all good. .br Test: See ocf_suse_SAPHanaFilesystem(7), susHanaSR.py.(7). .br -Expect: Both nodes stay online. +Expect: All nodes stay online. HANA primary is stopped and finally back online. SR SFAIL and finally SOK. No takeover. No fencing. 
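Expectations such as "SR SFAIL and finally SOK" describe a poll-until-condition pattern: a check is re-run with the step's "loop" and "wait" values until it holds or the budget is exhausted. A minimal generic sketch of such a loop, illustrative only and not the tester's actual implementation:

```python
import time

def wait_for(check, loops=120, wait=2):
    """Re-run `check` up to `loops` times, sleeping `wait` seconds in
    between, mirroring the "loop"/"wait" fields of a test step.
    Returns True as soon as `check` succeeds, else False."""
    for _ in range(loops):
        if check():
            return True
        time.sleep(wait)
    return False
```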
@@ -83,13 +92,16 @@ No takeover. No fencing. .PP \fBblock_sr_and_freeze_prim_site_nfs\fP .RS 2 -Descr: Block HANA SR and freeze HANA NFS on primary site. +Descr: Block HANA SR and freeze HANA NFS on primary site +(not yet implemented). +.br +Topology: ScaleOut. .br Prereq: Cluster and HANA are up and running, all good. .br Test: See ocf_suse_SAPHanaFilesystem(7), susHanaSR.py.(7). .br -Expect: Both nodes stay online. +Expect: All nodes stay online. HANA primary is stopped and finally back online. SR SFAIL and finally SOK. No takeover. No fencing. @@ -99,6 +111,8 @@ No takeover. No fencing. .RS 2 Descr: Like nop but very short sleep, just checking the test engine. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: Wait and see. @@ -110,6 +124,8 @@ Expect: Cluster and HANA are up and running, all good. .RS 2 Descr: Free HANA log area on primary site. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: Free up HANA transaction log space and log backups. @@ -119,7 +135,9 @@ Expect: Cluster and HANA are up and running, all good. .PP \fBfreeze_prim_fs\fP .RS 2 -Descr: Freeze HANA FS on primary master node. +Descr: Freeze HANA FS on primary node. +.br +Topology: ScaleUp. .br Prereq: Cluster and HANA are up and running, all good. .br @@ -134,24 +152,36 @@ One takeover. One fence. .PP \fBfreeze_prim_master_nfs\fP .RS 2 -Descr: Freeze HANA NFS on primary master node, scale-out. +Descr: Freeze HANA NFS on primary master node. +.br +Topology: ScaleOut. .br Prereq: Cluster and HANA are up and running, all good. .br -Test: +Test: See ocf_suse_SAPHanaFilesystem(7). .br -Expect: +Expect: Primary master node fenced, primary worker instance stopped. +HANA finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. One fence. .RE .PP \fBfreeze_prim_site_nfs\fP .RS 2 -Descr: Freeze HANA NFS on primary site, scale-out. 
+Descr: Freeze HANA NFS on primary site. +.br +Topology: ScaleOut. .br Prereq: Cluster and HANA are up and running, all good. .br -Test: +Test: See ocf_suse_SAPHanaFilesystem(7). .br -Expect: +Expect: Primary master and worker node fenced. +HANA finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. One fence. .RE .PP \fBkill_prim_indexserver\fP .RS 2 Descr: Kill primary indexserver, for susChkSrv.py. On scale-out, kill primary master indexserver. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: See susChkSrv.py(7). .br @@ -175,6 +207,8 @@ One takeover. No fencing (for action_on_lost=kill). Descr: Kill primary instance. On scale-out, kill primary master instance. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: HDB kill @@ -191,6 +225,8 @@ One takeover. No fencing. Descr: Kill primary node. On scale-out, kill primary master node. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: systemctl reboot --force @@ -204,35 +240,51 @@ One takeover. One fencing. .PP \fBkill_prim_worker_indexserver\fP .RS 2 -Descr: Kill primary worker indexserver, scale-out, for susChkSrv.py. +Descr: Kill primary worker indexserver, for susChkSrv.py. +.br +Topology: ScaleOut. .br Prereq: Cluster and HANA are up and running, all good. .br -Test: +Test: See susChkSrv.py(7). .br -Expect: +Expect: HANA primary stopped and finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. No fencing (for action_on_lost=kill). .RE .PP \fBkill_prim_worker_inst\fP .RS 2 -Descr: Kill primary worker instance, scale-out. +Descr: Kill primary worker instance. +.br +Topology: ScaleOut. .br Prereq: Cluster and HANA are up and running, all good.
.br -Test: +Test: HDB kill .br -Expect: +Expect: HANA primary stopped and finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. No fencing. .RE .PP \fBkill_prim_worker_node\fP .RS 2 -Descr: Kill primary worker node, scale-out. +Descr: Kill primary worker node. +.br +Topology: ScaleOut. .br Prereq: Cluster and HANA are up and running, all good. .br -Test: +Test: systemctl reboot --force .br -Expect: +Expect: Primary worker node fenced. +HANA primary stopped and finally started as secondary. +HANA secondary becomes finally primary. +SR SFAIL and finally SOK. +One takeover. One fencing. .RE .PP \fBkill_secn_indexserver\fP @@ -240,6 +292,8 @@ Expect: Descr: Kill secondary indexserver, for susChkSrv.py. On scale-out, kill secondary master indexserver. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: See susChkSrv.py(7). @@ -255,6 +309,8 @@ No takeover. No fencing (for action_on_lost=kill). Descr: Kill secondary instance. On scale-out, kill secondary master instance. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: HDB kill @@ -270,6 +326,8 @@ No takeover. No fencing. Descr: Kill secondary node. On scale-out, kill secondary master node. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: systemctl reboot --force @@ -282,7 +340,9 @@ No takeover. One fencing. .PP \fBkill_secn_worker_inst\fP .RS 2 -Descr: Kill secondary worker instance, scale-out. +Descr: Kill secondary worker instance. +.br +Topology: ScaleOut. .br Prereq: Cluster and HANA are up and running, all good. .br @@ -297,7 +357,9 @@ No takeover. No fencing. .PP \fBkill_secn_worker_node\fP .RS 2 -Descr: Kill secondary worker node, scale-out. +Descr: Kill secondary worker node. +.br +Topology: ScaleOut. .br Prereq: Cluster and HANA are up and running, all good. 
.br @@ -313,26 +375,30 @@ No takeover. One fencing. .RS 2 Descr: Maintenance procedure, manually turning HANA sites. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: See SAPHanaSR_maintenance_examples(7), https://www.suse.com/c/sap-hana-maintenance-suse-clusters/ . .br -Expect: Both nodes stay online. +Expect: All nodes stay online. HANA primary stopped and finally started as secondary. HANA secondary becomes finally primary by manual takeover. SR SFAIL and finally SOK. -One takeover. No fencing. +One takeover. No takeover by cluster. No fencing. .RE .PP \fBmaintenance_with_standby_nodes\fP .RS 2 Descr: standby+online secondary then standby+online primary .br +Topology: ScaleUp. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: See SAPHanaSR_maintenance_examples(7). .br -Expect: Both nodes stay online. +Expect: All nodes stay online. HANA primary stopped and finally started as secondary. HANA secondary becomes finally primary. SR SFAIL and finally SOK. @@ -343,6 +409,8 @@ One takeover. No fencing. .RS 2 Descr: No operation - check, wait and check again (stability check). .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: Wait and see. @@ -350,16 +418,37 @@ Test: Wait and see. Expect: Cluster and HANA are up and running, all good. .RE .PP +\fBregister_prim_cold_hana\fP +.RS 2 +Descr: Stop cluster, do manual takeover, leave former primary down and unregistered, start cluster. +.br +Topology: ScaleUp, ScaleOut (not yet implemented). +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: +.br +Expect: All nodes stay online. +HANA primary stopped and finally started as secondary. +HANA secondary stopped and finally started as primary. +SR SFAIL and finally SOK. +One takeover. No takeover by cluster. No fencing. +.RE +.PP \fBrestart_cluster_hana_running\fP .RS 2 Descr: Stop and restart cluster, keep HANA running. 
.br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br -Test: +Test: .br -Expect: Both nodes stay online. - +Expect: All nodes stay online. +Cluster stopped and restarted. +Both HANA keep running. +SR stays SOK. No takeover. No fencing. .RE .PP @@ -367,12 +456,16 @@ No takeover. No fencing. .RS 2 Descr: Stop and restart cluster and HANA. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: .br -Expect: Both nodes stay online. - +Expect: All nodes stay online. +Cluster stopped and restarted. +Both HANA stopped and restarted. +SR SFAIL and finally SOK. No takeover. No fencing. .RE .PP @@ -380,25 +473,29 @@ No takeover. No fencing. .RS 2 Descr: Stop cluster and HANA, manually start and takeover HANA, start cluster. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: .br -Expect: Both nodes stay online. +Expect: All nodes stay online. Both HANA stopped. HANA primary finally started as secondary. HANA secondary becomes finally primary by manual takeover. SR SFAIL and finally SOK. -One takeover. No fencing. +One takeover. No takeover by cluster. No fencing. .RE .PP \fBsplit_brain_prio\fP .RS 2 Descr: Network split-brain with priority fencing. .br +Topology: ScaleUp. +.br Prereq: Cluster and HANA are up and running, all good. .br -Test: +Test: iptables -I INPUT -s -j DROP .br Expect: Secondary node fenced and finally online. Primary node stays online. @@ -412,6 +509,8 @@ No takeover. One fencing. Descr: Set primary node standby and online again. On scale-out, standby primary master node and online again. .br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: crm node standby ; crm node online @@ -429,6 +528,8 @@ One takeover. No fencing. Descr: Set secondary node standby and online again. On scale-out, standby secondary master node and online again. 
.br +Topology: ScaleUp, ScaleOut. +.br Prereq: Cluster and HANA are up and running, all good. .br Test: crm node standby ; crm node online @@ -445,7 +546,7 @@ SR SFAIL and finally SOK. No takeover. No fencing. * List all shipped tests .PP .RS 2 -# find /usr/share/SAPHanaSR-tester/json/ -name "*.json" -exec basename {} \; | sort -u +# find /usr/share/SAPHanaSR-tester/json/ -name "*.json" -exec basename {} \\; | sort -u .RE .PP .\" @@ -487,7 +588,7 @@ F.Herschel, L.Pinne. .PP .\" .SH COPYRIGHT -(c) 2023 SUSE Linux GmbH, Germany. +(c) 2023-2024 SUSE LLC .br The package SAPHanaSR-tester comes with ABSOLUTELY NO WARRANTY. .br diff --git a/man-tester/SAPHanaSR-tests-syntax.5 b/man-tester/SAPHanaSR-tests-syntax.5 index fbbd6571..7a544ecb 100644 --- a/man-tester/SAPHanaSR-tests-syntax.5 +++ b/man-tester/SAPHanaSR-tests-syntax.5 @@ -219,7 +219,7 @@ F.Herschel, L.Pinne. .PP .\" .SH COPYRIGHT -(c) 2023 SUSE Linux GmbH, Germany. +(c) 2023 SUSE LLC .br The package SAPHanaSR-tester comes with ABSOLUTELY NO WARRANTY. .br From 0d9267017a08ae5b2fa111dffce9da97cd083bef Mon Sep 17 00:00:00 2001 From: lpinne Date: Mon, 8 Jan 2024 11:31:53 +0100 Subject: [PATCH 026/123] SAPHanaSR-tests-basic-cluster.7: see also --- man-tester/SAPHanaSR-tests-basic-cluster.7 | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/man-tester/SAPHanaSR-tests-basic-cluster.7 b/man-tester/SAPHanaSR-tests-basic-cluster.7 index a3413c6b..f8d63c8c 100644 --- a/man-tester/SAPHanaSR-tests-basic-cluster.7 +++ b/man-tester/SAPHanaSR-tests-basic-cluster.7 @@ -196,7 +196,7 @@ Please report any other feedback and suggestions to feedback@suse.com. 
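The find(1) one-liner shown above for listing all shipped tests can be mirrored in Python; a small sketch with the directory parameterized, the default being the path named in the man page:

```python
from pathlib import Path

def shipped_tests(json_dir="/usr/share/SAPHanaSR-tester/json/"):
    """Return the sorted, de-duplicated base names of all *.json test
    files below json_dir, like the man page's find | sort -u one-liner."""
    return sorted({p.name for p in Path(json_dir).rglob("*.json")})
```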
\fBSAPHanaSR-tester\fP(7) , \fBSAPHanaSR-angi\fP(7) , \fBSAPHanaSR_basic_cluster\fP(7) , \fBSAPHanaSR-ScaleOut_basic_cluster\fP(7) , \fBsusHanaSR.py\fP(7) , \fBsusTkOver.py\fP(7) , \fBsusChkSrv.py\fP(7) , -\fBcrm\fP(8) , \fBcrm_verify\fP(8) , \fBstonith_admin\fP(8) , +\fBcrm\fP(8) , \fBcrm_verify\fP(8) , \fBcrm_mon\fP(8) , \fBstonith_admin\fP(8) , \fBcs_show_error_patterns\fP(8) , \fBcs_sum_base_config\fP(8) , \fBcs_show_sbd_devices\fP(8) , \fBsbd\fP(8) , \fBstonith_sbd\fP(8) , \fBsaptune\fP(8) , \fBchronyc\fP(8) , \fBsystemctl\fP(8) , \fBhosts\fP(5) , From 4a622f7bd359240a307b5509319294a80e1b9a08 Mon Sep 17 00:00:00 2001 From: lpinne Date: Wed, 10 Jan 2024 09:27:34 +0100 Subject: [PATCH 027/123] susHanaSR.py.7 susHanaSrMultiTarget.py.7: r/w -> read/write --- man/susHanaSR.py.7 | 2 +- man/susHanaSrMultiTarget.py.7 | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/man/susHanaSR.py.7 b/man/susHanaSR.py.7 index 8cfd458b..af1c1475 100644 --- a/man/susHanaSR.py.7 +++ b/man/susHanaSR.py.7 @@ -263,7 +263,7 @@ the sudo permissions configuration path to HANA tracefiles .TP /usr/sap/$SID/HDB$nr/.crm_attribute.$SITE -the internal cache for srHook status changes while Linux cluster is down, file is owned and r/w by ${sid}adm and must never be touched +the internal cache for srHook status changes while Linux cluster is down, file is owned and read/write by ${sid}adm and must never be touched .PP .\" .SH REQUIREMENTS diff --git a/man/susHanaSrMultiTarget.py.7 b/man/susHanaSrMultiTarget.py.7 index 44d49376..0bbbcb02 100644 --- a/man/susHanaSrMultiTarget.py.7 +++ b/man/susHanaSrMultiTarget.py.7 @@ -283,7 +283,7 @@ the sudo permission configuration the directory with HANA trace files .TP /usr/sap/$SID/HDB$nr/.crm_attribute.$SITE -the internal cache for srHook status changes while Linux cluster is down, file is owned by ${SID}adm and must never be touched +the internal cache for srHook status changes while Linux cluster is down, file is owned and read/write by 
${sid}adm and must never be touched .PP .\" .SH REQUIREMENTS From 09c03991da49035ac4fa64527d336f0e3f3248ef Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Wed, 10 Jan 2024 14:22:41 +0100 Subject: [PATCH 028/123] angi tester ScaleOut: use the new comperators --- test/json/angi-ScaleOut/defaults.json | 2 +- test/json/angi-ScaleOut/kill_prim_inst.json | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/test/json/angi-ScaleOut/defaults.json b/test/json/angi-ScaleOut/defaults.json index da941cf6..0399d6a8 100644 --- a/test/json/angi-ScaleOut/defaults.json +++ b/test/json/angi-ScaleOut/defaults.json @@ -3,7 +3,7 @@ "srMode": "sync", "checkPtr": { "globalUp": [ - "topology = ScaleOut" + "topology == ScaleOut" ], "pHostUp": [ "clone_state == PROMOTED", diff --git a/test/json/angi-ScaleOut/kill_prim_inst.json b/test/json/angi-ScaleOut/kill_prim_inst.json index 894e1479..5c9620f6 100644 --- a/test/json/angi-ScaleOut/kill_prim_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_inst.json @@ -56,8 +56,8 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss = 1", - "srr = P", + "lss == 1", + "srr == P", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" From d16f6c9db520fe2c123b034b767c00df0c836fe4 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Wed, 10 Jan 2024 14:27:30 +0100 Subject: [PATCH 029/123] angi tester ScaleOut: use the new sct test files --- test/json/angi-ScaleOut/free_log_area.json | 2 +- test/json/angi-ScaleOut/freeze_prim_master_nfs.json | 2 +- test/json/angi-ScaleOut/freeze_prim_site_nfs.json | 2 +- test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json | 2 +- test/json/angi-ScaleOut/restart_cluster.json | 2 +- test/json/angi-ScaleOut/restart_cluster_hana_running.json | 2 +- test/json/angi-ScaleOut/restart_cluster_turn_hana.json | 2 +- 7 files changed, 7 insertions(+), 7 deletions(-) diff --git a/test/json/angi-ScaleOut/free_log_area.json b/test/json/angi-ScaleOut/free_log_area.json 
index 3de2d553..451cfd56 100644 --- a/test/json/angi-ScaleOut/free_log_area.json +++ b/test/json/angi-ScaleOut/free_log_area.json @@ -9,7 +9,7 @@ "next": "step20", "loop": 1, "wait": 1, - "post": "shell test_free_log_area", + "post": "shell sct_test_free_log_area", "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", diff --git a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json index 31455dc6..2fa758a6 100644 --- a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json @@ -9,7 +9,7 @@ "next": "step20", "loop": 1, "wait": 1, - "post": "shell test_freeze_prim_master_nfs", + "post": "shell sct_test_freeze_prim_master_nfs", "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", diff --git a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json index 27338cdd..477df53b 100644 --- a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json @@ -9,7 +9,7 @@ "next": "step20", "loop": 1, "wait": 1, - "post": "shell test_freeze_prim_site_nfs", + "post": "shell sct_test_freeze_prim_site_nfs", "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", diff --git a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json index 15aacfa3..54e79d46 100644 --- a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json @@ -9,7 +9,7 @@ "next": "final40", "loop": 1, "wait": 1, - "post": "shell test_maintenance_cluster_turn_hana", + "post": "shell sct_test_maintenance_cluster_turn_hana", "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", diff --git a/test/json/angi-ScaleOut/restart_cluster.json b/test/json/angi-ScaleOut/restart_cluster.json index 7018fdd2..6434fe0c 100644 --- a/test/json/angi-ScaleOut/restart_cluster.json +++ 
b/test/json/angi-ScaleOut/restart_cluster.json @@ -9,7 +9,7 @@ "next": "final40", "loop": 1, "wait": 1, - "post": "shell test_restart_cluster", + "post": "shell sct_test_restart_cluster", "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", diff --git a/test/json/angi-ScaleOut/restart_cluster_hana_running.json b/test/json/angi-ScaleOut/restart_cluster_hana_running.json index b7a20ca7..85dd0e3f 100644 --- a/test/json/angi-ScaleOut/restart_cluster_hana_running.json +++ b/test/json/angi-ScaleOut/restart_cluster_hana_running.json @@ -9,7 +9,7 @@ "next": "final40", "loop": 1, "wait": 1, - "post": "shell test_restart_cluster_hana_running", + "post": "shell sct_test_restart_cluster_hana_running", "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", diff --git a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json index cd398f38..4ae5c4f1 100644 --- a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json @@ -9,7 +9,7 @@ "next": "final40", "loop": 1, "wait": 1, - "post": "shell test_restart_cluster_turn_hana", + "post": "shell sct_test_restart_cluster_turn_hana", "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", From 17e81777511084c461407d6c2267bd026051720d Mon Sep 17 00:00:00 2001 From: lpinne Date: Fri, 12 Jan 2024 12:37:31 +0100 Subject: [PATCH 030/123] saphana_sr_test.py standby_secn_worker_node.json: tandby_secn_worker_node --- .../standby_secn_worker_node.json | 96 +++++++++++++++++++ test/saphana_sr_test.py | 4 + 2 files changed, 100 insertions(+) create mode 100644 test/json/angi-ScaleOut/standby_secn_worker_node.json diff --git a/test/json/angi-ScaleOut/standby_secn_worker_node.json b/test/json/angi-ScaleOut/standby_secn_worker_node.json new file mode 100644 index 00000000..197908e9 --- /dev/null +++ b/test/json/angi-ScaleOut/standby_secn_worker_node.json @@ -0,0 +1,96 @@ +{ + "test": "standby_secn_worker_node", + "name": 
"standby secondary worker node (and online again)", + "start": "prereq10", + "steps": [ + { + "step": "prereq10", + "name": "test prerequisites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "standby_secn_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "post": "online_secn_worker_node", + "pSite": [ + "lss == 4", + "srr == P", + "lpt > 1000000000", + "srHook == PRIM", + "srPoll == PRIM" + ], + "sSite": [ + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SFAIL", + "srPoll == SFAIL" + ], + "pHost": [ + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" + ], + "sHost": [ + "clone_state == UNDEFINED", + "roles == master1::worker:", + "score == 100", + "standby == on" + ] + }, + { + "step": "step30", + "name": "node back online", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ + "lss == 4", + "srr == P", + "lpt > 1000000000", + "srHook == PRIM", + "srPoll == PRIM" + ], + "sSite": [ + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SWAIT", + "srPoll == SFAIL" + ], + "pHost": [ + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" + ], + "sHost": [ + "clone_state == DEMOTED", + "roles == master1::worker:", + "score ~ (-INFINITY|0)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } + ] +} diff --git a/test/saphana_sr_test.py b/test/saphana_sr_test.py index 51aab7ef..91c6aeb3 100755 --- a/test/saphana_sr_test.py +++ b/test/saphana_sr_test.py @@ -607,6 +607,10 @@ def action_on_cluster(self, action_name): cmd = "crm node standby {}".format(self.topolo['pHost']) elif action_name == "opn": cmd = "crm node
online {}".format(self.topolo['pHost']) + elif action_name == "standby_secn_worker_node": + cmd = "crm node standby {}".format(self.topolo['sWorker']) + elif action_name == "online_secn_worker_node": + cmd = "crm node online {}".format(self.topolo['sWorker']) elif action_name == "cleanup": cmd = "crm resource cleanup {}".format(resource) elif action_name == "kill_secn_worker_node": From efe8268fb816cb0131e5abf9d356fa71f9272559 Mon Sep 17 00:00:00 2001 From: lpinne Date: Fri, 12 Jan 2024 13:00:46 +0100 Subject: [PATCH 031/123] SAPHanaSR-tests-angi-ScaleOut.7 SAPHanaSR-tests-description.7: standby_secn_worker_node --- man-tester/SAPHanaSR-tests-angi-ScaleOut.7 | 3 +++ man-tester/SAPHanaSR-tests-description.7 | 15 +++++++++++++++ 2 files changed, 18 insertions(+) diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 index 9533fb28..56d22e49 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 @@ -97,6 +97,9 @@ Standby primary master node and online again. .TP \fBstandby_secondary_node\fP Standby secondary master node and online again. +.TP +\fBstandby_secondary_worker_node\fP +Standby secondary worker node and online again. .RE .PP .\" diff --git a/man-tester/SAPHanaSR-tests-description.7 b/man-tester/SAPHanaSR-tests-description.7 index 9c8bc448..98ab0413 100644 --- a/man-tester/SAPHanaSR-tests-description.7 +++ b/man-tester/SAPHanaSR-tests-description.7 @@ -538,6 +538,21 @@ Expect: Secondary (master) node standby and finally online. HANA primary stays online. HANA secondary stopped and finally started. SR SFAIL and finally SOK. No takeover. No fencing. +.PP +\fBstandby_secondary_worker_node\fP +.RS 2 +Descr: Set secondary worker node standby and online again. +.br +Topology: ScaleOut. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: crm node standby ; crm node online +.br +Expect: Secondary worker node standby and finally online. 
+HANA primary stays online. +HANA secondary stopped and finally started. +SR SFAIL and finally SOK. No takeover. No fencing. .RE .PP .\" From 8e1e825fa6b84274fa481716f09c0193a64f1c09 Mon Sep 17 00:00:00 2001 From: lpinne Date: Fri, 12 Jan 2024 13:12:35 +0100 Subject: [PATCH 032/123] SAPHanaSR-tests-angi-ScaleOut.7 SAPHanaSR-tests-description.7: standby_secn_worker_node --- man-tester/SAPHanaSR-tests-description.7 | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/man-tester/SAPHanaSR-tests-description.7 b/man-tester/SAPHanaSR-tests-description.7 index 98ab0413..3bcda53b 100644 --- a/man-tester/SAPHanaSR-tests-description.7 +++ b/man-tester/SAPHanaSR-tests-description.7 @@ -551,8 +551,8 @@ Test: crm node standby ; crm node online .br Expect: Secondary worker node standby and finally online. HANA primary stays online. -HANA secondary stopped and finally started. -SR SFAIL and finally SOK. No takeover. No fencing. +HANA secondary stays online. HANA worker clone_state goes to UNDEFINED and finally to DEMOTED. +SR stays SOK. No takeover. No fencing. 
.RE .PP .\" From 7a6f6fe349c752d35edf378f5ea8b808ac94b883 Mon Sep 17 00:00:00 2001 From: lpinne Date: Fri, 12 Jan 2024 13:17:50 +0100 Subject: [PATCH 033/123] saphana_sr_test.py standby_secn_worker_node.json: tandby_secn_worker_node --- .../standby_secn_worker_node.json | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/test/json/angi-ScaleOut/standby_secn_worker_node.json b/test/json/angi-ScaleOut/standby_secn_worker_node.json index 197908e9..e1ae1cda 100644 --- a/test/json/angi-ScaleOut/standby_secn_worker_node.json +++ b/test/json/angi-ScaleOut/standby_secn_worker_node.json @@ -30,11 +30,11 @@ "srPoll == PRIM" ], "sSite": [ - "lpt == 10", - "lss == 1", + "lpt == 30", + "lss == 4", "srr == S", - "srHook == SFAIL", - "srPoll == SFAIL" + "srHook == SOK", + "srPoll == SOK" ], "pHost": [ "clone_state == PROMOTED", @@ -42,8 +42,8 @@ "score == 150" ], "sHost": [ - "clone_state == UNDEFINED", - "roles == master1::worker:", + "clone_state == DEMOTED", + "roles == master1:master:worker:master", "score == 100", "standby == on" ] @@ -63,11 +63,11 @@ "srPoll == PRIM" ], "sSite": [ - "lpt == 10", - "lss == 1", + "lpt == 30", + "lss == 4", "srr == S", - "srHook == SWAIT", - "srPoll == SFAIL" + "srHook == SOK", + "srPoll == SOK" ], "pHost": [ "clone_state == PROMOTED", @@ -76,8 +76,8 @@ ], "sHost": [ "clone_state == DEMOTED", - "roles == master1::worker:", - "score ~ (-INFINITY|0)" + "roles == master1:master:worker:master", + "score == 100" ] }, { From 1c6eb87e66ea967bc30308036a49fa3a6426ac28 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Fri, 12 Jan 2024 13:29:59 +0100 Subject: [PATCH 034/123] angi: saphana-topology-lib: call get_local_virtual_name() also in start and stop action --- ra/saphana-topology-lib | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/ra/saphana-topology-lib b/ra/saphana-topology-lib index 129814f9..4cf54f7f 100755 --- a/ra/saphana-topology-lib +++ b/ra/saphana-topology-lib @@ -418,6 +418,7 @@ 
function sht_start_clone() { # called by: TODO super_ocf_log info "FLOW ${FUNCNAME[0]} ($*)" local rc="$OCF_NOT_RUNNING" + get_local_virtual_name sht_start; rc="$?" return "$rc" } # end function sht_start_clone @@ -448,7 +449,7 @@ function sht_stop_clone() { if [ -z "$timeout" ] || [ "$timeout" -lt "$stdTimeOut" ]; then timeout=$stdTimeOut fi - + get_local_virtual_name gNodeRole="$( get_role_by_landscape "$gVirtName" --timeout="$timeout")"; hanalrc="$?" if [[ "$hanalrc" != "124" ]]; then # normal exit, use gNodeRole From 65216baaac44c93d6c715e7a624d5de41c90b65b Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Fri, 12 Jan 2024 15:12:35 +0100 Subject: [PATCH 035/123] angi saphana-common-lib: do not set empty value for MNS --- ra/saphana-common-lib | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/ra/saphana-common-lib b/ra/saphana-common-lib index 93f3ed6b..9f10c417 100755 --- a/ra/saphana-common-lib +++ b/ra/saphana-common-lib @@ -994,7 +994,11 @@ function node_role_walk() { fi case "$raType" in saphana* ) # SAPHanaController to set the MNS attribute - set_hana_site_attribute "$gSite" "$gTheMaster" "${ATTR_NAME_HANA_SITE_MNS[@]}" + if [[ -n "$gSite" && -n "$gTheMaster" ]]; then + set_hana_site_attribute "$gSite" "$gTheMaster" "${ATTR_NAME_HANA_SITE_MNS[@]}" + else + super_ocf_log warn "DEC: either gSite or gTheMaster is not set. 
Do not set MNS" + fi ;; sht* ) # Currently skip that on SAPHanaTopology (to be checked) ;; From aff8f46fa6feefe9f5d71bc6ed506301dcec858b Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Fri, 12 Jan 2024 15:15:15 +0100 Subject: [PATCH 036/123] angi saphana-controller-lib: monitor handling of uncloned resource -> OCF_ERR_UNIMPLEMENTED --- ra/saphana-controller-lib | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/ra/saphana-controller-lib b/ra/saphana-controller-lib index cb4554fd..5437053f 100755 --- a/ra/saphana-controller-lib +++ b/ra/saphana-controller-lib @@ -1665,7 +1665,7 @@ function saphana_monitor_secondary() { ;; DEMOTED ) # This is the status we expect if ocf_is_probe; then - super_ocf_log info "ACT: saphana_monitor_secondary: set global_sec attribute to $gSite" + super_ocf_log info "ACT: saphana_monitor_secondary: set global_sec to $gSite" set_hana_attribute "$NODENAME" "$gSite" "${ATTR_NAME_HANA_SEC[@]}" fi promoted=0; @@ -1922,6 +1922,16 @@ function saphana_monitor_clone() { return "$rc" } # end function saphana_monitor_clone +function saphana_monitor() { + # this function should never be called currently. it is intended for future releases which might support un-cloned resources + if ! ocf_is_clone; then + super_ocf_log error "RA: resource is not defined as clone. 
This is not supported (OCF_ERR_UNIMPLEMENTED)" + return "$OCF_ERR_UNIMPLEMENTED" + else + return "$OCF_SUCCESS" + fi +} # end function saphana_monitor + function saphana_promote_clone() { # function: saphana_promote_clone - promote a hana clone # params: - From 5678d2c162fa23e52eda7560e0f11e361679f564 Mon Sep 17 00:00:00 2001 From: lpinne Date: Fri, 12 Jan 2024 16:21:31 +0100 Subject: [PATCH 037/123] standby_secn_node.json: fixed name --- test/json/angi-ScaleOut/standby_secn_node.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test/json/angi-ScaleOut/standby_secn_node.json b/test/json/angi-ScaleOut/standby_secn_node.json index c573f71b..d9247fa1 100644 --- a/test/json/angi-ScaleOut/standby_secn_node.json +++ b/test/json/angi-ScaleOut/standby_secn_node.json @@ -1,6 +1,6 @@ { "test": "standby_secn_node", - "name": "standby secondary node (and online again)", + "name": "standby secondary master node (and online again)", "start": "prereq10", "steps": [ { From 48a184df121619d1a2ebdf5bc83d4938ff5648e0 Mon Sep 17 00:00:00 2001 From: lpinne Date: Fri, 12 Jan 2024 16:26:39 +0100 Subject: [PATCH 038/123] SAPHanaSR-tests-angi-ScaleOut.7 SAPHanaSR-tests-angi-ScaleUp.7 SAPHanaSR-tests-description.7: fixed test names --- man-tester/SAPHanaSR-tests-angi-ScaleOut.7 | 6 +++--- man-tester/SAPHanaSR-tests-angi-ScaleUp.7 | 4 ++-- man-tester/SAPHanaSR-tests-description.7 | 6 +++--- 3 files changed, 8 insertions(+), 8 deletions(-) diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 index 56d22e49..d4e43650 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 @@ -92,13 +92,13 @@ Stop and restart cluster, keep HANA running. \fBrestart_cluster_turn_hana\fP Stop cluster and HANA, takeover HANA, start cluster. .TP -\fBstandby_primary_node\fP +\fBstandby_prim_node\fP Standby primary master node and online again. 
.TP -\fBstandby_secondary_node\fP +\fBstandby_secn_node\fP Standby secondary master node and online again. .TP -\fBstandby_secondary_worker_node\fP +\fBstandby_secn_worker_node\fP Standby secondary worker node and online again. .RE .PP diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 index f6843c37..3e8b1331 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 @@ -73,10 +73,10 @@ Stop cluster and HANA, manually start and takeover HANA, start cluster. \fBsplit_brain_prio\fP Network split-brain with priority fencing. .TP -\fBstandby_primary_node\fP +\fBstandby_prim_node\fP Set primary node standby and online again. .TP -\fBstandby_secondary_node\fP +\fBstandby_secn_node\fP Set secondary node standby and online again. .PP .\" diff --git a/man-tester/SAPHanaSR-tests-description.7 b/man-tester/SAPHanaSR-tests-description.7 index 3bcda53b..4468358b 100644 --- a/man-tester/SAPHanaSR-tests-description.7 +++ b/man-tester/SAPHanaSR-tests-description.7 @@ -504,7 +504,7 @@ SR SFAIL and finally SOK. No takeover. One fencing. .RE .PP -\fBstandby_primary_node\fP +\fBstandby_prim_node\fP .RS 2 Descr: Set primary node standby and online again. On scale-out, standby primary master node and online again. @@ -523,7 +523,7 @@ SR SFAIL and finally SOK. One takeover. No fencing. .RE .PP -\fBstandby_secondary_node\fP +\fBstandby_secn_node\fP .RS 2 Descr: Set secondary node standby and online again. On scale-out, standby secondary master node and online again. @@ -539,7 +539,7 @@ HANA primary stays online. HANA secondary stopped and finally started. SR SFAIL and finally SOK. No takeover. No fencing. .PP -\fBstandby_secondary_worker_node\fP +\fBstandby_secn_worker_node\fP .RS 2 Descr: Set secondary worker node standby and online again. 
.br From 8cc4a151c85db38c602b6daa0be967791de68bf3 Mon Sep 17 00:00:00 2001 From: lpinne Date: Fri, 12 Jan 2024 16:29:36 +0100 Subject: [PATCH 039/123] kill_prim_inst.json: fixed name --- test/json/angi-ScaleOut/kill_prim_inst.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test/json/angi-ScaleOut/kill_prim_inst.json b/test/json/angi-ScaleOut/kill_prim_inst.json index 5c9620f6..85281e1c 100644 --- a/test/json/angi-ScaleOut/kill_prim_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_inst.json @@ -1,6 +1,6 @@ { "test": "kill_prim_inst", - "name": "Kill primary instance", + "name": "Kill primary master instance", "start": "prereq10", "steps": [ { From c11455f4ecb63d84627c3b67ba5f76819a0e6f4e Mon Sep 17 00:00:00 2001 From: lpinne Date: Fri, 12 Jan 2024 16:33:53 +0100 Subject: [PATCH 040/123] kill_secn_inst.json: fixed name --- test/json/angi-ScaleOut/kill_secn_inst.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test/json/angi-ScaleOut/kill_secn_inst.json b/test/json/angi-ScaleOut/kill_secn_inst.json index e515afe6..54e9939b 100644 --- a/test/json/angi-ScaleOut/kill_secn_inst.json +++ b/test/json/angi-ScaleOut/kill_secn_inst.json @@ -1,6 +1,6 @@ { "test": "kill_secn_inst", - "name": "Kill secondary instance", + "name": "Kill secondary master instance", "start": "prereq10", "steps": [ { From 3f1843d0f407402c9c078c3a4c674d9f0acb6d59 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Fri, 12 Jan 2024 17:01:53 +0100 Subject: [PATCH 041/123] angi tester: saphana_sr_test.py - fail step, if check syntax is incorrect --- test/saphana_sr_test.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/test/saphana_sr_test.py b/test/saphana_sr_test.py index 91c6aeb3..f460573c 100755 --- a/test/saphana_sr_test.py +++ b/test/saphana_sr_test.py @@ -347,6 +347,10 @@ def run_checks(self, checks, area_name, object_name, step_step ): # match # TODO: maybe allow flexible whitespace match_obj = re.search("(.*) (==|!=|>|>=|<|<=|~|!~|>~|is) 
(.*)", single_check) + if match_obj is None: + self.message(f"ERROR: step={step_step} unknown comparator in {single_check}") + check_result = 2 + break c_key = match_obj.group(1) c_comp = match_obj.group(2) c_reg_exp = match_obj.group(3) From a838183d2ce17f1d2aa2015a226308150651dc5c Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Fri, 12 Jan 2024 17:05:09 +0100 Subject: [PATCH 042/123] angi package: minor version added - only for local builds to be able to update --- SAPHanaSR-angi.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/SAPHanaSR-angi.spec b/SAPHanaSR-angi.spec index 8d7a8be6..26dc2695 100644 --- a/SAPHanaSR-angi.spec +++ b/SAPHanaSR-angi.spec @@ -21,7 +21,7 @@ License: GPL-2.0 Group: Productivity/Clustering/HA AutoReqProv: on Summary: Resource agents to control the HANA database in system replication setup -Version: 1.2.3 +Version: 1.2.3.1 Release: 0 Url: https://www.suse.com/c/fail-safe-operation-of-sap-hana-suse-extends-its-high-availability-solution/ From ab1f29fe66218fad537d945b8fc1fed4e8bd0245 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Fri, 12 Jan 2024 18:35:25 +0100 Subject: [PATCH 043/123] angi tester: SAPHanaSR-testCluster, saphana_sr_test.py - catch file errors --- test/SAPHanaSR-testCluster | 3 ++- test/saphana_sr_test.py | 21 +++++++++++++++------ 2 files changed, 17 insertions(+), 7 deletions(-) diff --git a/test/SAPHanaSR-testCluster b/test/SAPHanaSR-testCluster index 61931ca9..d4494416 100755 --- a/test/SAPHanaSR-testCluster +++ b/test/SAPHanaSR-testCluster @@ -142,7 +142,8 @@ while test01.run['count'] <= test01.config['repeat']: ) test01.message(l_msg) - test01.read_test_file() + if test01.read_test_file() != 0: + sys.exit(1) ### debug exit after printing test properties if test01.config['printTestProperties'] is True: p_msg = ( diff --git a/test/saphana_sr_test.py b/test/saphana_sr_test.py index f460573c..8bb760eb 100755 --- a/test/saphana_sr_test.py +++ b/test/saphana_sr_test.py @@ -287,16 +287,25 @@ 
def read_test_file(self): if self.config['test_file'] == "-": self.test_data.update(json.load(sys.stdin)) else: - with open(self.config['test_file'], encoding="utf-8") as tf_fh: - #print(f"read test file {self.config['test_file']}") - self.test_data.update(json.load(tf_fh)) + try: + with open(self.config['test_file'], encoding="utf-8") as tf_fh: + #print(f"read test file {self.config['test_file']}") + self.test_data.update(json.load(tf_fh)) + except FileNotFoundError as e_file: + self.message(f"ERROR: File error: {e_file}") + return 1 if self.config['properties_file']: #print(f"read properties file {self.config['properties_file']}") - with open(self.config['properties_file'], encoding="utf-8") as prop_fh: - self.test_data.update(json.load(prop_fh)) + try: + with open(self.config['properties_file'], encoding="utf-8") as prop_fh: + self.test_data.update(json.load(prop_fh)) + except FileNotFoundError as e_file: + self.message(f"ERROR: File error: {e_file}") + return 1 self.run['test_id'] = self.test_data['test'] self.debug("DEBUG: test_data: {}".format(str(self.test_data)), stdout=False) + return 0 def write_test_properties(self, topology): """ @@ -412,7 +421,7 @@ def run_checks(self, checks, area_name, object_name, step_step ): c_err = 0 check_result = max(check_result, 0) if c_err == 1: - if not found: + if not found: l_val = None self.__add_failed__((area_name, object_name), (c_key, l_val, c_reg_exp, c_comp)) check_result = max(check_result, 1) From 85041eac150abf501e2dcd9f533f7b6b22294d2a Mon Sep 17 00:00:00 2001 From: lpinne Date: Mon, 15 Jan 2024 09:39:05 +0100 Subject: [PATCH 044/123] standby_secn_worker_node.json: todo --- test/json/angi-ScaleOut/standby_secn_worker_node.json | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/test/json/angi-ScaleOut/standby_secn_worker_node.json b/test/json/angi-ScaleOut/standby_secn_worker_node.json index e1ae1cda..b19fbc17 100644 --- a/test/json/angi-ScaleOut/standby_secn_worker_node.json +++ 
b/test/json/angi-ScaleOut/standby_secn_worker_node.json @@ -18,7 +18,8 @@ { "step": "step20", "name": "node is standby", - "next": "step30", + "todo": "node clone_state UNDEFINED", + "next": "step30", "loop": 120, "wait": 2, "post": "online_secn_worker_node", From 629d18aab90cc7ab7439e0a19bbf32e00789b878 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Mon, 15 Jan 2024 12:04:48 +0100 Subject: [PATCH 045/123] angi: Always remove SAP sockets, if sapstartsrv is init V controlled and needs a (re-)start. Old code to change permissions has been deleted in angi. See also bsc#1218696. --- ra/saphana-controller-common-lib | 43 +++++--------------------------- ra/saphana-controller-lib | 12 +-------- ra/saphana-filesystem-lib | 1 - 3 files changed, 7 insertions(+), 49 deletions(-) diff --git a/ra/saphana-controller-common-lib b/ra/saphana-controller-common-lib index b9517e3c..933590b1 100755 --- a/ra/saphana-controller-common-lib +++ b/ra/saphana-controller-common-lib @@ -418,43 +418,12 @@ function handle_unix_domain_sockets() { # called by: TODO super_ocf_log info "FLOW ${FUNCNAME[0]}" - if ocf_is_true "$RemoveSAPSockets"; then - # previous and new default handling - # removing the unix domain socket files as they might have - # wrong permissions or ownership - # they will be recreated by sapstartsrv during next start - rm -f /tmp/.sapstream5"${InstanceNr}"13 - rm -f /tmp/.sapstream5"${InstanceNr}"14 - else - # try to change permission and ownership of the unix domain socket files - for sockFile in /tmp/.sapstream5"${InstanceNr}"13 /tmp/.sapstream5"${InstanceNr}"14; do - sockError=false - if [ -S "$sockFile" ]; then - if ! chown "${sid}"adm "$sockFile" > /dev/null 2>&1; then - sockError=true - fi - if ! chgrp sapsys "$sockFile" > /dev/null 2>&1; then - sockError=true - fi - if chmod 700 "$sockFile" > /dev/null 2>&1; then - sockError=true - fi - else - if [ -f "$sockFile" ]; then - super_ocf_log error "RA: file '$sockFile' still exists, but is NO longer a socket." 
- super_ocf_log info "DEC: remove file '$sockFile' to give sapstartsrv a chance to create it the right way during next start" - sockError=true - else - super_ocf_log info "RA: file '$sockFile' not found. Nothing to do, let sapstartsrv create a new one during start" - fi - fi - if ocf_is_true "$sockError"; then - # fallback - if there are Errors, remove unix domain socket as - # last resort - rm -f "$sockFile" - fi - done - fi + # previous and new default handling + # removing the unix domain socket files as they might have + # wrong permissions or ownership + # they will be recreated by sapstartsrv during next start + rm -f /tmp/.sapstream5"${InstanceNr}"13 + rm -f /tmp/.sapstream5"${InstanceNr}"14 } # end function handle_unix_domain_sockets # diff --git a/ra/saphana-controller-lib b/ra/saphana-controller-lib index 5437053f..42b52fd1 100755 --- a/ra/saphana-controller-lib +++ b/ra/saphana-controller-lib @@ -23,7 +23,6 @@ # OCF_RESKEY_INSTANCE_PROFILE (optional, well known directories will be searched by default) # OCF_RESKEY_PREFER_SITE_TAKEOVER (optional, default is no) # OCF_RESKEY_DUPLICATE_PRIMARY_TIMEOUT (optional, time difference needed between two last-primary-tiemstampe (lpt)) -# OCF_RESKEY_REMOVE_SAP_SOCKETS (optional, default is true) # # HANA must support the following commands: # hdbnsutil -sr_stateConfiguration (unsure, if this means >= SPS110, SPS111 or SPS10x) @@ -141,14 +140,6 @@ function saphana_print_parameters() { Local or site recover preferred? - - Should the RA remove the unix domain sockets related to sapstartsrv before (re-)starting sapstartsrv ? Default="true" - no: Do not remove the sockets, try chown/chgrp instead - yes: Do remove the sockets - - Remove unix domain sockets related to sapstartsrv? - - Define, if a former primary should automatically be registered. 
The parameter AUTOMATED_REGISTER defines, whether a former primary instance should @@ -392,7 +383,6 @@ function saphana_init_get_ocf_parameters() { sidadm="${sid}adm" PreferSiteTakeover="${OCF_RESKEY_PREFER_SITE_TAKEOVER^^}" # upper case AUTOMATED_REGISTER="${OCF_RESKEY_AUTOMATED_REGISTER:-false}" - RemoveSAPSockets="${OCF_RESKEY_REMOVE_SAP_SOCKETS:-true}" DUPLICATE_PRIMARY_TIMEOUT="${OCF_RESKEY_DUPLICATE_PRIMARY_TIMEOUT:-7200}" ocf_env=$(env | grep 'OCF_RESKEY_CRM') super_ocf_log debug "DBG: OCF: $ocf_env" @@ -411,7 +401,7 @@ function saphana_init() { local rc="$OCF_SUCCESS" clN SYSTEMCTL="/usr/bin/systemctl" NODENAME=$(crm_node -n) - saphana_init_get_ocf_parameters # set SID, sid, sidadm, InstanceNr, InstanceName HANA_CALL_TIMEOUT, PreferSiteTakeover, AUTOMATED_REGISTER, RemoveSAPSockets + saphana_init_get_ocf_parameters # set SID, sid, sidadm, InstanceNr, InstanceName HANA_CALL_TIMEOUT, PreferSiteTakeover, AUTOMATED_REGISTER # # create directory for HANA_CALL command sdtout and stderr tracking # diff --git a/ra/saphana-filesystem-lib b/ra/saphana-filesystem-lib index 150ae462..590b8ae8 100755 --- a/ra/saphana-filesystem-lib +++ b/ra/saphana-filesystem-lib @@ -190,7 +190,6 @@ function shfs_init() { pp_hana_shared="/dev/shm/poison_pill_${SID}" sid="${SID,,}" export sidadm="${sid}adm" - RemoveSAPSockets="" # not needed in SAPHanaFilesystem, but referened in shell lib saphana-controller-common-lib used by SAPHanaFilesystem # # init attribute definitions # From 4bdb7df45a922266bde10add41eb9c597abbdfaa Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Mon, 15 Jan 2024 12:08:27 +0100 Subject: [PATCH 046/123] angi: Always remove SAP sockets - also remove OCF_RESKEY_REMOVE_SAP_SOCKETS from comments --- ra/saphana-controller-common-lib | 2 -- 1 file changed, 2 deletions(-) diff --git a/ra/saphana-controller-common-lib b/ra/saphana-controller-common-lib index 933590b1..6acec61d 100755 --- a/ra/saphana-controller-common-lib +++ b/ra/saphana-controller-common-lib @@ -23,7 
+23,6 @@ # OCF_RESKEY_INSTANCE_PROFILE (optional, well known directories will be searched by default) # OCF_RESKEY_PREFER_SITE_TAKEOVER (optional, default is no) # OCF_RESKEY_DUPLICATE_PRIMARY_TIMEOUT (optional, time difference needed between two last-primary-tiemstampe (lpt)) -# OCF_RESKEY_REMOVE_SAP_SOCKETS (optional, default is true) # # HANA must support the following commands: # hdbnsutil -sr_stateConfiguration (unsure, if this means >= SPS110, SPS111 or SPS10x) @@ -412,7 +411,6 @@ function saphana_init_sap_commands() { # # handle_unix_domain_sockets # params: - -# globals: OCF_RESKEY_REMOVE_SAP_SOCKETS # function handle_unix_domain_sockets() { # called by: TODO From 6712c4eaa7b9cfbe95d0152542d7b42e573c2a74 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Mon, 15 Jan 2024 12:09:19 +0100 Subject: [PATCH 047/123] angi: Always remove SAP sockets - also remove OCF_RESKEY_REMOVE_SAP_SOCKETS from comments --- ra/SAPHanaController | 1 - 1 file changed, 1 deletion(-) diff --git a/ra/SAPHanaController b/ra/SAPHanaController index db64f8e6..1a538216 100755 --- a/ra/SAPHanaController +++ b/ra/SAPHanaController @@ -27,7 +27,6 @@ # OCF_RESKEY_INSTANCE_PROFILE (optional, well known directories will be searched by default) # OCF_RESKEY_PREFER_SITE_TAKEOVER (optional, default is no) # OCF_RESKEY_DUPLICATE_PRIMARY_TIMEOUT (optional, time difference needed between two last-primary-tiemstampe (lpt)) -# OCF_RESKEY_REMOVE_SAP_SOCKETS (optional, default is true) # # HANA must support the following commands: # hdbnsutil -sr_stateConfiguration From dc54cf77d196f5b73d255f01dd9a4c0204720389 Mon Sep 17 00:00:00 2001 From: lpinne Date: Mon, 15 Jan 2024 12:11:40 +0100 Subject: [PATCH 048/123] ocf_suse_SAPHana.7 ocf_suse_SAPHanaController.7: removed REMOVE_SAP_SOCKET --- man/ocf_suse_SAPHana.7 | 10 ---------- man/ocf_suse_SAPHanaController.7 | 9 --------- 2 files changed, 19 deletions(-) diff --git a/man/ocf_suse_SAPHana.7 b/man/ocf_suse_SAPHana.7 index 5c57b6c6..e5810060 100644 --- 
a/man/ocf_suse_SAPHana.7 +++ b/man/ocf_suse_SAPHana.7 @@ -165,16 +165,6 @@ Example: "AUTOMATED_REGISTER=true". Default value: false\&. .RE .PP -\fBREMOVE_SAP_SOCKETS\fR -.RS 4 -Defines whether the RA removes the UNIX domain sockets related to sapstartsrv -before (re-)starting sapstartsrv. -.br -Example: "REMOVE_SAP_SOCKETS=true". -.br -Optional. Default value: true\&. -.RE -.PP \fBSAPHanaFilter\fR .RS 4 Outdated parameter. Please do not use it any longer. diff --git a/man/ocf_suse_SAPHanaController.7 b/man/ocf_suse_SAPHanaController.7 index 1f86b4e5..6f6e1dc2 100644 --- a/man/ocf_suse_SAPHanaController.7 +++ b/man/ocf_suse_SAPHanaController.7 @@ -158,15 +158,6 @@ Example: "AUTOMATED_REGISTER=true". Optional. Default value: false\&. .RE .PP -\fBREMOVE_SAP_SOCKETS\fR -.RS 4 -Should the RA remove the unix domain sockets related to sapstartsrv before -starting sapstartsrv? If set to false, the RA does not remove the sockets, but -runs chown/chgrp to adapte the ownerwhip of the existing sockets. -.br -Optional. Default value: true\&. 
-.RE -.PP .\" .SH SUPPORTED PROPERTIES .br From 5640915bf8168dead5d5bae0d1d8d04e677dbf37 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Mon, 15 Jan 2024 13:12:36 +0100 Subject: [PATCH 049/123] angi: report failing UDS authentication; no longer check UDS file but directly sapcontrol client command - see also bsc#1218698 --- ra/saphana-controller-common-lib | 38 ++++++++++++++++---------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/ra/saphana-controller-common-lib b/ra/saphana-controller-common-lib index 6acec61d..88a08bf5 100755 --- a/ra/saphana-controller-common-lib +++ b/ra/saphana-controller-common-lib @@ -442,7 +442,12 @@ function check_sapstartsrv() { local systemd_unit_name="SAP${SID}_${InstanceNr}.service" if "$SYSTEMCTL" is-active --quiet "$systemd_unit_name"; then - super_ocf_log info "ACT: systemd service $systemd_unit_name is active" + if output=$("$SAPCONTROL" -nr "$InstanceNr" -function ParameterValue INSTANCE_NAME -format script) + then + super_ocf_log info "ACT: systemd service $systemd_unit_name is active and Unix Domain Socket authentication is working." + else + super_ocf_log info "ACT: systemd service $systemd_unit_name is active but Unix Domain Socket authentication is NOT working." + fi else super_ocf_log warn "ACT: systemd service $systemd_unit_name is not active, it will be started using systemd" # use start, because restart does also stop sap instance @@ -455,29 +460,24 @@ fi else # no SAP systemd unit available, continue with old code... - if [ ! 
-S "/tmp/.sapstream5${InstanceNr}13" ]; then - super_ocf_log warn "ACT: sapstartsrv is not running for instance $SID-$InstanceName (no UDS), it will be started now" - restart=1 - else - if output=$("$SAPCONTROL" -nr "$InstanceNr" -function ParameterValue INSTANCE_NAME -format script) + if output=$("$SAPCONTROL" -nr "$InstanceNr" -function ParameterValue INSTANCE_NAME -format script) + then + runninginst=$(echo "$output" | grep '^0 : ' | cut -d' ' -f3) + if [ "$runninginst" != "$InstanceName" ] then - runninginst=$(echo "$output" | grep '^0 : ' | cut -d' ' -f3) - if [ "$runninginst" != "$InstanceName" ] + super_ocf_log warn "ACT: sapstartsrv is running for instance $runninginst, that service will be killed" + restart=1 + else + if ! output=$("$SAPCONTROL" -nr "$InstanceNr" -function AccessCheck Start) then - super_ocf_log warn "ACT: sapstartsrv is running for instance $runninginst, that service will be killed" + super_ocf_log warn "ACT: FAILED - sapcontrol -nr $InstanceNr -function AccessCheck Start ($(ls -ld1 "/tmp/.sapstream5${InstanceNr}13"))" + super_ocf_log warn "ACT: sapstartsrv will be restarted to try to solve this situation, otherwise please check sapstartsrv setup (SAP Note 927637)" restart=1 - else - if ! 
output=$("$SAPCONTROL" -nr "$InstanceNr" -function AccessCheck Start) - then - super_ocf_log warn "ACT: FAILED - sapcontrol -nr $InstanceNr -function AccessCheck Start ($(ls -ld1 "/tmp/.sapstream5${InstanceNr}13"))" - super_ocf_log warn "ACT: sapstartsrv will be restarted to try to solve this situation, otherwise please check sapstsartsrv setup (SAP Note 927637)" - restart=1 - fi fi - else - super_ocf_log warn "ACT: sapstartsrv is not running for instance $SID-$InstanceName, it will be started now" - restart=1 fi + else + super_ocf_log warn "ACT: sapstartsrv is not running for instance $SID-$InstanceName, it will be started now" + restart=1 fi if [ -z "$runninginst" ]; then runninginst="$InstanceName"; fi if [[ "$restart" == 1 ]] From 497e677020adf91c4a213c2c248fb8c13c3583da Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Mon, 15 Jan 2024 13:12:36 +0100 Subject: [PATCH 050/123] angi: report failing UDS authentication; no longer check UDS file but directly sapcontrol client command - see also bsc#1218698 --- ra/saphana-controller-common-lib | 38 ++++++++++++++++---------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/ra/saphana-controller-common-lib b/ra/saphana-controller-common-lib index 6acec61d..88a08bf5 100755 --- a/ra/saphana-controller-common-lib +++ b/ra/saphana-controller-common-lib @@ -442,7 +442,12 @@ function check_sapstartsrv() { local systemd_unit_name="SAP${SID}_${InstanceNr}.service" if "$SYSTEMCTL" is-active --quiet "$systemd_unit_name"; then - super_ocf_log info "ACT: systemd service $systemd_unit_name is active" + if output=$("$SAPCONTROL" -nr "$InstanceNr" -function ParameterValue INSTANCE_NAME -format script) + then + super_ocf_log info "ACT: systemd service $systemd_unit_name is active and Unix Domain Socket authentication is working." + else + super_ocf_log info "ACT: systemd service $systemd_unit_name is active but Unix Domain Socket authentication is NOT working." 
+ fi else super_ocf_log warn "ACT: systemd service $systemd_unit_name is not active, it will be started using systemd" # use start, because restart does also stop sap instance @@ -455,29 +460,24 @@ fi else # no SAP systemd unit available, continue with old code... - if [ ! -S "/tmp/.sapstream5${InstanceNr}13" ]; then - super_ocf_log warn "ACT: sapstartsrv is not running for instance $SID-$InstanceName (no UDS), it will be started now" - restart=1 - else - if output=$("$SAPCONTROL" -nr "$InstanceNr" -function ParameterValue INSTANCE_NAME -format script) + if output=$("$SAPCONTROL" -nr "$InstanceNr" -function ParameterValue INSTANCE_NAME -format script) + then + runninginst=$(echo "$output" | grep '^0 : ' | cut -d' ' -f3) + if [ "$runninginst" != "$InstanceName" ] then - runninginst=$(echo "$output" | grep '^0 : ' | cut -d' ' -f3) - if [ "$runninginst" != "$InstanceName" ] + super_ocf_log warn "ACT: sapstartsrv is running for instance $runninginst, that service will be killed" + restart=1 + else + if ! output=$("$SAPCONTROL" -nr "$InstanceNr" -function AccessCheck Start) then - super_ocf_log warn "ACT: sapstartsrv is running for instance $runninginst, that service will be killed" + super_ocf_log warn "ACT: FAILED - sapcontrol -nr $InstanceNr -function AccessCheck Start ($(ls -ld1 "/tmp/.sapstream5${InstanceNr}13"))" + super_ocf_log warn "ACT: sapstartsrv will be restarted to try to solve this situation, otherwise please check sapstartsrv setup (SAP Note 927637)" restart=1 - else - if ! 
output=$("$SAPCONTROL" -nr "$InstanceNr" -function AccessCheck Start) - then - super_ocf_log warn "ACT: FAILED - sapcontrol -nr $InstanceNr -function AccessCheck Start ($(ls -ld1 "/tmp/.sapstream5${InstanceNr}13"))" - super_ocf_log warn "ACT: sapstartsrv will be restarted to try to solve this situation, otherwise please check sapstsartsrv setup (SAP Note 927637)" - restart=1 - fi fi - else - super_ocf_log warn "ACT: sapstartsrv is not running for instance $SID-$InstanceName, it will be started now" - restart=1 fi + else + super_ocf_log warn "ACT: sapstartsrv is not running for instance $SID-$InstanceName, it will be started now" + restart=1 fi if [ -z "$runninginst" ]; then runninginst="$InstanceName"; fi if [[ "$restart" == 1 ]] From 44850046f746a23cd14a2b8c703e5ebd414829a9 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 09:56:54 +0100 Subject: [PATCH 051/123] standby_secn_worker_node.json: sWorker status --- .../standby_secn_worker_node.json | 86 ++++++------------- 1 file changed, 28 insertions(+), 58 deletions(-) diff --git a/test/json/angi-ScaleOut/standby_secn_worker_node.json b/test/json/angi-ScaleOut/standby_secn_worker_node.json index b19fbc17..e3717732 100644 --- a/test/json/angi-ScaleOut/standby_secn_worker_node.json +++ b/test/json/angi-ScaleOut/standby_secn_worker_node.json @@ -16,70 +16,40 @@ "sHost": "sHostUp" }, { - "step": "step20", - "name": "node is standby", - "todo": "node clone_state UNDEFINED", - "next": "step30", - "loop": 120, - "wait": 2, - "post": "online_secn_worker_node", - "pSite": [ - "lss == 4", - "srr == P", - "lpt > 1000000000", - "srHook == PRIM", - "srPoll == PRIM" - ], - "sSite": [ - "lpt == 30", - "lss == 4", - "srr == S", - "srHook == SOK", - "srPoll == SOK" - ], - "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" - ], - "sHost": [ - "clone_state == DEMOTED", + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "post": 
"online_secn_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": [ + "clone_state == UNDEFINED", "roles == master1:master:worker:master", - "score == 100", + "score == -12200", "standby == on" - ] + ] }, { - "step": "step30", - "name": "node back online", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ - "lss == 4", - "srr == P", - "lpt > 1000000000", - "srHook == PRIM", - "srPoll == PRIM" - ], - "sSite": [ - "lpt == 30", - "lss == 4", - "srr == S", - "srHook == SOK", - "srPoll == SOK" - ], - "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" - ], - "sHost": [ + "step": "step30", + "name": "node back online", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": [ "clone_state == DEMOTED", "roles == master1:master:worker:master", - "score == 100" - ] + "score == -12200", + "standby == off" + ] }, { "step": "final40", From 0a3f199f6e46d0125c6c86d144b3f9727de9750c Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 10:17:20 +0100 Subject: [PATCH 052/123] sct_test_freeze_secn_site_nfs freeze_secn_site_nfs.json: initial checkin --- .../angi-ScaleOut/freeze_secn_site_nfs.json | 92 +++++++++++++++++++ test/sct_test_freeze_secn_site_nfs | 16 ++++ 2 files changed, 108 insertions(+) create mode 100644 test/json/angi-ScaleOut/freeze_secn_site_nfs.json create mode 100755 test/sct_test_freeze_secn_site_nfs diff --git a/test/json/angi-ScaleOut/freeze_secn_site_nfs.json b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json new file mode 100644 index 00000000..48366795 --- /dev/null +++ b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json @@ -0,0 +1,92 @@ +{ + "test": "freeze_secn_site_nfs", + "name": "freeze sap hana nfs on secondary site", + "todo": "please correct 
this file", + "start": "prereq10", + "steps": [ + { + "step": "prereq10", + "name": "test prerequisites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell sct_test_freeze_secn_site_nfs", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ + "srr == P", + "lpt > 1000000000", + "srHook == PRIM", + "srPoll == PRIM" + ], + "sSite": [ + "lpt ~ (20|10)", + "lss == 1", + "srr == S", + "srHook == SFAIL", + "srPoll == SFAIL" + ], + "pHost": [ + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" + ], + "sHost": [ + ] + }, + { + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 300, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ + "lss == 4", + "srr == P", + "lpt > 1000000000", + "srHook == PRIM", + "srPoll == PRIM" + ], + "sSite": [ + "lpt ~ (20|10)", + "lss == 4", + "srr == S", + "srHook ~ (SOK|SWAIT)", + "srPoll ~ (SOK|SFAIL)" + ], + "pHost": [ + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" + ], + "sHost": [ + "clone_state ~ (DEMOTED|UNDEFINED)", + "roles == master1::worker:", + "score ~ (100|145|150)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } + ] +} diff --git a/test/sct_test_freeze_secn_site_nfs b/test/sct_test_freeze_secn_site_nfs new file mode 100755 index 00000000..f642f62d --- /dev/null +++ b/test/sct_test_freeze_secn_site_nfs @@ -0,0 +1,16 @@ +#!/bin/bash +# +# sct_test_freeze_secn_site_nfs - freeze nfs on secondary site src=${BASH_SOURCE[0]} full_path=$(readlink -f "$src") dir_path=$(dirname "$full_path") source .test_properties currSecondary="$(ssh "${node01}" 
"SAPHanaSR-showAttr --format=tester" | awk -F'/' '/score="100"/ { print $2 }' )" +currSecnWorker="$(ssh "${node01}" "SAPHanaSR-showAttr --format=tester" | awk -F'/' '/score="-12200"/ { print $2 }' )" + +echo "==== Freeze SAP HANA NFS ====" + +ssh "$currSecondary" 'iptables -I OUTPUT -p tcp -m multiport --ports 2049 -j DROP &' +ssh "$currSecnWorker" 'iptables -I OUTPUT -p tcp -m multiport --ports 2049 -j DROP' +sleep 60 +# From 7da21af0ed0c46d3f91af2b2bcc766e489c5bdc8 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 10:23:45 +0100 Subject: [PATCH 053/123] SAPHanaSR-tests-angi-ScaleOut.7 SAPHanaSR-tests-description.7: freeze_secn_site_nfs --- man-tester/SAPHanaSR-tests-angi-ScaleOut.7 | 3 +++ man-tester/SAPHanaSR-tests-description.7 | 16 ++++++++++++++++ 2 files changed, 19 insertions(+) diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 index d4e43650..b543ab23 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 @@ -37,6 +37,9 @@ Freeze HANA NFS on primary master node. \fBfreeze_prim_site_nfs\fP Freeze HANA NFS on primary site. .TP +\fBfreeze_secn_site_nfs\fP +Freeze HANA NFS on secondary site. +.TP \fBkill_prim_indexserver\fP Kill primary master indexserver, for susChkSrv.py. .TP diff --git a/man-tester/SAPHanaSR-tests-description.7 b/man-tester/SAPHanaSR-tests-description.7 index 4468358b..bdd16b0a 100644 --- a/man-tester/SAPHanaSR-tests-description.7 +++ b/man-tester/SAPHanaSR-tests-description.7 @@ -184,6 +184,22 @@ SR SFAIL and finally SOK. One takeover. One fence. .RE .PP +\fBfreeze_secn_site_nfs\fP +.RS 2 +Freeze HANA NFS on secondary site. +.br +Topology: ScaleOut. +.br +Prereq: Cluster and HANA are up and running, all good. +.br +Test: See ocf_suse_SAPHanaFilesystem(7). +.br +Expect: Primary site keeps running. +HANA secondary site restarted. +SR SFAIL and finally SOK. +No takeover. No fence. 
+.RE +.PP \fBkill_prim_indexserver\fP .RS 2 Descr: Kill primary indexserver, for susChkSrv.py. From 1c489567e70e3d635b6f819527e2de9c2322083e Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 10:45:02 +0100 Subject: [PATCH 054/123] angi tester angi-ScaleOut: flap.json flop.json flup.json - added --- test/json/angi-ScaleOut/flap.json | 37 ++++++++++++++++++++++++++++++ test/json/angi-ScaleOut/flop.json | 38 +++++++++++++++++++++++++++++++ test/json/angi-ScaleOut/flup.json | 30 ++++++++++++++++++++++++ 3 files changed, 105 insertions(+) create mode 100644 test/json/angi-ScaleOut/flap.json create mode 100644 test/json/angi-ScaleOut/flop.json create mode 100644 test/json/angi-ScaleOut/flup.json diff --git a/test/json/angi-ScaleOut/flap.json b/test/json/angi-ScaleOut/flap.json new file mode 100644 index 00000000..7afac9be --- /dev/null +++ b/test/json/angi-ScaleOut/flap.json @@ -0,0 +1,37 @@ +{ + "test": "flap", + "name": "flap - test the new test parser", + "start": "prereq10", + "steps": [ + { + "step": "prereq10", + "name": "test prerequisites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": [ + "lpt >~ 2000000000:^(20|30|1.........)$", + "lss == 4", + "srr == P", + "srHook == PRIM", + "srPoll == PRIM", + "hugo is None" + ], + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } + ] +} diff --git a/test/json/angi-ScaleOut/flop.json b/test/json/angi-ScaleOut/flop.json new file mode 100644 index 00000000..671fbfad --- /dev/null +++ b/test/json/angi-ScaleOut/flop.json @@ -0,0 +1,38 @@ +{ + "test": "flop", + "name": "flop - this test should NOT pass successfully", + "start": "prereq10", + "steps": [ + { + "step": "prereq10", + "name": "test prerequisites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 
4", + "pSite": [ + "lpt >~ 2000000000:^(20|30|1.........)$", + "lss == 4", + "srr == P", + "srHook == PRIM", + "srPoll == PRIM" + ], + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": [ + "lpt is None" + ], + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } + ] +} diff --git a/test/json/angi-ScaleOut/flup.json b/test/json/angi-ScaleOut/flup.json new file mode 100644 index 00000000..bfe0eeb0 --- /dev/null +++ b/test/json/angi-ScaleOut/flup.json @@ -0,0 +1,30 @@ +{ + "test": "flup", + "name": "flup - like nop but very short sleep only - only for checking the test engine", + "start": "prereq10", + "steps": [ + { + "step": "prereq10", + "name": "test prerequisites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } + ] +} From fd5e5ea2fcf49cc6bfef6ede494e9b24a61bf777 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 10:47:14 +0100 Subject: [PATCH 055/123] angi tester angi-ScaleOut: defaults+newComparators.json - deleted --- .../defaults+newComparators.json | 72 ------------------- 1 file changed, 72 deletions(-) delete mode 100644 test/json/angi-ScaleOut/defaults+newComparators.json diff --git a/test/json/angi-ScaleOut/defaults+newComparators.json b/test/json/angi-ScaleOut/defaults+newComparators.json deleted file mode 100644 index 30ba4d74..00000000 --- a/test/json/angi-ScaleOut/defaults+newComparators.json +++ /dev/null @@ -1,72 +0,0 @@ -{ - "checkPtr": { - "comparartorinline": [ - "alfa != dassollungleichsein", - "lpa_@@sid@@_lpt > 160000", - "beta == dassollgleichsein", - "gamma >? 
20000:(detla|epsilon)" - ], - "comparator1": [ - "key comp value", - "lpt >= 1699606819", - "lpt >= 1699606819:(20|30)", - "lss like [1-4]", - "lss ~ [1-4]", - "lss !~ [1-4]", - "srHook is None" - ], - "globalUp": [ - "topology=ScaleOut" - ], - "pHostUp": [ - "clone_state=PROMOTED", - "roles=master1:master:worker:master", - "score=150" - ], - "pSiteUp": [ - "lpt >= 1699606819", - "lss=4", - "srr=P", - "srHook=PRIM", - "srPoll=PRIM" - ], - "sSiteUp": [ - "lpt=30", - "lss=4", - "srr=S", - "srHook=SOK", - "srPoll=SOK" - ], - "sHostUp": [ - "clone_state=DEMOTED", - "roles ~ master1:master:worker:master", - "score=100" - ], - "pHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "pSiteDown": [ - "lpt=1[6-9]........" , - "lss=1" , - "srr=P" , - "srHook=PRIM" , - "srPoll=PRIM" - ], - "sSiteDown": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SFAIL", - "srPoll=SFAIL" - ], - "sHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] - } -} From e89a0e68c41b340e83b54d5a8d1d601296149ba9 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 10:54:55 +0100 Subject: [PATCH 056/123] angi tester: saphana_sr_test.py - added exceptions for file I/O --- test/saphana_sr_test.py | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/test/saphana_sr_test.py b/test/saphana_sr_test.py index 8bb760eb..2592ed8e 100755 --- a/test/saphana_sr_test.py +++ b/test/saphana_sr_test.py @@ -294,6 +294,9 @@ def read_test_file(self): except FileNotFoundError as e_file: self.message(f"ERROR: File error: {e_file}") return 1 + except (PermissionError, Exception) as e_generic: + self.message(f"ERROR: File error: {e_generic}") + return 1 if self.config['properties_file']: #print(f"read properties file {self.config['properties_file']}") try: @@ -302,6 +305,9 @@ def read_test_file(self): except FileNotFoundError as e_file: self.message(f"ERROR: File error: {e_file}") return 1 + except 
(PermissionError, Exception) as e_generic: + self.message(f"ERROR: File error: {e_generic}") + return 1 self.run['test_id'] = self.test_data['test'] self.debug("DEBUG: test_data: {}".format(str(self.test_data)), stdout=False) From 373ab15b5f2cf1fa76b76e459c7b473be12a06ff Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 10:56:54 +0100 Subject: [PATCH 057/123] git: .gitignore, test/.gitignore --- .gitignore | 2 ++ test/.gitignore | 1 + 2 files changed, 3 insertions(+) diff --git a/.gitignore b/.gitignore index 03940d33..795161d9 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,5 @@ *.tgz *.tar.gz osc +ibs +test.log diff --git a/test/.gitignore b/test/.gitignore index a8539dde..44c980a5 100644 --- a/test/.gitignore +++ b/test/.gitignore @@ -8,3 +8,4 @@ misc-* python_examples tmux testLog*.txt +.test_properties From 07014827dd381f50773f360e480136b9a97afe50 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 11:30:48 +0100 Subject: [PATCH 058/123] standby_prim_node.json: name --- test/json/angi-ScaleOut/standby_prim_node.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test/json/angi-ScaleOut/standby_prim_node.json b/test/json/angi-ScaleOut/standby_prim_node.json index 1b4937db..4b1a4dea 100644 --- a/test/json/angi-ScaleOut/standby_prim_node.json +++ b/test/json/angi-ScaleOut/standby_prim_node.json @@ -1,6 +1,6 @@ { "test": "standby_prim_node", - "name": "standby primary node (and online again)", + "name": "standby primary master node (and online again)", "start": "prereq10", "steps": [ { From 329b9d93bb9ffbe2aff583f1845f15f022a5a2ad Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 11:45:11 +0100 Subject: [PATCH 059/123] free_log_area.json kill_prim_indexserver.json kill_secn_indexserver.json: fixed name --- test/json/angi-ScaleOut/free_log_area.json | 2 +- test/json/angi-ScaleOut/kill_prim_indexserver.json | 2 +- test/json/angi-ScaleOut/kill_secn_indexserver.json | 2 +- 
test/json/angi-ScaleUp/free_log_area.json | 2 +- 4 files changed, 4 insertions(+), 4 deletions(-) diff --git a/test/json/angi-ScaleOut/free_log_area.json b/test/json/angi-ScaleOut/free_log_area.json index 451cfd56..8bb84d70 100644 --- a/test/json/angi-ScaleOut/free_log_area.json +++ b/test/json/angi-ScaleOut/free_log_area.json @@ -1,6 +1,6 @@ { "test": "free_log_area", - "name": "free log area on primary", + "name": "free hana log area on primary site", "start": "prereq10", "steps": [ { diff --git a/test/json/angi-ScaleOut/kill_prim_indexserver.json b/test/json/angi-ScaleOut/kill_prim_indexserver.json index 7bb34bbe..7e297ec3 100644 --- a/test/json/angi-ScaleOut/kill_prim_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_indexserver.json @@ -1,6 +1,6 @@ { "test": "kill_prim_indexserver", - "name": "Kill primary indexserver", + "name": "Kill primary master indexserver", "start": "prereq10", "steps": [ { diff --git a/test/json/angi-ScaleOut/kill_secn_indexserver.json b/test/json/angi-ScaleOut/kill_secn_indexserver.json index e124f9d6..734670ac 100644 --- a/test/json/angi-ScaleOut/kill_secn_indexserver.json +++ b/test/json/angi-ScaleOut/kill_secn_indexserver.json @@ -1,6 +1,6 @@ { "test": "kill_secn_indexserver", - "name": "Kill secondary indexserver", + "name": "Kill secondary master indexserver", "start": "prereq10", "steps": [ { diff --git a/test/json/angi-ScaleUp/free_log_area.json b/test/json/angi-ScaleUp/free_log_area.json index 451cfd56..8bb84d70 100644 --- a/test/json/angi-ScaleUp/free_log_area.json +++ b/test/json/angi-ScaleUp/free_log_area.json @@ -1,6 +1,6 @@ { "test": "free_log_area", - "name": "free log area on primary", + "name": "free hana log area on primary site", "start": "prereq10", "steps": [ { From 614ad03ae3fabdaf05d9e921fc4fba3f9a858130 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 11:50:58 +0100 Subject: [PATCH 060/123] restart_cluster.json restart_cluster_hana_running.json restart_cluster_turn_hana.json: fixed name --- 
test/json/angi-ScaleOut/restart_cluster.json | 2 +- test/json/angi-ScaleOut/restart_cluster_hana_running.json | 2 +- test/json/angi-ScaleOut/restart_cluster_turn_hana.json | 2 +- test/json/angi-ScaleUp/restart_cluster.json | 2 +- test/json/angi-ScaleUp/restart_cluster_hana_running.json | 2 +- test/json/angi-ScaleUp/restart_cluster_turn_hana.json | 2 +- 6 files changed, 6 insertions(+), 6 deletions(-) diff --git a/test/json/angi-ScaleOut/restart_cluster.json b/test/json/angi-ScaleOut/restart_cluster.json index 6434fe0c..1adb528e 100644 --- a/test/json/angi-ScaleOut/restart_cluster.json +++ b/test/json/angi-ScaleOut/restart_cluster.json @@ -1,6 +1,6 @@ { "test": "restart_cluster", - "name": "restart_cluster", + "name": "stop and restart cluster and hana", "start": "prereq10", "steps": [ { diff --git a/test/json/angi-ScaleOut/restart_cluster_hana_running.json b/test/json/angi-ScaleOut/restart_cluster_hana_running.json index 85dd0e3f..a8c5157f 100644 --- a/test/json/angi-ScaleOut/restart_cluster_hana_running.json +++ b/test/json/angi-ScaleOut/restart_cluster_hana_running.json @@ -1,6 +1,6 @@ { "test": "restart_cluster_hana_running", - "name": "restart_cluster_hana_running", + "name": "stop and restart cluster, keep hana running", "start": "prereq10", "steps": [ { diff --git a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json index 4ae5c4f1..d08b8008 100644 --- a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json @@ -1,6 +1,6 @@ { "test": "restart_cluster_turn_hana", - "name": "restart_cluster_turn_hana", + "name": "stop cluster, turn hana, start cluster", "start": "prereq10", "steps": [ { diff --git a/test/json/angi-ScaleUp/restart_cluster.json b/test/json/angi-ScaleUp/restart_cluster.json index 6434fe0c..1adb528e 100644 --- a/test/json/angi-ScaleUp/restart_cluster.json +++ b/test/json/angi-ScaleUp/restart_cluster.json @@ -1,6 +1,6 @@ { "test": 
"restart_cluster", - "name": "restart_cluster", + "name": "stop and restart cluster and hana", "start": "prereq10", "steps": [ { diff --git a/test/json/angi-ScaleUp/restart_cluster_hana_running.json b/test/json/angi-ScaleUp/restart_cluster_hana_running.json index 85dd0e3f..337651f1 100644 --- a/test/json/angi-ScaleUp/restart_cluster_hana_running.json +++ b/test/json/angi-ScaleUp/restart_cluster_hana_running.json @@ -1,6 +1,6 @@ { "test": "restart_cluster_hana_running", - "name": "restart_cluster_hana_running", + "name": "stop and restart cluster, keep hana_running", "start": "prereq10", "steps": [ { diff --git a/test/json/angi-ScaleUp/restart_cluster_turn_hana.json b/test/json/angi-ScaleUp/restart_cluster_turn_hana.json index 4ae5c4f1..d08b8008 100644 --- a/test/json/angi-ScaleUp/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleUp/restart_cluster_turn_hana.json @@ -1,6 +1,6 @@ { "test": "restart_cluster_turn_hana", - "name": "restart_cluster_turn_hana", + "name": "stop cluster, turn hana, start cluster", "start": "prereq10", "steps": [ { From 42f911c9d107ef6fd26d075e0ad153629af5e5b7 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 12:49:32 +0100 Subject: [PATCH 061/123] angi package: SAPHanaSR-tester.spec - test version --- SAPHanaSR-tester.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/SAPHanaSR-tester.spec b/SAPHanaSR-tester.spec index fb7094de..c1f44d6c 100644 --- a/SAPHanaSR-tester.spec +++ b/SAPHanaSR-tester.spec @@ -19,7 +19,7 @@ License: GPL-2.0 Group: Productivity/Clustering/HA AutoReqProv: on Summary: Test suite for SAPHanaSR clusters -Version: 1.2.7 +Version: 1.2.7.2 Release: 0 Url: https://www.suse.com/c/fail-safe-operation-of-sap-hana-suse-extends-its-high-availability-solution/ From 1c61dc7f00db8f968e010d33602f01bc9c2530ef Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 12:51:27 +0100 Subject: [PATCH 062/123] angi tester angi-ScaleOut: standby_secn_worker_node.json - role 
patterns to match 'slave' not 'master1' --- test/json/angi-ScaleOut/standby_secn_worker_node.json | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/test/json/angi-ScaleOut/standby_secn_worker_node.json b/test/json/angi-ScaleOut/standby_secn_worker_node.json index e3717732..9cf6611a 100644 --- a/test/json/angi-ScaleOut/standby_secn_worker_node.json +++ b/test/json/angi-ScaleOut/standby_secn_worker_node.json @@ -28,7 +28,7 @@ "sHost": "sHostUp", "sWorker": [ "clone_state == UNDEFINED", - "roles == master1:master:worker:master", + "roles == slave:slave:worker:slave", "score == -12200", "standby == on" ] @@ -46,7 +46,7 @@ "sHost": "sHostUp", "sWorker": [ "clone_state == DEMOTED", - "roles == master1:master:worker:master", + "roles == slave:slave:worker:slave", "score == -12200", "standby == off" ] From c59b635cb06f05118fd6616f572f3b4aca201965 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 12:53:15 +0100 Subject: [PATCH 063/123] angi tester: saphana_sr_test.py - allow new check-vectors for pWorker and sWorker; fixed actions online_secn_worker_node and standby_secn_worker_node --- test/saphana_sr_test.py | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/test/saphana_sr_test.py b/test/saphana_sr_test.py index 2592ed8e..bc7ff203 100755 --- a/test/saphana_sr_test.py +++ b/test/saphana_sr_test.py @@ -494,7 +494,10 @@ def process_step(self, step): self.process_topology_object(step, 'pSite', 'Site'), self.process_topology_object(step, 'sSite', 'Site'), self.process_topology_object(step, 'pHost', 'Host'), - self.process_topology_object(step, 'sHost', 'Host')) + self.process_topology_object(step, 'sHost', 'Host'), + self.process_topology_object(step, 'pWorker', 'Host'), + self.process_topology_object(step, 'sWorker', 'Host'), + ) if process_result == 0: break time.sleep(wait) @@ -678,7 +681,7 @@ def action(self, action_name): elif action_name_short in ("kill_prim_inst", "kill_prim_worker_inst", "kill_secn_inst", 
"kill_secn_worker_inst", "kill_prim_indexserver", "kill_secn_indexserver", "kill_prim_worker_indexserver", "kill_secn_worker_indexserver" , "bmt"): action_rc = self.action_on_hana(action_name) - elif action_name_short in ("ssn", "osn", "spn", "opn", "cleanup", "kill_secn_node", "kill_secn_worker_node", "kill_prim_node", "kill_prim_worker_node", "simulate_split_brain"): + elif action_name_short in ("ssn", "osn", "spn", "opn", "cleanup", "kill_secn_node", "kill_secn_worker_node", "kill_prim_node", "kill_prim_worker_node", "simulate_split_brain","standby_secn_worker_node", "online_secn_worker_node"): action_rc = self.action_on_cluster(action_name) elif action_name_short in ("sleep", "shell"): action_rc = self.action_on_os(action_name) From dba1a50a1310da8608c18fc7be799fbeab93cc52 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 13:12:25 +0100 Subject: [PATCH 064/123] ./SAPHanaSR-tests-description.7 --- man-tester/SAPHanaSR-tests-description.7 | 1 + 1 file changed, 1 insertion(+) diff --git a/man-tester/SAPHanaSR-tests-description.7 b/man-tester/SAPHanaSR-tests-description.7 index bdd16b0a..06dcd207 100644 --- a/man-tester/SAPHanaSR-tests-description.7 +++ b/man-tester/SAPHanaSR-tests-description.7 @@ -554,6 +554,7 @@ Expect: Secondary (master) node standby and finally online. HANA primary stays online. HANA secondary stopped and finally started. SR SFAIL and finally SOK. No takeover. No fencing. 
+.RE .PP \fBstandby_secn_worker_node\fP .RS 2 From 1932149eefe3ea8108adcc664434094f4e899535 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 14:20:37 +0100 Subject: [PATCH 065/123] flup.json: sWorkerUp, pWorkerUp --- test/json/angi-ScaleOut/flup.json | 18 +++++++++++------- 1 file changed, 11 insertions(+), 7 deletions(-) diff --git a/test/json/angi-ScaleOut/flup.json b/test/json/angi-ScaleOut/flup.json index bfe0eeb0..6090c693 100644 --- a/test/json/angi-ScaleOut/flup.json +++ b/test/json/angi-ScaleOut/flup.json @@ -13,18 +13,22 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sHost": "pWorkerUp", + "sHost": "sWorkerUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sHost": "pWorkerUp", + "sHost": "sWorkerUp" } ] } From 705efddafa12d97cf0f9db981abdf3eb4b1ace8f Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 14:23:10 +0100 Subject: [PATCH 066/123] flup.json nop.json: sWorkerUp, pWorkerUp --- test/json/angi-ScaleOut/flup.json | 8 ++++---- test/json/angi-ScaleOut/nop.json | 18 +++++++++++------- 2 files changed, 15 insertions(+), 11 deletions(-) diff --git a/test/json/angi-ScaleOut/flup.json b/test/json/angi-ScaleOut/flup.json index 6090c693..55b7bfa8 100644 --- a/test/json/angi-ScaleOut/flup.json +++ b/test/json/angi-ScaleOut/flup.json @@ -14,8 +14,8 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "sHost": "pWorkerUp", - "sHost": "sWorkerUp" + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { "step": "final40", @@ -27,8 +27,8 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "sHost": "pWorkerUp", - "sHost": "sWorkerUp" + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git 
a/test/json/angi-ScaleOut/nop.json b/test/json/angi-ScaleOut/nop.json index e3d9826e..fd0f16d0 100644 --- a/test/json/angi-ScaleOut/nop.json +++ b/test/json/angi-ScaleOut/nop.json @@ -14,18 +14,22 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } From fa966ad7019bac1a78969201c7f58414214435df Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 14:38:30 +0100 Subject: [PATCH 067/123] angi tester: json/angi-ScaleOut/defaults.json - added defaults for pWorkerUp and sWorkerUp --- test/json/angi-ScaleOut/defaults.json | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/test/json/angi-ScaleOut/defaults.json b/test/json/angi-ScaleOut/defaults.json index 0399d6a8..ebd779eb 100644 --- a/test/json/angi-ScaleOut/defaults.json +++ b/test/json/angi-ScaleOut/defaults.json @@ -54,6 +54,16 @@ "roles == master1::worker:", "score == 100", "standby == on" + ], + "pWorkerUp": [ + "clone_state == DEMOTED", + "roles == slave:slave:worker:slave", + "score == -12200" + ], + "sWorkerUp": [ + "clone_state == DEMOTED", + "roles == slave:slave:worker:slave", + "score == -12200" ] } } From 80510919f8016d992d8d20b36a6761f9be7f2ddb Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 14:44:01 +0100 Subject: [PATCH 068/123] block_manual_takeover.json flap.json free_log_area.json freeze_prim_master_nfs.json restart_cluster.json restart_cluster_hana_running.json restart_cluster_turn_hana.json standby_secn_worker_node.json: sWorkerUp, pWorkerUp --- .../angi-ScaleOut/block_manual_takeover.json | 18 +++++++++++------- 
test/json/angi-ScaleOut/flap.json | 18 +++++++++++------- test/json/angi-ScaleOut/free_log_area.json | 9 +++++++-- .../angi-ScaleOut/freeze_prim_master_nfs.json | 8 ++++++-- test/json/angi-ScaleOut/restart_cluster.json | 8 ++++++-- .../restart_cluster_hana_running.json | 8 ++++++-- .../restart_cluster_turn_hana.json | 8 ++++++-- .../standby_secn_worker_node.json | 8 ++++++-- 8 files changed, 59 insertions(+), 26 deletions(-) diff --git a/test/json/angi-ScaleOut/block_manual_takeover.json b/test/json/angi-ScaleOut/block_manual_takeover.json index 5896d13c..3ffd626f 100644 --- a/test/json/angi-ScaleOut/block_manual_takeover.json +++ b/test/json/angi-ScaleOut/block_manual_takeover.json @@ -13,7 +13,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { "step": "step20", @@ -28,15 +30,17 @@ "sHost": "sHostUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/flap.json b/test/json/angi-ScaleOut/flap.json index 7afac9be..4adcd2f9 100644 --- a/test/json/angi-ScaleOut/flap.json +++ b/test/json/angi-ScaleOut/flap.json @@ -20,18 +20,22 @@ ], "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git 
a/test/json/angi-ScaleOut/free_log_area.json b/test/json/angi-ScaleOut/free_log_area.json index 8bb84d70..d4976ff2 100644 --- a/test/json/angi-ScaleOut/free_log_area.json +++ b/test/json/angi-ScaleOut/free_log_area.json @@ -13,7 +13,10 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" + }, { "step": "step20", @@ -36,7 +39,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json index 2fa758a6..37b31277 100644 --- a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json @@ -13,7 +13,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { "step": "step20", @@ -84,7 +86,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/restart_cluster.json b/test/json/angi-ScaleOut/restart_cluster.json index 1adb528e..b51088d5 100644 --- a/test/json/angi-ScaleOut/restart_cluster.json +++ b/test/json/angi-ScaleOut/restart_cluster.json @@ -13,7 +13,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { "step": "final40", @@ -25,7 +27,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/restart_cluster_hana_running.json b/test/json/angi-ScaleOut/restart_cluster_hana_running.json index 
a8c5157f..566a20c4 100644 --- a/test/json/angi-ScaleOut/restart_cluster_hana_running.json +++ b/test/json/angi-ScaleOut/restart_cluster_hana_running.json @@ -13,7 +13,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { "step": "final40", @@ -25,7 +27,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json index d08b8008..8d35440b 100644 --- a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json @@ -13,7 +13,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { "step": "final40", @@ -26,7 +28,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/standby_secn_worker_node.json b/test/json/angi-ScaleOut/standby_secn_worker_node.json index 9cf6611a..d9aaba7f 100644 --- a/test/json/angi-ScaleOut/standby_secn_worker_node.json +++ b/test/json/angi-ScaleOut/standby_secn_worker_node.json @@ -13,7 +13,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { "step": "step20", @@ -61,7 +63,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } From 9ec42e13c1bf7a9b7d8c8502bd37557c7931a14a Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 14:56:31 +0100 Subject: [PATCH 069/123] 
freeze_prim_site_nfs.json kill_prim_inst.json kill_prim_node.json maintenance_cluster_turn_hana.json maintenance_with_standby_nodes.json nop-false.json standby_secn_node.json: pWorkerUp, sWorkerUp --- .../angi-ScaleOut/freeze_prim_site_nfs.json | 38 +++++++++------- test/json/angi-ScaleOut/kill_prim_inst.json | 41 +++++++++-------- test/json/angi-ScaleOut/kill_prim_node.json | 38 +++++++++------- .../maintenance_cluster_turn_hana.json | 10 +++-- .../maintenance_with_standby_nodes.json | 44 ++++++++++--------- test/json/angi-ScaleOut/nop-false.json | 18 +++++--- .../json/angi-ScaleOut/standby_secn_node.json | 41 +++++++++-------- 7 files changed, 130 insertions(+), 100 deletions(-) diff --git a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json index 477df53b..6ce9178b 100644 --- a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json @@ -13,21 +13,23 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "srr == P", "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -43,20 +45,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 300, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 300, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss ~ (1|2)", "srr ~ (P|S)", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" ], - "sSite": [ + "sSite": [ 
"lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -84,7 +86,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_inst.json b/test/json/angi-ScaleOut/kill_prim_inst.json index 85281e1c..4cf51730 100644 --- a/test/json/angi-ScaleOut/kill_prim_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_inst.json @@ -14,23 +14,26 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" + }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "comment": "sSite: srPoll could get SFAIL on scale-out", - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "comment": "sSite: srPoll could get SFAIL on scale-out", + "pSite": [ "lss ~ (1|2)", "srr == P", "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", @@ -49,20 +52,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -92,7 +95,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_node.json b/test/json/angi-ScaleOut/kill_prim_node.json index 0ee081c9..47cec024 100644 --- 
a/test/json/angi-ScaleOut/kill_prim_node.json +++ b/test/json/angi-ScaleOut/kill_prim_node.json @@ -13,22 +13,24 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -44,20 +46,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 300, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 300, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss ~ (1|2)", "srr ~ (P|S)", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -85,7 +87,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json index 54e79d46..c0587a86 100644 --- a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json @@ -1,6 +1,6 @@ { "test": "maintenance_cluster_turn_hana", - "name": "maintenance_cluster_turn_hana", + "name": "maintenance cluster turn hana", "start": "prereq10", "steps": [ { @@ -13,7 +13,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": 
"sWorkerUp", + "pWorker": "pWorkerUp" }, { "step": "final40", @@ -26,7 +28,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json index 7f605584..9103af6d 100644 --- a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json +++ b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json @@ -14,29 +14,31 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "secondary site: node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "post": "osn", - "pSite": "pSiteUp", - "sSite": "sSiteDown", + "step": "step20", + "name": "secondary site: node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "post": "osn", + "pSite": "pSiteUp", + "sSite": "sSiteDown", "pHost": "pHostUp", "sHost": "sHostDown" }, { - "step": "step30", - "name": "secondary site: node back online", - "next": "step40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": "pSiteUp", - "sSite": [ + "step": "step30", + "name": "secondary site: node back online", + "next": "step40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": "pSiteUp", + "sSite": [ "lpt == 10", "lss == 1", "srr == S", @@ -54,8 +56,8 @@ "step": "step40", "name": "end recover", "next": "step110", - "loop": 120, - "wait": 2, + "loop": 120, + "wait": 2, "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", @@ -128,7 +130,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/nop-false.json 
b/test/json/angi-ScaleOut/nop-false.json index a3c5ad48..f04048bd 100644 --- a/test/json/angi-ScaleOut/nop-false.json +++ b/test/json/angi-ScaleOut/nop-false.json @@ -16,18 +16,22 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/standby_secn_node.json b/test/json/angi-ScaleOut/standby_secn_node.json index d9247fa1..c85bc5b1 100644 --- a/test/json/angi-ScaleOut/standby_secn_node.json +++ b/test/json/angi-ScaleOut/standby_secn_node.json @@ -13,23 +13,26 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" + }, { - "step": "step20", - "name": "node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "post": "osn", - "pSite": [ + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "post": "osn", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss == 1", "srr == S", @@ -49,20 +52,20 @@ ] }, { - "step": "step30", - "name": "node back online", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "node back online", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss == 1", "srr == S", @@ 
-90,7 +93,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } From e81cad7b9200937fbea988d07c273f966824c9e7 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 15:16:49 +0100 Subject: [PATCH 070/123] flop.json freeze_prim_master_nfs.json freeze_prim_site_nfs.json freeze_secn_site_nfs.json kill_prim_indexserver.json kill_prim_worker_indexserver.json kill_prim_node.json kill_prim_worker_inst.json kill_secn_indexserver.json kill_prim_worker_node.json kill_secn_indexserver.json kill_secn_inst.json kill_secn_node.json kill_secn_worker_inst.json kill_secn_worker_node.json maintenance_cluster_turn_hana.json maintenance_with_standby_nodes.json restart_cluster_turn_hana.json standby_prim_node.json: pWorkerUp, sWorkerUp --- test/json/angi-ScaleOut/flop.json | 18 +++++---- .../angi-ScaleOut/freeze_prim_master_nfs.json | 4 +- .../angi-ScaleOut/freeze_prim_site_nfs.json | 4 +- .../angi-ScaleOut/freeze_secn_site_nfs.json | 38 ++++++++++-------- .../angi-ScaleOut/kill_prim_indexserver.json | 38 ++++++++++-------- test/json/angi-ScaleOut/kill_prim_node.json | 4 +- .../kill_prim_worker_indexserver.json | 40 ++++++++++--------- .../angi-ScaleOut/kill_prim_worker_inst.json | 38 ++++++++++-------- .../angi-ScaleOut/kill_prim_worker_node.json | 38 ++++++++++-------- .../angi-ScaleOut/kill_secn_indexserver.json | 38 ++++++++++-------- test/json/angi-ScaleOut/kill_secn_inst.json | 38 ++++++++++-------- test/json/angi-ScaleOut/kill_secn_node.json | 38 ++++++++++-------- .../angi-ScaleOut/kill_secn_worker_inst.json | 38 ++++++++++-------- .../angi-ScaleOut/kill_secn_worker_node.json | 38 ++++++++++-------- .../maintenance_cluster_turn_hana.json | 4 +- .../maintenance_with_standby_nodes.json | 4 +- .../restart_cluster_turn_hana.json | 4 +- .../json/angi-ScaleOut/standby_prim_node.json | 38 ++++++++++-------- 18 files changed, 255 insertions(+), 207 deletions(-) diff 
--git a/test/json/angi-ScaleOut/flop.json b/test/json/angi-ScaleOut/flop.json index 671fbfad..087bf988 100644 --- a/test/json/angi-ScaleOut/flop.json +++ b/test/json/angi-ScaleOut/flop.json @@ -19,20 +19,24 @@ ], "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, "pSite": [ "lpt is None" ], "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json index 37b31277..2d14d281 100644 --- a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json @@ -87,8 +87,8 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json index 6ce9178b..4464c7d4 100644 --- a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json @@ -87,8 +87,8 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "sWorker": "pWorkerUp", + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/freeze_secn_site_nfs.json b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json index 48366795..5a568ebc 100644 --- a/test/json/angi-ScaleOut/freeze_secn_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json @@ -14,21 +14,23 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": 
"pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt ~ (20|10)", "lss == 1", "srr == S", @@ -44,20 +46,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 300, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 300, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt ~ (20|10)", "lss == 4", "srr == S", @@ -86,7 +88,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_indexserver.json b/test/json/angi-ScaleOut/kill_prim_indexserver.json index 7e297ec3..16833470 100644 --- a/test/json/angi-ScaleOut/kill_prim_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_indexserver.json @@ -14,22 +14,24 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp" + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "comment": "sSite: srPoll could get SFAIL on scale-out", - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "comment": "sSite: srPoll could get SFAIL on scale-out", + "pSite": [ "lss ~ (1|2)", "srr == P", "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", @@ -48,20 +50,20 @@ ] }, 
{ - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -91,7 +93,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_node.json b/test/json/angi-ScaleOut/kill_prim_node.json index 47cec024..9caa1c93 100644 --- a/test/json/angi-ScaleOut/kill_prim_node.json +++ b/test/json/angi-ScaleOut/kill_prim_node.json @@ -88,8 +88,8 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "sWorker": "pWorkerUp", + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json index d911a350..3e772f87 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json @@ -13,22 +13,24 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss ~ (1|2)", "srr == P", "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", @@ -46,21 +48,21 @@ ] }, { - "step": 
"step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "todo2": "why do we need SFAIL for srHook?", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "todo2": "why do we need SFAIL for srHook?", + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -89,7 +91,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_worker_inst.json b/test/json/angi-ScaleOut/kill_prim_worker_inst.json index fcfa7054..f3d22ce1 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_inst.json @@ -14,22 +14,24 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss ~ (1|2)", "srr == P", "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", @@ -47,20 +49,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(30|20|10)", "srHook ~ 
(PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -90,7 +92,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "sWorker": "pWorkerUp", + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_worker_node.json b/test/json/angi-ScaleOut/kill_prim_worker_node.json index bcd4dac9..c99d6194 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_node.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_node.json @@ -13,22 +13,24 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -46,20 +48,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 240, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 240, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss ~ (1|2)", "srr ~ (P|S)", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -87,7 +89,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "sWorker": "pWorkerUp", + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_indexserver.json b/test/json/angi-ScaleOut/kill_secn_indexserver.json index 734670ac..3ebd62d3 100644 --- 
a/test/json/angi-ScaleOut/kill_secn_indexserver.json +++ b/test/json/angi-ScaleOut/kill_secn_indexserver.json @@ -13,22 +13,24 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt ~ (10|30)", "lss ~ (1|2)", "srr == S", @@ -47,20 +49,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss == 1", "srr == S", @@ -89,7 +91,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_inst.json b/test/json/angi-ScaleOut/kill_secn_inst.json index 54e9939b..5a90b67f 100644 --- a/test/json/angi-ScaleOut/kill_secn_inst.json +++ b/test/json/angi-ScaleOut/kill_secn_inst.json @@ -13,22 +13,24 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", 
"srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt ~ (10|30)", "lss ~ (1|2)", "srr == S", @@ -47,20 +49,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss ~ (1|2)", "srr == S", @@ -88,7 +90,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_node.json b/test/json/angi-ScaleOut/kill_secn_node.json index 81c108ef..661f00d0 100644 --- a/test/json/angi-ScaleOut/kill_secn_node.json +++ b/test/json/angi-ScaleOut/kill_secn_node.json @@ -13,22 +13,24 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss == 1", "srr == S", @@ -42,20 +44,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 
10", "lss ~ (1|2)", "srr == S", @@ -83,7 +85,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_worker_inst.json b/test/json/angi-ScaleOut/kill_secn_worker_inst.json index 2cf0dd96..cc1cbfa8 100644 --- a/test/json/angi-ScaleOut/kill_secn_worker_inst.json +++ b/test/json/angi-ScaleOut/kill_secn_worker_inst.json @@ -13,16 +13,18 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": "pSiteUp", - "sSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": "pSiteUp", + "sSite": [ "lpt ~ (10|30)", "lss ~ (1|2)", "srr == S", @@ -37,14 +39,14 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": "pSiteUp", - "sSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": "pSiteUp", + "sSite": [ "lpt == 10", "lss ~ (1|2)", "srr == S", @@ -68,7 +70,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_worker_node.json b/test/json/angi-ScaleOut/kill_secn_worker_node.json index d5c2acfe..e7a24da8 100644 --- a/test/json/angi-ScaleOut/kill_secn_worker_node.json +++ b/test/json/angi-ScaleOut/kill_secn_worker_node.json @@ -13,16 +13,18 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { 
- "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": "pSiteUp", - "sSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": "pSiteUp", + "sSite": [ "lpt=10", "lss=1", "srr=S", @@ -35,14 +37,14 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": "pSiteUp", - "sSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": "pSiteUp", + "sSite": [ "lpt=10", "lss=(1|2)", "srr=S", @@ -66,7 +68,9 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json index c0587a86..87036c68 100644 --- a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json @@ -29,8 +29,8 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "sWorker": "pWorkerUp", + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json index 9103af6d..b74f267e 100644 --- a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json +++ b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json @@ -131,8 +131,8 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "sWorker": "pWorkerUp", + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json index 8d35440b..e7bd9c42 
100644 --- a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json @@ -29,8 +29,8 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/standby_prim_node.json b/test/json/angi-ScaleOut/standby_prim_node.json index 4b1a4dea..308a3472 100644 --- a/test/json/angi-ScaleOut/standby_prim_node.json +++ b/test/json/angi-ScaleOut/standby_prim_node.json @@ -13,22 +13,24 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 1", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", @@ -48,20 +50,20 @@ ] }, { - "step": "step30", - "name": "takeover on secondary", - "next": "final40", - "loop": 120, - "post": "opn", - "wait": 2, - "pSite": [ + "step": "step30", + "name": "takeover on secondary", + "next": "final40", + "loop": 120, + "post": "opn", + "wait": 2, + "pSite": [ "lss == 1", "srr == P", "lpt == 10", "srHook == SWAIT", "srPoll == SFAIL" ], - "sSite": [ + "sSite": [ "lpt > 1000000000", "lss == 4", "srr == P", @@ -91,7 +93,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" + "sHost": "pHostUp", + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } From 9fdce3c70dd7dee64801e1832e2b60e2f5fba47f Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 15:21:48 +0100 Subject: [PATCH 071/123] kill_prim_inst.json kill_prim_node.json kill_prim_worker_indexserver.json: fixed final status pWorkerUp, sWorkerUp 
--- test/json/angi-ScaleOut/kill_prim_inst.json | 9 ++++----- test/json/angi-ScaleOut/kill_prim_node.json | 4 ++-- .../json/angi-ScaleOut/kill_prim_worker_indexserver.json | 4 ++-- 3 files changed, 8 insertions(+), 9 deletions(-) diff --git a/test/json/angi-ScaleOut/kill_prim_inst.json b/test/json/angi-ScaleOut/kill_prim_inst.json index 4cf51730..d6f3590d 100644 --- a/test/json/angi-ScaleOut/kill_prim_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_inst.json @@ -15,9 +15,8 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" - + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { "step": "step20", @@ -96,8 +95,8 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_node.json b/test/json/angi-ScaleOut/kill_prim_node.json index 9caa1c93..483b1302 100644 --- a/test/json/angi-ScaleOut/kill_prim_node.json +++ b/test/json/angi-ScaleOut/kill_prim_node.json @@ -88,8 +88,8 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "sWorker": "pWorkerUp", - "pWorker": "sWorkerUp" + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json index 3e772f87..6b113eae 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json @@ -92,8 +92,8 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } From 351742ab46829e733ae55c34d5cf340a171d06c1 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 15:33:24 +0100 Subject: [PATCH 072/123] freeze_prim_fs.json nop.json split_brain_prio.json standby_prim_node.json: indentation --- 
test/json/angi-ScaleUp/freeze_prim_fs.json | 30 +++++++++---------- test/json/angi-ScaleUp/nop.json | 10 +++---- test/json/angi-ScaleUp/split_brain_prio.json | 30 +++++++++---------- test/json/angi-ScaleUp/standby_prim_node.json | 30 +++++++++---------- 4 files changed, 50 insertions(+), 50 deletions(-) diff --git a/test/json/angi-ScaleUp/freeze_prim_fs.json b/test/json/angi-ScaleUp/freeze_prim_fs.json index 1735a93f..2c0cdeab 100644 --- a/test/json/angi-ScaleUp/freeze_prim_fs.json +++ b/test/json/angi-ScaleUp/freeze_prim_fs.json @@ -16,18 +16,18 @@ "sHost": "sHostUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "srr == P" , "lpt >~ 1000000000:(20|10)" , "srHook ~ (PRIM|SWAIT|SREG)" , "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", @@ -43,20 +43,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 300, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 300, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss ~ (1|2)", "srr ~ (P|S)" , "lpt >~ 1000000000:(30|20|10)" , "srHook ~ (PRIM|SWAIT|SREG)" , "srPoll ~ (PRIM|SFAIL)" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", diff --git a/test/json/angi-ScaleUp/nop.json b/test/json/angi-ScaleUp/nop.json index 872854df..767d5d02 100644 --- a/test/json/angi-ScaleUp/nop.json +++ b/test/json/angi-ScaleUp/nop.json @@ -16,11 +16,11 @@ "sHost": "sHostUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", diff --git 
a/test/json/angi-ScaleUp/split_brain_prio.json b/test/json/angi-ScaleUp/split_brain_prio.json index 0dbc112a..15be2ee0 100644 --- a/test/json/angi-ScaleUp/split_brain_prio.json +++ b/test/json/angi-ScaleUp/split_brain_prio.json @@ -16,19 +16,19 @@ "sHost": "sHostUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 4" , "srr == P" , "lpt > 1000000000" , "srHook == PRIM" , "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "srr == S", "srHook == SFAIL", @@ -41,20 +41,20 @@ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4" , "srr == P" , "lpt > 1000000000" , "srHook == PRIM" , "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss ~ (1|2)", "srr == S", diff --git a/test/json/angi-ScaleUp/standby_prim_node.json b/test/json/angi-ScaleUp/standby_prim_node.json index 95730b4c..6014411b 100644 --- a/test/json/angi-ScaleUp/standby_prim_node.json +++ b/test/json/angi-ScaleUp/standby_prim_node.json @@ -16,19 +16,19 @@ "sHost": "sHostUp" }, { - "step": "step20", - "name": "node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 1" , "srr == P" , "lpt > 1000000000" , "srHook == PRIM" , "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt ~ (30|1[6-9]........)", "lss == 4", "srr == S", @@ -48,20 +48,20 @@ ] }, { - "step": "step30", - "name": "takeover on secondary", - "next": "final40", - "loop": 120, - "post": "opn", - "wait": 2, - "pSite": [ + "step": "step30", + "name": 
"takeover on secondary", + "next": "final40", + "loop": 120, + "post": "opn", + "wait": 2, + "pSite": [ "lss == 1" , "srr == P" , "lpt == 10" , "srHook == SWAIT" , "srPoll == SFAIL" ], - "sSite": [ + "sSite": [ "lpt > 1000000000", "lss == 4", "srr == P", From db11dabc1f2250b7333cac7079d646dccd9149af Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 15:44:48 +0100 Subject: [PATCH 073/123] SAPHanaSR-tests-angi-ScaleUp.7: kill_secn_inst --- man-tester/SAPHanaSR-tests-angi-ScaleUp.7 | 3 +++ 1 file changed, 3 insertions(+) diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 index 3e8b1331..18c6a20d 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 @@ -46,6 +46,9 @@ Kill primary node. \fBkill_secn_indexserver\fP Kill secondary indexserver, for susChkSrv.py. .TP +\fBkill_secn_inst\fP +Kill secondary instance. +.TP \fBkill_secn_node\fP Kill secondary node. .TP From 4cfdc3d4dad3226fedc2a3c6283f6dce4abfeff2 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 15:57:16 +0100 Subject: [PATCH 074/123] SAPHanaSR-tests-angi-ScaleOut.7 SAPHanaSR-tests-angi-ScaleUp.7 --- man-tester/SAPHanaSR-tests-angi-ScaleOut.7 | 4 ++-- man-tester/SAPHanaSR-tests-angi-ScaleUp.7 | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 index b543ab23..c8183bb0 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleOut.7 @@ -38,7 +38,7 @@ Freeze HANA NFS on primary master node. Freeze HANA NFS on primary site. .TP \fBfreeze_secn_site_nfs\fP -Freeze HANA NFS on secondary site. +Freeze HANA NFS on secondary site (not yet implemented). .TP \fBkill_prim_indexserver\fP Kill primary master indexserver, for susChkSrv.py. @@ -77,7 +77,7 @@ Kill secondary worker node. Maintenance procedure, manually turning HANA sites. 
.TP \fBmaintenance_with_standby_nodes\fP -Maintenance procedure, standby+online secondary then standby+online primary. +Maintenance procedure, standby+online secondary then standby+online primary (not yet implemented). .TP \fBnop\fP No operation - check, wait and check again (stability check). diff --git a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 index 18c6a20d..e18ccffd 100644 --- a/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 +++ b/man-tester/SAPHanaSR-tests-angi-ScaleUp.7 @@ -62,7 +62,7 @@ Maintenance procedure, standby+online secondary then standby+online primary. No operation - check, wait and check again (stability check). .TP \fBregister_prim_cold_hana\fP -Stop cluster, do manual takeover, leave former primary down and unregistered, start cluster. +Stop cluster, do manual takeover, leave former primary down and unregistered, start cluster (not yet implemented). .TP \fBrestart_cluster_hana_running\fP Stop and restart cluster, keep HANA running. 
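The test cases listed in these man pages are driven by JSON definitions under test/json/, which the following patches edit and which a later patch in this series (081, SAPHanaSR-checkJson) learns to syntax-check. The idea can be sketched in a few lines of Python; this is only an illustration, and the minimal embedded test definition is made up for the example, not a real tester file:

```python
import json
import sys
import tempfile

def check_json(path):
    """Parse a tester definition file; return the data, or None if it is not readable/valid JSON."""
    try:
        with open(path, encoding="utf-8") as fh:
            return json.load(fh)
    except (json.JSONDecodeError, OSError) as err:
        # Report the problem the same way for syntax errors and I/O errors.
        print(f"cannot read {path}: {err}", file=sys.stderr)
        return None

# Hypothetical minimal test definition, just enough to exercise the checker.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    tmp.write('{"test": "nop", "start": "prereq10", '
              '"steps": [{"step": "prereq10", "next": "END"}]}')
data = check_json(tmp.name)
```

The SAPHanaSR-checkJson helper added below follows the same open/json.load/except pattern, with argparse option handling on top.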
From 2478bdbae7c2b1949737d826e1fd7e63ab2f2c1d Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 16:12:38 +0100 Subject: [PATCH 075/123] defaults+newComparators.json removed --- .../defaults+newComparators.json | 66 ------------------- 1 file changed, 66 deletions(-) delete mode 100644 test/json/classic-ScaleOut/defaults+newComparators.json diff --git a/test/json/classic-ScaleOut/defaults+newComparators.json b/test/json/classic-ScaleOut/defaults+newComparators.json deleted file mode 100644 index 44699c70..00000000 --- a/test/json/classic-ScaleOut/defaults+newComparators.json +++ /dev/null @@ -1,66 +0,0 @@ -{ - "checkPtr": { - "comparartorinline": [ - "alfa!=dassollungleichsein", - "lpa_@@sid@@_lpt > 160000", - "beta=dassollgleichsein" - ], - "comparatortuple": [ - ("noty", "alfa=ungleich"), - () - ], - "globalUp": [ - "topology=ScaleOut" - ], - "pHostUp": [ - "clone_state=PROMOTED", - "roles=master1:master:worker:master", - "score=150" - ], - "pSiteUp": [ - "lpt=1[6-9]........", - "lss=4", - "srr=P", - "srHook=PRIM", - "srPoll=PRIM" - ], - "sSiteUp": [ - "lpt=30", - "lss=4", - "srr=S", - "srHook=SOK", - "srPoll=SOK" - ], - "sHostUp": [ - "clone_state=DEMOTED", - "roles=master1:master:worker:master", - "score=100" - ], - "pHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "pSiteDown": [ - "lpt=1[6-9]........" 
, - "lss=1" , - "srr=P" , - "srHook=PRIM" , - "srPoll=PRIM" - ], - "sSiteDown": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SFAIL", - "srPoll=SFAIL" - ], - "sHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] - } -} From 0c025cd9318d50fe9910db3632aa5f68434bb097 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 16:13:48 +0100 Subject: [PATCH 076/123] merge --- SAPHanaSR-tester.spec | 2 +- .../maintenance_with_standby_nodes.json | 168 +++++++++--------- 2 files changed, 85 insertions(+), 85 deletions(-) diff --git a/SAPHanaSR-tester.spec b/SAPHanaSR-tester.spec index c1f44d6c..53333268 100644 --- a/SAPHanaSR-tester.spec +++ b/SAPHanaSR-tester.spec @@ -19,7 +19,7 @@ License: GPL-2.0 Group: Productivity/Clustering/HA AutoReqProv: on Summary: Test suite for SAPHanaSR clusters -Version: 1.2.7.2 +Version: 1.2.7.3 Release: 0 Url: https://www.suse.com/c/fail-safe-operation-of-sap-hana-suse-extends-its-high-availability-solution/ diff --git a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json index 7f605584..b4af2f67 100644 --- a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json +++ b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json @@ -4,19 +4,19 @@ "start": "prereq10", "todo": "expectations needs to be fixed - e.g. step20 sHostDown is wrong, because topology will also be stopped. 
roles will be ::: not master1:...", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites standby_secn_node", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites standby_secn_node", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "secondary site: node is standby", "next": "step30", @@ -25,10 +25,10 @@ "post": "osn", "pSite": "pSiteUp", "sSite": "sSiteDown", - "pHost": "pHostUp", - "sHost": "sHostDown" - }, - { + "pHost": "pHostUp", + "sHost": "sHostDown" + }, + { "step": "step30", "name": "secondary site: node back online", "next": "step40", @@ -37,43 +37,43 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt == 10", - "lss == 1", - "srr == S", - "srHook == SWAIT", - "srPoll == SFAIL" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state == DEMOTED", - "roles == master1::worker:", - "score ~ (-INFINITY|0)" - ] - }, - { - "step": "step40", - "name": "end recover", - "next": "step110", + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SWAIT", + "srPoll == SFAIL" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state == DEMOTED", + "roles == master1::worker:", + "score ~ (-INFINITY|0)" + ] + }, + { + "step": "step40", + "name": "end recover", + "next": "step110", "loop": 120, "wait": 2, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step110", - "name": "test prerequitsites standby_prim_node", - "next": "step120", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": 
"step110", + "name": "test prerequitsites standby_prim_node", + "next": "step120", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step120", "name": "primary site: node is standby", "next": "step130", @@ -86,15 +86,15 @@ "srr == S", "srHook ~ (PRIM|SOK)", "srPoll == SOK" - ], - "pHost": "pHostDown", - "sHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score ~ (100|145)" - ] - }, - { + ], + "pHost": "pHostDown", + "sHost": [ + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score ~ (100|145)" + ] + }, + { "step": "step130", "name": "takeover on secondary", "next": "final140", @@ -102,33 +102,33 @@ "post": "opn", "wait": 2, "pSite": [ - "lss == 1", - "srr == P", - "lpt == 10", - "srHook == SWAIT", - "srPoll == SFAIL" + "lss == 1", + "srr == P", + "lpt == 10", + "srHook == SWAIT", + "srPoll == SFAIL" ], "sSite": "pSiteUp", "pHost": [ - "clone_state == UNDEFINED", - "roles == master1::worker:", - "score == 150", - "standby == on" + "clone_state == UNDEFINED", + "roles == master1::worker:", + "score == 150", + "standby == on" ], "sHost": "pHostUp" - }, - { - "step": "final140", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + }, + { + "step": "final140", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } From 58876b73828efa3b4d667a2ada3d12309a4c30cf Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 16:17:27 +0100 Subject: [PATCH 077/123] faulty-syntax-flep.json - test json syntax checker with a faulty json file --- 
.../angi-ScaleOut/faulty-syntax-flep.json | 37 +++++++++++++++++++ 1 file changed, 37 insertions(+) create mode 100644 test/json/angi-ScaleOut/faulty-syntax-flep.json diff --git a/test/json/angi-ScaleOut/faulty-syntax-flep.json b/test/json/angi-ScaleOut/faulty-syntax-flep.json new file mode 100644 index 00000000..433946e4 --- /dev/null +++ b/test/json/angi-ScaleOut/faulty-syntax-flep.json @@ -0,0 +1,37 @@ +{ + "test": "flep", + "name": "flep - test the json syntax check", + "start": "prereq10", + "steps": [ + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": [ + "lpt >~ 2000000000:^(20|30|1.........)$", + "lss == 4", + "srr == P", + "srHook == PRIM", + "srPoll == PRIM", + "hugo is None" + ], + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } + ] +} From f08d75413cd920c922d861dfa07a53cda519d29e Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 16:27:30 +0100 Subject: [PATCH 078/123] maintenance_with_standby_nodes.json: fixed indentation --- .../maintenance_with_standby_nodes.json | 176 +++++------------- 1 file changed, 43 insertions(+), 133 deletions(-) diff --git a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json index a3e0094f..b705736e 100644 --- a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json +++ b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json @@ -4,8 +4,7 @@ "start": "prereq10", "todo": "expectations needs to be fixed - e.g. step20 sHostDown is wrong, because topology will also be stopped. 
roles will be ::: not master1:...", "steps": [ -<<<<<<< HEAD - { + { "step": "prereq10", "name": "test prerequitsites standby_secn_node", "next": "step20", @@ -15,9 +14,11 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" + }, + { "step": "step20", "name": "secondary site: node is standby", "next": "step30", @@ -28,8 +29,8 @@ "sSite": "sSiteDown", "pHost": "pHostUp", "sHost": "sHostDown" - }, - { + }, + { "step": "step30", "name": "secondary site: node back online", "next": "step40", @@ -38,20 +39,20 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt == 10", - "lss == 1", - "srr == S", - "srHook == SWAIT", - "srPoll == SFAIL" - ], + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SWAIT", + "srPoll == SFAIL" + ], "pHost": "pHostUp", "sHost": [ - "clone_state == DEMOTED", - "roles == master1::worker:", - "score ~ (-INFINITY|0)" - ] - }, - { + "clone_state == DEMOTED", + "roles == master1::worker:", + "score ~ (-INFINITY|0)" + ] + }, + { "step": "step40", "name": "end recover", "next": "step110", @@ -61,8 +62,8 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp" - }, - { + }, + { "step": "step110", "name": "test prerequitsites standby_prim_node", "next": "step120", @@ -73,82 +74,8 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp" - }, - { -======= - { - "step": "prereq10", - "name": "test prerequitsites standby_secn_node", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" - }, - { - "step": "step20", - "name": "secondary site: node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "post": "osn", - "pSite": "pSiteUp", - "sSite": "sSiteDown", - "pHost": "pHostUp", - "sHost": "sHostDown" - }, - { - "step": "step30", - "name": "secondary 
site: node back online", - "next": "step40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": "pSiteUp", - "sSite": [ - "lpt == 10", - "lss == 1", - "srr == S", - "srHook == SWAIT", - "srPoll == SFAIL" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state == DEMOTED", - "roles == master1::worker:", - "score ~ (-INFINITY|0)" - ] - }, - { - "step": "step40", - "name": "end recover", - "next": "step110", - "loop": 120, - "wait": 2, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" }, { - "step": "step110", - "name": "test prerequitsites standby_prim_node", - "next": "step120", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { ->>>>>>> 4cfdc3d4dad3226fedc2a3c6283f6dce4abfeff2 "step": "step120", "name": "primary site: node is standby", "next": "step130", @@ -161,15 +88,15 @@ "srr == S", "srHook ~ (PRIM|SOK)", "srPoll == SOK" - ], + ], "pHost": "pHostDown", "sHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score ~ (100|145)" - ] - }, - { + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score ~ (100|145)" + ] + }, + { "step": "step130", "name": "takeover on secondary", "next": "final140", @@ -177,23 +104,22 @@ "post": "opn", "wait": 2, "pSite": [ - "lss == 1", - "srr == P", - "lpt == 10", - "srHook == SWAIT", - "srPoll == SFAIL" + "lss == 1", + "srr == P", + "lpt == 10", + "srHook == SWAIT", + "srPoll == SFAIL" ], "sSite": "pSiteUp", "pHost": [ - "clone_state == UNDEFINED", - "roles == master1::worker:", - "score == 150", - "standby == on" + "clone_state == UNDEFINED", + "roles == master1::worker:", + "score == 150", + "standby == on" ], "sHost": "pHostUp" -<<<<<<< HEAD - }, - { + }, + { "step": "final140", "name": "end recover", "next": "END", @@ -204,25 +130,9 @@ "pSite": "sSiteUp", "sSite": "pSiteUp", "pHost": "sHostUp", - "sHost": "pHostUp" - } 
-======= - }, - { - "step": "final140", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", - "sWorker": "pWorkerUp", - "pWorker": "sWorkerUp" + "sHost": "pHostUp", + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ->>>>>>> 4cfdc3d4dad3226fedc2a3c6283f6dce4abfeff2 ] } From 5de7a2bb6624180744f7ed229f6ccd28e47b483e Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 16:30:04 +0100 Subject: [PATCH 079/123] mv faulty-checks to directory faults --- test/json/{angi-ScaleOut => faults}/faulty-syntax-flep.json | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename test/json/{angi-ScaleOut => faults}/faulty-syntax-flep.json (100%) diff --git a/test/json/angi-ScaleOut/faulty-syntax-flep.json b/test/json/faults/faulty-syntax-flep.json similarity index 100% rename from test/json/angi-ScaleOut/faulty-syntax-flep.json rename to test/json/faults/faulty-syntax-flep.json From 7fa38001313567fb4011f3fe34993f1364a3fb01 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 16:31:22 +0100 Subject: [PATCH 080/123] flap - indents --- test/json/angi-ScaleUp/flap.json | 60 ++++++++++++++++---------------- 1 file changed, 30 insertions(+), 30 deletions(-) diff --git a/test/json/angi-ScaleUp/flap.json b/test/json/angi-ScaleUp/flap.json index 7afac9be..e2bfeac5 100644 --- a/test/json/angi-ScaleUp/flap.json +++ b/test/json/angi-ScaleUp/flap.json @@ -3,35 +3,35 @@ "name": "flap - test the new test parser", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 4", - "pSite": [ - "lpt >~ 2000000000:^(20|30|1.........)$", - "lss == 4", - "srr == P", - "srHook == PRIM", - "srPoll == PRIM", - "hugo is None" - ], - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - 
"step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": [ + "lpt >~ 2000000000:^(20|30|1.........)$", + "lss == 4", + "srr == P", + "srHook == PRIM", + "srPoll == PRIM", + "hugo is None" + ], + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } From b3e692ee0ca7529ea2cb4cb6e7a6825e8ad87c72 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 16:33:08 +0100 Subject: [PATCH 081/123] SAPHanaSR-checkJson: checker for json files --- test/SAPHanaSR-checkJson | 52 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 52 insertions(+) create mode 100755 test/SAPHanaSR-checkJson diff --git a/test/SAPHanaSR-checkJson b/test/SAPHanaSR-checkJson new file mode 100755 index 00000000..4315bd57 --- /dev/null +++ b/test/SAPHanaSR-checkJson @@ -0,0 +1,52 @@ +#!/usr/bin/python3 +# pylint: disable=consider-using-f-string +# pylint: disable=fixme +# TODO: check which imports could be removed in the future (time, re, json)? +# TODO: legacy (classic) has "Sites" instead of "Site" (angi) and "Hosts" (classic/legacy) instead of "Host" (angi) --> could we set that via json files? 
+""" + SAPHanaSR-checkJson + Author: Fabian Herschel, Jan 2024 + License: GNU General Public License (GPL) + Copyright: (c) 2024 SUSE LLC +""" + +# pylint: disable=unused-import +import json +import argparse +#sys.path.insert(1, '/usr/lib/SAPHanaSR-tester') +#from saphana_sr_test import SaphanasrTest + +parser = argparse.ArgumentParser() +parser.add_argument("--file", help="specify the json file") +parser.add_argument("--quiet", help="do not output json data on success", action="store_true") +args = parser.parse_args() +json_file = None +quiet = False + +if args.file: + json_file = args.file +if args.quiet: + quiet = True + +if json_file is None: + print(f"file not specified") + exit(1) + +try: + with open(json_file, encoding="utf-8") as json_fh: + try: + json_data = (json.load(json_fh)) + except json.decoder.JSONDecodeError as e_jerr: + print(f"json error in file {json_file}: ({e_jerr})") + exit(1) +except FileNotFoundError as e_ferr: + print(f"file not found ({e_ferr})") + exit(1) +except PermissionError as e_ferr: + print(f"permission error ({e_ferr})") + exit(1) + +if not(quiet): + print(json_data) + + From 39f5da2fd171d435e5a2b85c0c60a0a8f68ea1d9 Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 16:39:08 +0100 Subject: [PATCH 082/123] maintenance_with_standby_nodes.json --- .../angi-ScaleOut/maintenance_with_standby_nodes.json | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json index b705736e..bda039cd 100644 --- a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json +++ b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json @@ -44,13 +44,13 @@ "srr == S", "srHook == SWAIT", "srPoll == SFAIL" - ], + ], "pHost": "pHostUp", "sHost": [ "clone_state == DEMOTED", "roles == master1::worker:", "score ~ (-INFINITY|0)" - ] + ] }, { "step": "step40", @@ -88,13 +88,13 @@ "srr == S", "srHook ~ (PRIM|SOK)", 
"srPoll == SOK" - ], + ], "pHost": "pHostDown", "sHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step130", From 6a6cf3d517bdde6fd78b01641ef19e73b027c01b Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 16:57:17 +0100 Subject: [PATCH 083/123] unified indentation to 13 --- .../angi-ScaleOut/block_manual_takeover.json | 64 ++++++------- test/json/angi-ScaleOut/defaults.json | 16 ++-- test/json/angi-ScaleOut/flap.json | 46 +++++----- test/json/angi-ScaleOut/flop.json | 42 ++++----- test/json/angi-ScaleOut/flup.json | 46 +++++----- test/json/angi-ScaleOut/free_log_area.json | 66 ++++++------- .../angi-ScaleOut/freeze_prim_master_nfs.json | 58 ++++++------ .../angi-ScaleOut/freeze_prim_site_nfs.json | 86 ++++++++--------- .../angi-ScaleOut/freeze_secn_site_nfs.json | 84 ++++++++--------- .../angi-ScaleOut/kill_prim_indexserver.json | 88 +++++++++--------- test/json/angi-ScaleOut/kill_prim_inst.json | 92 +++++++++---------- test/json/angi-ScaleOut/kill_prim_node.json | 86 ++++++++--------- .../kill_prim_worker_indexserver.json | 90 +++++++++--------- .../angi-ScaleOut/kill_prim_worker_inst.json | 90 +++++++++--------- .../angi-ScaleOut/kill_prim_worker_node.json | 88 +++++++++--------- .../angi-ScaleOut/kill_secn_indexserver.json | 86 ++++++++--------- test/json/angi-ScaleOut/kill_secn_inst.json | 84 ++++++++--------- test/json/angi-ScaleOut/kill_secn_node.json | 80 ++++++++-------- .../angi-ScaleOut/kill_secn_worker_inst.json | 82 ++++++++--------- .../angi-ScaleOut/kill_secn_worker_node.json | 82 ++++++++--------- .../maintenance_cluster_turn_hana.json | 50 +++++----- test/json/angi-ScaleOut/nop-false.json | 48 +++++----- test/json/angi-ScaleOut/nop.json | 48 +++++----- test/json/angi-ScaleOut/restart_cluster.json | 48 +++++----- .../restart_cluster_hana_running.json | 46 +++++----- .../restart_cluster_turn_hana.json | 50 +++++----- .../json/angi-ScaleOut/standby_prim_node.json | 86 
++++++++--------- .../json/angi-ScaleOut/standby_secn_node.json | 88 +++++++++--------- .../standby_secn_worker_node.json | 92 +++++++++---------- 29 files changed, 1006 insertions(+), 1006 deletions(-) diff --git a/test/json/angi-ScaleOut/block_manual_takeover.json b/test/json/angi-ScaleOut/block_manual_takeover.json index 3ffd626f..d41c4dee 100644 --- a/test/json/angi-ScaleOut/block_manual_takeover.json +++ b/test/json/angi-ScaleOut/block_manual_takeover.json @@ -4,43 +4,43 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "bmt", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "bmt", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 120", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" + "step": "step20", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 120", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/defaults.json b/test/json/angi-ScaleOut/defaults.json 
index ebd779eb..a18a6c08 100644 --- a/test/json/angi-ScaleOut/defaults.json +++ b/test/json/angi-ScaleOut/defaults.json @@ -2,34 +2,34 @@ "opMode": "logreplay", "srMode": "sync", "checkPtr": { - "globalUp": [ + "globalUp": [ "topology == ScaleOut" ], - "pHostUp": [ + "pHostUp": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ], - "pSiteUp": [ + "pSiteUp": [ "lpt > 1000000000", "lss == 4", "srr == P", "srHook == PRIM", "srPoll == PRIM" ], - "sSiteUp": [ + "sSiteUp": [ "lpt == 30", "lss == 4", "srr == S", "srHook == SOK", "srPoll == SOK" ], - "sHostUp": [ + "sHostUp": [ "clone_state == DEMOTED", "roles == master1:master:worker:master", "score == 100" ], - "pHostDown": [ + "pHostDown": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score == 150", @@ -42,14 +42,14 @@ "srHook == PRIM", "srPoll == PRIM" ], - "sSiteDown": [ + "sSiteDown": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" ], - "sHostDown": [ + "sHostDown": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score == 100", diff --git a/test/json/angi-ScaleOut/flap.json b/test/json/angi-ScaleOut/flap.json index 4adcd2f9..b6e6c840 100644 --- a/test/json/angi-ScaleOut/flap.json +++ b/test/json/angi-ScaleOut/flap.json @@ -4,13 +4,13 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 4", - "pSite": [ + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": [ "lpt >~ 2000000000:^(20|30|1.........)$", "lss == 4", "srr == P", @@ -18,24 +18,24 @@ "srPoll == PRIM", "hugo is None" ], - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "final40", - "name": "still 
running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/flop.json b/test/json/angi-ScaleOut/flop.json index 087bf988..c402aa63 100644 --- a/test/json/angi-ScaleOut/flop.json +++ b/test/json/angi-ScaleOut/flop.json @@ -4,39 +4,39 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 4", - "pSite": [ + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": [ "lpt >~ 2000000000:^(20|30|1.........)$", "lss == 4", "srr == P", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": [ + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": [ "lpt is None" ], - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/flup.json b/test/json/angi-ScaleOut/flup.json index 55b7bfa8..01c9c199 100644 --- a/test/json/angi-ScaleOut/flup.json +++ b/test/json/angi-ScaleOut/flup.json @@ -4,31 +4,31 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test 
prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 4", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/free_log_area.json b/test/json/angi-ScaleOut/free_log_area.json index d4976ff2..d7d553b8 100644 --- a/test/json/angi-ScaleOut/free_log_area.json +++ b/test/json/angi-ScaleOut/free_log_area.json @@ -4,44 +4,44 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell sct_test_free_log_area", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell sct_test_free_log_area", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "still running", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 60", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - 
"sHost": "sHostUp" + "step": "step20", + "name": "still running", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 60", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json index 2d14d281..534f52a9 100644 --- a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json @@ -4,18 +4,18 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell sct_test_freeze_prim_master_nfs", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell sct_test_freeze_prim_master_nfs", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { "step": "step20", @@ -36,9 +36,9 @@ "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ ], - "sHost": [ + "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" @@ -65,30 +65,30 @@ "srHook == PRIM", "srPoll ~ (SOK|PRIM)" ], - "pHost": [ + "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", "roles == 
master1::worker:" ], - "sHost": [ + "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145|150)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", - "pWorker": "sWorkerUp", - "sWorker": "pWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json index 4464c7d4..383f4182 100644 --- a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json @@ -4,91 +4,91 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell sct_test_freeze_prim_site_nfs", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell sct_test_freeze_prim_site_nfs", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "srr == P", "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 
1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ ], - "sHost": [ + "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 300, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 300, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss ~ (1|2)", "srr ~ (P|S)", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|PRIM)" ], - "pHost": [ + "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", "roles == master1::worker:" ], - "sHost": [ + "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145|150)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", "sWorker": "pWorkerUp", - "pWorker": "sWorkerUp" + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/freeze_secn_site_nfs.json b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json index 5a568ebc..6b21ad89 100644 --- a/test/json/angi-ScaleOut/freeze_secn_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json @@ -5,92 +5,92 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - 
"loop": 1, - "wait": 1, - "post": "shell sct_test_freeze_secn_site_nfs", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell sct_test_freeze_secn_site_nfs", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt ~ (20|10)", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" ], - "pHost": [ + "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ], - "sHost": [ + "sHost": [ ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 300, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 300, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt ~ (20|10)", "lss == 4", "srr == S", "srHook ~ (SOK|SWAIT)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ], - "sHost": [ + "sHost": [ "clone_state ~ (DEMOTED|UNDEFINED)", "roles == master1::worker:", "score ~ (100|145|150)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": 
"pHostUp", + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", "pWorker": "sWorkerUp", - "sWorker": "pWorkerUp" + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_indexserver.json b/test/json/angi-ScaleOut/kill_prim_indexserver.json index 16833470..3182c114 100644 --- a/test/json/angi-ScaleOut/kill_prim_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_indexserver.json @@ -4,78 +4,78 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "comment": "sSite: srPoll could get SFAIL on scale-out", - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "comment": "sSite: srPoll could get SFAIL on scale-out", + "pSite": [ "lss ~ (1|2)", "srr == P", "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", "roles == master1::worker:", "score ~ (90|70|5|0)" ], - "sHost": [ + "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", 
"score ~ (100|145)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (90|70|5)" ], - "sHost": [ + "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)", @@ -83,19 +83,19 @@ ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 360, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 360, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", "pWorker": "sWorkerUp", - "sWorker": "pWorkerUp" + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_inst.json b/test/json/angi-ScaleOut/kill_prim_inst.json index d6f3590d..a30c59f2 100644 --- a/test/json/angi-ScaleOut/kill_prim_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_inst.json @@ -4,79 +4,79 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_inst", - "todo": "allow something like pSite = @@pSite@@ or pSite = %pSite", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", 
- "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_inst", + "todo": "allow something like pSite = @@pSite@@ or pSite = %pSite", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "comment": "sSite: srPoll could get SFAIL on scale-out", - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "comment": "sSite: srPoll could get SFAIL on scale-out", + "pSite": [ "lss ~ (1|2)", "srr == P", "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", "roles == master1::worker:", "score ~ (90|70|5|0)" ], - "sHost": [ + "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles ~ master1::worker:", "score ~ (90|70|5)" ], - "sHost": [ + "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)", @@ -84,19 +84,19 @@ ] }, { - "step": 
"final40", - "name": "end recover", - "next": "END", - "loop": 360, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", - "pWorker": "sWorkerUp", - "sWorker": "pWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 360, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_node.json b/test/json/angi-ScaleOut/kill_prim_node.json index 483b1302..e8789fbe 100644 --- a/test/json/angi-ScaleOut/kill_prim_node.json +++ b/test/json/angi-ScaleOut/kill_prim_node.json @@ -4,92 +4,92 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ ], - "sHost": [ + "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == 
master1:master:worker:master", "score ~ (100|145)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 300, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 300, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss ~ (1|2)", "srr ~ (P|S)", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|PRIM)" ], - "pHost": [ + "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", "roles == master1::worker:" ], - "sHost": [ + "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145|150)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", "pWorker": "sWorkerUp", - "sWorker": "pWorkerUp" + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json index 6b113eae..5fcca77a 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json @@ -4,76 +4,76 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - 
"sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss ~ (1|2)", "srr == P", "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", "score ~ (90|70|5|0)" ], - "sHost": [ + "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "todo2": "why do we need SFAIL for srHook?", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "todo2": "why do we need SFAIL for srHook?", + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SFAIL)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "score ~ (90|70|5)" ], - "sHost": [ + "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)", @@ -81,19 +81,19 @@ ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 360, - "wait": 2, - "post": "cleanup", - 
"remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", - "pWorker": "sWorkerUp", - "sWorker": "pWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 360, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_worker_inst.json b/test/json/angi-ScaleOut/kill_prim_worker_inst.json index f3d22ce1..5fa71d19 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_inst.json @@ -4,77 +4,77 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_inst", - "todo": "allow something like pSite = @@pSite@@ or pSite = %pSite", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_inst", + "todo": "allow something like pSite = @@pSite@@ or pSite = %pSite", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss ~ (1|2)", "srr == P", "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state ~ 
(PROMOTED|DEMOTED|UNDEFINED)", "score ~ (90|70|5|0)" ], - "sHost": [ + "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (90|70|5)" ], - "sHost": [ + "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)", @@ -82,19 +82,19 @@ ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", - "sWorker": "pWorkerUp", - "pWorker": "sWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", + "sWorker": "pWorkerUp", + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_worker_node.json b/test/json/angi-ScaleOut/kill_prim_worker_node.json index c99d6194..92ca77b7 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_node.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_node.json @@ -4,94 +4,94 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 
1, - "post": "kill_prim_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 1", "srr == P", "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" ], - "pHost": [ + "pHost": [ "clone_state ~ (DEMOTED|UNDEFINED|WAITING4NODES)", "score ~ (90|70|5)" ], - "sHost": [ + "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 240, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 240, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss ~ (1|2)", "srr ~ (P|S)", "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|PRIM)" ], - "pHost": [ + "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", "roles == master1::worker:" ], - "sHost": [ + "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145|150)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 
300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", - "sWorker": "pWorkerUp", - "pWorker": "sWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", + "sWorker": "pWorkerUp", + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_indexserver.json b/test/json/angi-ScaleOut/kill_secn_indexserver.json index 3ebd62d3..1b7e80f7 100644 --- a/test/json/angi-ScaleOut/kill_secn_indexserver.json +++ b/test/json/angi-ScaleOut/kill_secn_indexserver.json @@ -4,96 +4,96 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt ~ (10|30)", "lss ~ (1|2)", "srr == S", "srHook == SFAIL", "srPoll ~ (SFAIL|SOK)" ], - "pHost": [ + "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ], - "sHost": [ + "sHost": [ "clone_state == DEMOTED", "roles == master1::worker:", "score ~ 
(-INFINITY|0)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll ~ (SFAIL|SOK)" ], - "pHost": [ + "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ], - "sHost": [ + "sHost": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score ~ (-INFINITY|0|-1)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sCCC to be the same as at test begin", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sCCC to be the same as at test begin", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_inst.json b/test/json/angi-ScaleOut/kill_secn_inst.json index 5a90b67f..e0cbc361 100644 --- a/test/json/angi-ScaleOut/kill_secn_inst.json +++ b/test/json/angi-ScaleOut/kill_secn_inst.json @@ -4,95 +4,95 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": 
"step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt ~ (10|30)", "lss ~ (1|2)", "srr == S", "srHook == SFAIL", "srPoll ~ (SFAIL|SOK)" ], - "pHost": [ + "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ], - "sHost": [ + "sHost": [ "clone_state == DEMOTED", "roles == master1::worker:", "score ~ (-INFINITY|0)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss ~ (1|2)", "srr == S", "srHook ~ (SFAIL|SWAIT)", "srPoll ~ (SFAIL|SOK)" ], - "pHost": [ + "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ], - "sHost": [ + "sHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (-INFINITY|0|-1)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 240, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 240, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", 
"pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_node.json b/test/json/angi-ScaleOut/kill_secn_node.json index 661f00d0..08747792 100644 --- a/test/json/angi-ScaleOut/kill_secn_node.json +++ b/test/json/angi-ScaleOut/kill_secn_node.json @@ -4,90 +4,90 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" ], - "pHost": [ + "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss ~ (1|2)", "srr == S", "srHook ~ (SFAIL|SWAIT)", "srPoll ~ (SFAIL|SOK)" ], - "pHost": [ + "pHost": [ "clone_state == PROMOTED", "roles == 
master1:master:worker:master", "score == 150" ], - "sHost": [ + "sHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (-INFINITY|0|-1)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_worker_inst.json b/test/json/angi-ScaleOut/kill_secn_worker_inst.json index cc1cbfa8..51b0d3a1 100644 --- a/test/json/angi-ScaleOut/kill_secn_worker_inst.json +++ b/test/json/angi-ScaleOut/kill_secn_worker_inst.json @@ -4,75 +4,75 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_worker_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_worker_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": "pSiteUp", - "sSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": "pSiteUp", + "sSite": [ "lpt ~ (10|30)", "lss ~ (1|2)", "srr == S", "srHook ~ (SFAIL|SWAIT)", "srPoll ~ (SFAIL|SOK)" ], - "pHost": "pHostUp", - "sHost": [ + "pHost": "pHostUp", + "sHost": [ "clone_state ~ (DEMOTED|UNDEFINED)", "roles == master1::worker:", 
"score ~ (-INFINITY|0)" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": "pSiteUp", - "sSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": "pSiteUp", + "sSite": [ "lpt == 10", "lss ~ (1|2)", "srr == S", "srHook ~ (SFAIL|SWAIT)", "srPoll ~ (SFAIL|SOK)" ], - "pHost": "pHostUp", - "sHost": [ + "pHost": "pHostUp", + "sHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (-INFINITY|0|-1)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_worker_node.json b/test/json/angi-ScaleOut/kill_secn_worker_node.json index e7a24da8..9a818d43 100644 --- a/test/json/angi-ScaleOut/kill_secn_worker_node.json +++ b/test/json/angi-ScaleOut/kill_secn_worker_node.json @@ -4,73 +4,73 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": 
"failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": "pSiteUp", - "sSite": [ + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": "pSiteUp", + "sSite": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" ], - "pHost": "pHostUp", - "sHost": [ + "pHost": "pHostUp", + "sHost": [ "clone_state=WAITING4NODES" ] }, { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": "pSiteUp", - "sSite": [ + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": "pSiteUp", + "sSite": [ "lpt=10", "lss=(1|2)", "srr=S", "srHook=(SFAIL|SWAIT)", "srPoll=(SFAIL|SOK)" ], - "pHost": "pHostUp", - "sHost": [ + "pHost": "pHostUp", + "sHost": [ "clone_state=(UNDEFINED|DEMOTED)", "roles=master1::worker:", "score=(-INFINITY|0|-1)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json index 87036c68..fd297ac4 100644 --- a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json @@ -4,33 +4,33 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell sct_test_maintenance_cluster_turn_hana", - "pSite": 
"pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell sct_test_maintenance_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", - "sWorker": "pWorkerUp", - "pWorker": "sWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", + "sWorker": "pWorkerUp", + "pWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/nop-false.json b/test/json/angi-ScaleOut/nop-false.json index f04048bd..ddd2eb3b 100644 --- a/test/json/angi-ScaleOut/nop-false.json +++ b/test/json/angi-ScaleOut/nop-false.json @@ -4,34 +4,34 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "global": [ + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "global": [ "topology=Nix" ], - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - 
"sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/nop.json b/test/json/angi-ScaleOut/nop.json index fd0f16d0..545b4842 100644 --- a/test/json/angi-ScaleOut/nop.json +++ b/test/json/angi-ScaleOut/nop.json @@ -4,32 +4,32 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "global": "globalUp", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "global": "globalUp", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/restart_cluster.json b/test/json/angi-ScaleOut/restart_cluster.json index b51088d5..31d4f1f5 100644 --- a/test/json/angi-ScaleOut/restart_cluster.json +++ b/test/json/angi-ScaleOut/restart_cluster.json @@ -4,32 +4,32 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": 
"final40", - "loop": 1, - "wait": 1, - "post": "shell sct_test_restart_cluster", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell sct_test_restart_cluster", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/restart_cluster_hana_running.json b/test/json/angi-ScaleOut/restart_cluster_hana_running.json index 566a20c4..fea93dee 100644 --- a/test/json/angi-ScaleOut/restart_cluster_hana_running.json +++ b/test/json/angi-ScaleOut/restart_cluster_hana_running.json @@ -4,31 +4,31 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell sct_test_restart_cluster_hana_running", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell sct_test_restart_cluster_hana_running", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "final40", - "name": "end 
recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json index e7bd9c42..db7dc289 100644 --- a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json @@ -4,33 +4,33 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell sct_test_restart_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell sct_test_restart_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", - "pWorker": "sWorkerUp", - "sWorker": "pWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git 
a/test/json/angi-ScaleOut/standby_prim_node.json b/test/json/angi-ScaleOut/standby_prim_node.json index 308a3472..7f8813ac 100644 --- a/test/json/angi-ScaleOut/standby_prim_node.json +++ b/test/json/angi-ScaleOut/standby_prim_node.json @@ -4,98 +4,98 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 1", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll == SOK" ], - "pHost": [ + "pHost": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score == 150", "standby == on" ], - "sHost": [ + "sHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score ~ (100|145)" ] }, { - "step": "step30", - "name": "takeover on secondary", - "next": "final40", - "loop": 120, - "post": "opn", - "wait": 2, - "pSite": [ + "step": "step30", + "name": "takeover on secondary", + "next": "final40", + "loop": 120, + "post": "opn", + "wait": 2, + "pSite": [ "lss == 1", "srr == P", "lpt == 10", "srHook == SWAIT", "srPoll == SFAIL" ], - "sSite": [ + "sSite": [ "lpt > 1000000000", "lss == 4", "srr == P", "srHook == PRIM", "srPoll == PRIM" ], - "pHost": [ + "pHost": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score == 150", "standby == 
on" ], - "sHost": [ + "sHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "todo": "allow pointer to prereq10", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp", - "pWorker": "sWorkerUp", - "sWorker": "pWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "todo": "allow pointer to prereq10", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp", + "pWorker": "sWorkerUp", + "sWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/standby_secn_node.json b/test/json/angi-ScaleOut/standby_secn_node.json index c85bc5b1..814673e3 100644 --- a/test/json/angi-ScaleOut/standby_secn_node.json +++ b/test/json/angi-ScaleOut/standby_secn_node.json @@ -4,47 +4,47 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" }, { - "step": "step20", - "name": "node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "post": "osn", - "pSite": [ + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "post": "osn", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" ], - "pHost": [ + "pHost": [ 
"clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ], - "sHost": [ + "sHost": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score == 100", @@ -52,50 +52,50 @@ ] }, { - "step": "step30", - "name": "node back online", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ + "step": "step30", + "name": "node back online", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4", "srr == P", "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" ], - "sSite": [ + "sSite": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SWAIT", "srPoll == SFAIL" ], - "pHost": [ + "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" ], - "sHost": [ + "sHost": [ "clone_state == DEMOTED", "roles == master1::worker:", "score ~ (-INFINITY|0)" ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": "sWorkerUp", + "pWorker": "pWorkerUp" } ] } diff --git a/test/json/angi-ScaleOut/standby_secn_worker_node.json b/test/json/angi-ScaleOut/standby_secn_worker_node.json index d9aaba7f..22bbde79 100644 --- a/test/json/angi-ScaleOut/standby_secn_worker_node.json +++ b/test/json/angi-ScaleOut/standby_secn_worker_node.json @@ -4,31 +4,31 @@ "start": "prereq10", "steps": [ { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "standby_secn_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - 
"pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "standby_secn_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + "sWorker": "sWorkerUp" }, { - "step": "step20", - "name": "node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "post": "online_secn_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": [ + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "post": "online_secn_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": [ "clone_state == UNDEFINED", "roles == slave:slave:worker:slave", "score == -12200", @@ -36,17 +36,17 @@ ] }, { - "step": "step30", - "name": "node back online", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "sWorker": [ + "step": "step30", + "name": "node back online", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "sWorker": [ "clone_state == DEMOTED", "roles == slave:slave:worker:slave", "score == -12200", @@ -54,18 +54,18 @@ ] }, { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", - "pWorker": "pWorkerUp", - "sWorker": "sWorkerUp" + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp", + "pWorker": "pWorkerUp", + 
"sWorker": "sWorkerUp" } ] } From 0eec52690cf9e8387d2cd458316416f6f8073f0d Mon Sep 17 00:00:00 2001 From: lpinne Date: Tue, 16 Jan 2024 17:24:14 +0100 Subject: [PATCH 084/123] fixed indentation, removed trailing blanks --- .../angi-ScaleOut/block_manual_takeover.json | 8 +++--- test/json/angi-ScaleOut/defaults.json | 26 +++++++++---------- test/json/angi-ScaleOut/flap.json | 6 ++--- test/json/angi-ScaleOut/flop.json | 12 ++++----- test/json/angi-ScaleOut/flup.json | 4 +-- test/json/angi-ScaleOut/free_log_area.json | 6 ++--- .../angi-ScaleOut/freeze_prim_master_nfs.json | 16 ++++++------ .../angi-ScaleOut/freeze_prim_site_nfs.json | 18 ++++++------- .../angi-ScaleOut/freeze_secn_site_nfs.json | 22 ++++++++-------- .../angi-ScaleOut/kill_prim_indexserver.json | 20 +++++++------- test/json/angi-ScaleOut/kill_prim_inst.json | 16 ++++++------ test/json/angi-ScaleOut/kill_prim_node.json | 18 ++++++------- .../kill_prim_worker_indexserver.json | 16 ++++++------ .../angi-ScaleOut/kill_prim_worker_inst.json | 16 ++++++------ .../angi-ScaleOut/kill_prim_worker_node.json | 16 ++++++------ .../angi-ScaleOut/kill_secn_indexserver.json | 20 +++++++------- test/json/angi-ScaleOut/kill_secn_inst.json | 20 +++++++------- test/json/angi-ScaleOut/kill_secn_node.json | 18 ++++++------- .../angi-ScaleOut/kill_secn_worker_inst.json | 12 ++++----- .../angi-ScaleOut/kill_secn_worker_node.json | 14 +++++----- .../maintenance_cluster_turn_hana.json | 2 +- .../maintenance_with_standby_nodes.json | 2 +- test/json/angi-ScaleOut/nop-false.json | 6 ++--- test/json/angi-ScaleOut/nop.json | 4 +-- test/json/angi-ScaleOut/restart_cluster.json | 2 +- .../restart_cluster_hana_running.json | 2 +- .../restart_cluster_turn_hana.json | 2 +- .../json/angi-ScaleOut/standby_prim_node.json | 20 +++++++------- .../json/angi-ScaleOut/standby_secn_node.json | 18 ++++++------- .../standby_secn_worker_node.json | 4 +-- 30 files changed, 183 insertions(+), 183 deletions(-) diff --git 
a/test/json/angi-ScaleOut/block_manual_takeover.json b/test/json/angi-ScaleOut/block_manual_takeover.json index d41c4dee..5a7961cc 100644 --- a/test/json/angi-ScaleOut/block_manual_takeover.json +++ b/test/json/angi-ScaleOut/block_manual_takeover.json @@ -12,8 +12,8 @@ "post": "bmt", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp", + "pHost": "pHostUp", + "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" }, @@ -26,7 +26,7 @@ "post": "sleep 120", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp" }, { @@ -37,7 +37,7 @@ "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" diff --git a/test/json/angi-ScaleOut/defaults.json b/test/json/angi-ScaleOut/defaults.json index a18a6c08..16d97fbe 100644 --- a/test/json/angi-ScaleOut/defaults.json +++ b/test/json/angi-ScaleOut/defaults.json @@ -4,66 +4,66 @@ "checkPtr": { "globalUp": [ "topology == ScaleOut" - ], + ], "pHostUp": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], + ], "pSiteUp": [ "lpt > 1000000000", "lss == 4", "srr == P", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSiteUp": [ "lpt == 30", "lss == 4", "srr == S", "srHook == SOK", "srPoll == SOK" - ], + ], "sHostUp": [ "clone_state == DEMOTED", "roles == master1:master:worker:master", "score == 100" - ], + ], "pHostDown": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score == 150", "standby == on" - ], + ], "pSiteDown": [ "lpt > 1000000000", "lss == 1", "srr == P", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSiteDown": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], + ], "sHostDown": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score == 100", "standby == on" - ], - "pWorkerUp": [ + ], + "pWorkerUp": [ "clone_state == DEMOTED", "roles == 
slave:slave:worker:slave", "score == -12200" - ], - "sWorkerUp": [ + ], + "sWorkerUp": [ "clone_state == DEMOTED", "roles == slave:slave:worker:slave", "score == -12200" - ] + ] } } diff --git a/test/json/angi-ScaleOut/flap.json b/test/json/angi-ScaleOut/flap.json index b6e6c840..923292d6 100644 --- a/test/json/angi-ScaleOut/flap.json +++ b/test/json/angi-ScaleOut/flap.json @@ -17,9 +17,9 @@ "srHook == PRIM", "srPoll == PRIM", "hugo is None" - ], + ], "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" @@ -32,7 +32,7 @@ "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" diff --git a/test/json/angi-ScaleOut/flop.json b/test/json/angi-ScaleOut/flop.json index c402aa63..a8a0551e 100644 --- a/test/json/angi-ScaleOut/flop.json +++ b/test/json/angi-ScaleOut/flop.json @@ -16,11 +16,11 @@ "srr == P", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" }, { @@ -31,11 +31,11 @@ "wait": 1, "pSite": [ "lpt is None" - ], + ], "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/flup.json b/test/json/angi-ScaleOut/flup.json index 01c9c199..8ed26730 100644 --- a/test/json/angi-ScaleOut/flup.json +++ b/test/json/angi-ScaleOut/flup.json @@ -12,7 +12,7 @@ "post": "sleep 4", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" @@ -25,7 +25,7 @@ "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" diff 
--git a/test/json/angi-ScaleOut/free_log_area.json b/test/json/angi-ScaleOut/free_log_area.json index d7d553b8..be586702 100644 --- a/test/json/angi-ScaleOut/free_log_area.json +++ b/test/json/angi-ScaleOut/free_log_area.json @@ -12,7 +12,7 @@ "post": "shell sct_test_free_log_area", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" @@ -27,7 +27,7 @@ "post": "sleep 60", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp" }, { @@ -38,7 +38,7 @@ "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" diff --git a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json index 534f52a9..73d68f2f 100644 --- a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json @@ -28,21 +28,21 @@ "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ - ], + ], "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step30", @@ -57,23 +57,23 @@ "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|PRIM)" - ], + ], "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", "roles == master1::worker:" - ], + ], "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145|150)" - ] + ] }, { "step": "final40", diff --git a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json 
index 383f4182..ec89af83 100644 --- a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json @@ -28,21 +28,21 @@ "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ - ], + ], "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step30", @@ -57,23 +57,23 @@ "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|PRIM)" - ], + ], "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", "roles == master1::worker:" - ], + ], "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145|150)" - ] + ] }, { "step": "final40", @@ -87,7 +87,7 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "sWorker": "pWorkerUp", + "sWorker": "pWorkerUp", "pWorker": "sWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/freeze_secn_site_nfs.json b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json index 6b21ad89..88802148 100644 --- a/test/json/angi-ScaleOut/freeze_secn_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json @@ -15,7 +15,7 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" }, { @@ -29,21 +29,21 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt ~ (20|10)", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], - "sHost": [ - ] + ], + "sHost": [ + ] }, { "step": "step30", @@ -58,24 +58,24 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], 
+ ], "sSite": [ "lpt ~ (20|10)", "lss == 4", "srr == S", "srHook ~ (SOK|SWAIT)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], + ], "sHost": [ "clone_state ~ (DEMOTED|UNDEFINED)", "roles == master1::worker:", "score ~ (100|145|150)" - ] + ] }, { "step": "final40", @@ -89,7 +89,7 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "pWorker": "sWorkerUp", + "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/kill_prim_indexserver.json b/test/json/angi-ScaleOut/kill_prim_indexserver.json index 3182c114..30222ca0 100644 --- a/test/json/angi-ScaleOut/kill_prim_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_indexserver.json @@ -13,7 +13,7 @@ "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", - "sHost": "sHostUp" + "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" }, @@ -30,24 +30,24 @@ "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", "roles == master1::worker:", "score ~ (90|70|5|0)" - ], + ], "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step30", @@ -62,25 +62,25 @@ "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (90|70|5)" - ], + ], "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)", "srah == T" - ] + ] }, { "step": "final40", @@ -94,7 +94,7 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "pWorker": 
"sWorkerUp", + "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/kill_prim_inst.json b/test/json/angi-ScaleOut/kill_prim_inst.json index a30c59f2..5f6773d5 100644 --- a/test/json/angi-ScaleOut/kill_prim_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_inst.json @@ -31,24 +31,24 @@ "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", "roles == master1::worker:", "score ~ (90|70|5|0)" - ], + ], "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step30", @@ -63,25 +63,25 @@ "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles ~ master1::worker:", "score ~ (90|70|5)" - ], + ], "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)", "srah == T" - ] + ] }, { "step": "final40", diff --git a/test/json/angi-ScaleOut/kill_prim_node.json b/test/json/angi-ScaleOut/kill_prim_node.json index e8789fbe..d4231d8a 100644 --- a/test/json/angi-ScaleOut/kill_prim_node.json +++ b/test/json/angi-ScaleOut/kill_prim_node.json @@ -29,21 +29,21 @@ "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ - ], + ], "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step30", @@ -58,23 +58,23 @@ "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" - ], + ], "sSite": 
[ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|PRIM)" - ], + ], "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", "roles == master1::worker:" - ], + ], "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145|150)" - ] + ] }, { "step": "final40", @@ -88,7 +88,7 @@ "sSite": "pSiteUp", "pHost": "sHostUp", "sHost": "pHostUp", - "pWorker": "sWorkerUp", + "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json index 5fcca77a..f8230853 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json @@ -29,23 +29,23 @@ "lpt >~ 1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", "score ~ (90|70|5|0)" - ], + ], "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step30", @@ -61,24 +61,24 @@ "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SFAIL)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "score ~ (90|70|5)" - ], + ], "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)", "srah == T" - ] + ] }, { "step": "final40", diff --git a/test/json/angi-ScaleOut/kill_prim_worker_inst.json b/test/json/angi-ScaleOut/kill_prim_worker_inst.json index 5fa71d19..f2b4439c 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_inst.json @@ -30,23 +30,23 @@ "lpt >~ 
1000000000:20", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", "score ~ (90|70|5|0)" - ], + ], "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step30", @@ -61,25 +61,25 @@ "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (90|70|5)" - ], + ], "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)", "srah == T" - ] + ] }, { "step": "final40", diff --git a/test/json/angi-ScaleOut/kill_prim_worker_node.json b/test/json/angi-ScaleOut/kill_prim_worker_node.json index 92ca77b7..3157a0b5 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_node.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_node.json @@ -29,23 +29,23 @@ "lpt >~ 1000000000:(20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], + ], "pHost": [ "clone_state ~ (DEMOTED|UNDEFINED|WAITING4NODES)", "score ~ (90|70|5)" - ], + ], "sHost": [ "clone_state ~ (PROMOTED|DEMOTED)", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step30", @@ -60,23 +60,23 @@ "lpt >~ 1000000000:(30|20|10)", "srHook ~ (PRIM|SWAIT|SREG)", "srPoll ~ (PRIM|SFAIL)" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook == PRIM", "srPoll ~ (SOK|PRIM)" - ], + ], "pHost": [ "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", "roles == master1::worker:" - ], + ], "sHost": [ "clone_state ~ (DEMOTED|PROMOTED)", "roles 
== master1:master:worker:master", "score ~ (100|145|150)" - ] + ] }, { "step": "final40", diff --git a/test/json/angi-ScaleOut/kill_secn_indexserver.json b/test/json/angi-ScaleOut/kill_secn_indexserver.json index 1b7e80f7..80ae4be2 100644 --- a/test/json/angi-ScaleOut/kill_secn_indexserver.json +++ b/test/json/angi-ScaleOut/kill_secn_indexserver.json @@ -11,10 +11,10 @@ "wait": 1, "post": "kill_secn_indexserver", "pSite": "pSiteUp", - "sSite": "sSiteUp", + "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" }, { @@ -29,24 +29,24 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt ~ (10|30)", "lss ~ (1|2)", "srr == S", "srHook == SFAIL", "srPoll ~ (SFAIL|SOK)" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], + ], "sHost": [ "clone_state == DEMOTED", "roles == master1::worker:", "score ~ (-INFINITY|0)" - ] + ] }, { "step": "step30", @@ -61,24 +61,24 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll ~ (SFAIL|SOK)" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], + ], "sHost": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score ~ (-INFINITY|0|-1)" - ] + ] }, { "step": "final40", diff --git a/test/json/angi-ScaleOut/kill_secn_inst.json b/test/json/angi-ScaleOut/kill_secn_inst.json index e0cbc361..da619360 100644 --- a/test/json/angi-ScaleOut/kill_secn_inst.json +++ b/test/json/angi-ScaleOut/kill_secn_inst.json @@ -11,7 +11,7 @@ "wait": 1, "post": "kill_secn_inst", "pSite": "pSiteUp", - "sSite": "sSiteUp", + "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", @@ -29,24 +29,24 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt ~ (10|30)", "lss ~ (1|2)", "srr == S", "srHook 
== SFAIL", "srPoll ~ (SFAIL|SOK)" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], + ], "sHost": [ "clone_state == DEMOTED", "roles == master1::worker:", "score ~ (-INFINITY|0)" - ] + ] }, { "step": "step30", @@ -61,24 +61,24 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt == 10", "lss ~ (1|2)", "srr == S", "srHook ~ (SFAIL|SWAIT)", "srPoll ~ (SFAIL|SOK)" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], + ], "sHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (-INFINITY|0|-1)" - ] + ] }, { "step": "final40", @@ -91,7 +91,7 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/kill_secn_node.json b/test/json/angi-ScaleOut/kill_secn_node.json index 08747792..495bd61e 100644 --- a/test/json/angi-ScaleOut/kill_secn_node.json +++ b/test/json/angi-ScaleOut/kill_secn_node.json @@ -14,7 +14,7 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" }, { @@ -29,19 +29,19 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ] + ] }, { "step": "step30", @@ -56,24 +56,24 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt == 10", "lss ~ (1|2)", "srr == S", "srHook ~ (SFAIL|SWAIT)", "srPoll ~ (SFAIL|SOK)" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], + ], "sHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (-INFINITY|0|-1)" - ] + ] }, { "step": "final40", @@ 
-86,7 +86,7 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/kill_secn_worker_inst.json b/test/json/angi-ScaleOut/kill_secn_worker_inst.json index 51b0d3a1..8d7f8cf7 100644 --- a/test/json/angi-ScaleOut/kill_secn_worker_inst.json +++ b/test/json/angi-ScaleOut/kill_secn_worker_inst.json @@ -14,7 +14,7 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" }, { @@ -30,13 +30,13 @@ "srr == S", "srHook ~ (SFAIL|SWAIT)", "srPoll ~ (SFAIL|SOK)" - ], + ], "pHost": "pHostUp", "sHost": [ "clone_state ~ (DEMOTED|UNDEFINED)", "roles == master1::worker:", "score ~ (-INFINITY|0)" - ] + ] }, { "step": "step30", @@ -52,13 +52,13 @@ "srr == S", "srHook ~ (SFAIL|SWAIT)", "srPoll ~ (SFAIL|SOK)" - ], + ], "pHost": "pHostUp", "sHost": [ "clone_state ~ (UNDEFINED|DEMOTED)", "roles == master1::worker:", "score ~ (-INFINITY|0|-1)" - ] + ] }, { "step": "final40", @@ -71,7 +71,7 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/kill_secn_worker_node.json b/test/json/angi-ScaleOut/kill_secn_worker_node.json index 9a818d43..fb5e7863 100644 --- a/test/json/angi-ScaleOut/kill_secn_worker_node.json +++ b/test/json/angi-ScaleOut/kill_secn_worker_node.json @@ -14,7 +14,7 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" }, { @@ -30,11 +30,11 @@ "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], + ], "pHost": "pHostUp", "sHost": [ - "clone_state=WAITING4NODES" - ] + "clone_state=WAITING4NODES" + ] }, { "step": "step30", @@ -50,13 +50,13 @@ "srr=S", "srHook=(SFAIL|SWAIT)", "srPoll=(SFAIL|SOK)" - ], + ], "pHost": "pHostUp", "sHost": [ "clone_state=(UNDEFINED|DEMOTED)", 
"roles=master1::worker:", "score=(-INFINITY|0|-1)" - ] + ] }, { "step": "final40", @@ -69,7 +69,7 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" } ] diff --git a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json index fd297ac4..18947dfb 100644 --- a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json @@ -12,7 +12,7 @@ "post": "shell sct_test_maintenance_cluster_turn_hana", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" diff --git a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json index bda039cd..268eb68f 100644 --- a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json +++ b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json @@ -13,7 +13,7 @@ "post": "ssn", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" diff --git a/test/json/angi-ScaleOut/nop-false.json b/test/json/angi-ScaleOut/nop-false.json index ddd2eb3b..3d8385b0 100644 --- a/test/json/angi-ScaleOut/nop-false.json +++ b/test/json/angi-ScaleOut/nop-false.json @@ -15,10 +15,10 @@ ], "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "sWorker": "sWorkerUp", - "pWorker": "pWorkerUp" + "pWorker": "pWorkerUp" }, { "step": "final40", @@ -28,7 +28,7 @@ "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" diff --git a/test/json/angi-ScaleOut/nop.json b/test/json/angi-ScaleOut/nop.json index 545b4842..79d4264d 100644 --- 
a/test/json/angi-ScaleOut/nop.json +++ b/test/json/angi-ScaleOut/nop.json @@ -13,7 +13,7 @@ "global": "globalUp", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" @@ -26,7 +26,7 @@ "wait": 1, "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" diff --git a/test/json/angi-ScaleOut/restart_cluster.json b/test/json/angi-ScaleOut/restart_cluster.json index 31d4f1f5..aaedc7b4 100644 --- a/test/json/angi-ScaleOut/restart_cluster.json +++ b/test/json/angi-ScaleOut/restart_cluster.json @@ -12,7 +12,7 @@ "post": "shell sct_test_restart_cluster", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" diff --git a/test/json/angi-ScaleOut/restart_cluster_hana_running.json b/test/json/angi-ScaleOut/restart_cluster_hana_running.json index fea93dee..445f7d1f 100644 --- a/test/json/angi-ScaleOut/restart_cluster_hana_running.json +++ b/test/json/angi-ScaleOut/restart_cluster_hana_running.json @@ -12,7 +12,7 @@ "post": "shell sct_test_restart_cluster_hana_running", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" diff --git a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json index db7dc289..1a84090b 100644 --- a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json @@ -12,7 +12,7 @@ "post": "shell sct_test_restart_cluster_turn_hana", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" diff --git a/test/json/angi-ScaleOut/standby_prim_node.json 
b/test/json/angi-ScaleOut/standby_prim_node.json index 7f8813ac..511c125c 100644 --- a/test/json/angi-ScaleOut/standby_prim_node.json +++ b/test/json/angi-ScaleOut/standby_prim_node.json @@ -12,9 +12,9 @@ "post": "spn", "pSite": "pSiteUp", "sSite": "sSiteUp", - "pHost": "pHostUp", + "pHost": "pHostUp", "sHost": "sHostUp", - "pWorker": "pWorkerUp", + "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" }, { @@ -29,25 +29,25 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll == SOK" - ], + ], "pHost": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score == 150", "standby == on" - ], + ], "sHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score ~ (100|145)" - ] + ] }, { "step": "step30", @@ -62,25 +62,25 @@ "lpt == 10", "srHook == SWAIT", "srPoll == SFAIL" - ], + ], "sSite": [ "lpt > 1000000000", "lss == 4", "srr == P", "srHook == PRIM", "srPoll == PRIM" - ], + ], "pHost": [ "clone_state == UNDEFINED", "roles == master1::worker:", "score == 150", "standby == on" - ], + ], "sHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ] + ] }, { "step": "final40", diff --git a/test/json/angi-ScaleOut/standby_secn_node.json b/test/json/angi-ScaleOut/standby_secn_node.json index 814673e3..30acac27 100644 --- a/test/json/angi-ScaleOut/standby_secn_node.json +++ b/test/json/angi-ScaleOut/standby_secn_node.json @@ -11,7 +11,7 @@ "wait": 1, "post": "ssn", "pSite": "pSiteUp", - "sSite": "sSiteUp", + "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", "sWorker": "sWorkerUp", @@ -31,25 +31,25 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], + ], "sHost": [ "clone_state == UNDEFINED", 
"roles == master1::worker:", "score == 100", "standby == on" - ] + ] }, { "step": "step30", @@ -64,24 +64,24 @@ "lpt > 1000000000", "srHook == PRIM", "srPoll == PRIM" - ], + ], "sSite": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SWAIT", "srPoll == SFAIL" - ], + ], "pHost": [ "clone_state == PROMOTED", "roles == master1:master:worker:master", "score == 150" - ], + ], "sHost": [ "clone_state == DEMOTED", "roles == master1::worker:", "score ~ (-INFINITY|0)" - ] + ] }, { "step": "final40", diff --git a/test/json/angi-ScaleOut/standby_secn_worker_node.json b/test/json/angi-ScaleOut/standby_secn_worker_node.json index 22bbde79..95faadcb 100644 --- a/test/json/angi-ScaleOut/standby_secn_worker_node.json +++ b/test/json/angi-ScaleOut/standby_secn_worker_node.json @@ -11,7 +11,7 @@ "wait": 1, "post": "standby_secn_worker_node", "pSite": "pSiteUp", - "sSite": "sSiteUp", + "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", "pWorker": "pWorkerUp", @@ -42,7 +42,7 @@ "loop": 120, "wait": 2, "todo": "pHost+sHost to check site-name", - "pSite": "pSiteUp", + "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", From acdaa5b81dfac60f84756dd3f19d3e34adf04478 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 18:50:24 +0100 Subject: [PATCH 085/123] fix_indent: tool to align json indent of lines --- test/fix_indent | 10 ++++++++++ 1 file changed, 10 insertions(+) create mode 100644 test/fix_indent diff --git a/test/fix_indent b/test/fix_indent new file mode 100644 index 00000000..27408956 --- /dev/null +++ b/test/fix_indent @@ -0,0 +1,10 @@ + +file=restart_cluster.json + +awk '{ p=0 } + /^ {3}[^ ]/ { p=1 } + /^ {7}[^ ]/ { p=1 } + /^ {11}[^ ]/ { p=1 } + /^ {15}[^ ]/ { p=1 } + { if(p==1) {printf " "} + {print $0} }' "$file" > "$file".ident From 96f21bbe0c94a45b5d09faed1ef19e91ffa28149 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 18:55:02 +0100 Subject: [PATCH 086/123] test/json/angi-ScaleUp - fix 
indents of lines --- .../angi-ScaleUp/block_manual_takeover.json | 60 +++---- test/json/angi-ScaleUp/block_sr.json | 68 +++---- .../block_sr_and_freeze_prim_fs.json | 134 +++++++------- test/json/angi-ScaleUp/defaults.json | 72 ++++---- .../angi-ScaleUp/demo_kill_prim_inst.json | 132 +++++++------- test/json/angi-ScaleUp/flop.json | 40 ++--- test/json/angi-ScaleUp/flup.json | 36 ++-- test/json/angi-ScaleUp/free_log_area.json | 70 ++++---- test/json/angi-ScaleUp/freeze_prim_fs.json | 148 +++++++-------- .../angi-ScaleUp/kill_prim_indexserver.json | 128 ++++++------- test/json/angi-ScaleUp/kill_prim_inst.json | 132 +++++++------- test/json/angi-ScaleUp/kill_prim_node.json | 124 ++++++------- .../angi-ScaleUp/kill_secn_indexserver.json | 126 ++++++------- test/json/angi-ScaleUp/kill_secn_inst.json | 124 ++++++------- test/json/angi-ScaleUp/kill_secn_node.json | 124 ++++++------- .../maintenance_cluster_turn_hana.json | 50 +++--- .../maintenance_with_standby_nodes.json | 168 +++++++++--------- test/json/angi-ScaleUp/nop.json | 46 ++--- test/json/angi-ScaleUp/restart_cluster.json | 48 ++--- .../restart_cluster_hana_running.json | 48 ++--- .../restart_cluster_turn_hana.json | 50 +++--- test/json/angi-ScaleUp/split_brain_prio.json | 144 +++++++-------- test/json/angi-ScaleUp/standby_prim_node.json | 160 ++++++++--------- test/json/angi-ScaleUp/standby_secn_node.json | 156 ++++++++-------- 24 files changed, 1194 insertions(+), 1194 deletions(-) diff --git a/test/json/angi-ScaleUp/block_manual_takeover.json b/test/json/angi-ScaleUp/block_manual_takeover.json index 4a66bcff..41863d25 100644 --- a/test/json/angi-ScaleUp/block_manual_takeover.json +++ b/test/json/angi-ScaleUp/block_manual_takeover.json @@ -3,40 +3,40 @@ "name": "blocked manual takeover", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "bmt", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - 
"sHost": "sHostUp" - }, - { - "step": "step20", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 120", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "bmt", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 120", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/block_sr.json b/test/json/angi-ScaleUp/block_sr.json index 59220fc8..25b448be 100644 --- a/test/json/angi-ScaleUp/block_sr.json +++ b/test/json/angi-ScaleUp/block_sr.json @@ -3,43 +3,43 @@ "name": "block sr and check SFAIL attribute; unblock to recover", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell sct_test_block_sap_hana_sr", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "check SFAIL", - "next": "final40", - "loop": 30, - "wait": 2, - "post": "shell sct_test_unblock_sap_hana_sr", - "pSite": "pSiteUp", + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell sct_test_block_sap_hana_sr", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "check SFAIL", + "next": "final40", + 
"loop": 30, + "wait": 2, + "post": "shell sct_test_unblock_sap_hana_sr", + "pSite": "pSiteUp", "sSite": [ "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], - "pHost": "pHostUp" - }, - { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 120, - "wait": 2, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + ], + "pHost": "pHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 120, + "wait": 2, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/block_sr_and_freeze_prim_fs.json b/test/json/angi-ScaleUp/block_sr_and_freeze_prim_fs.json index 0ee48e5d..3175e217 100644 --- a/test/json/angi-ScaleUp/block_sr_and_freeze_prim_fs.json +++ b/test/json/angi-ScaleUp/block_sr_and_freeze_prim_fs.json @@ -3,88 +3,88 @@ "name": "block_sr_and_freeze_prim_fs", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell sct_test_block_sap_hana_sr", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "check SFAIL", - "next": "step30", - "loop": 30, - "wait": 2, - "post": "shell sct_test_freeze_prim_fs", - "pSite": "pSiteUp", + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell sct_test_block_sap_hana_sr", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "check SFAIL", + "next": "step30", + "loop": 30, + "wait": 2, + "post": "shell sct_test_freeze_prim_fs", + "pSite": "pSiteUp", "sSite": [ "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], - "pHost": "pHostUp" - }, - { - "step": "step30", - "name": "wait with frozen fs", - "next": "step35", - "loop": 30, - "wait": 2, - "post": "sleep 120", - 
"pSite": "pSiteUp", + ], + "pHost": "pHostUp" + }, + { + "step": "step30", + "name": "wait with frozen fs", + "next": "step35", + "loop": 30, + "wait": 2, + "post": "sleep 120", + "pSite": "pSiteUp", "sSite": [ "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], - "pHost": "pHostUp" - }, - { - "step": "step35", - "name": "unfreeze fs", - "next": "step37", - "loop": 30, - "wait": 2, - "post": "shell sct_test_unfreeze_prim_fs", - "pSite": "pSiteUp", + ], + "pHost": "pHostUp" + }, + { + "step": "step35", + "name": "unfreeze fs", + "next": "step37", + "loop": 30, + "wait": 2, + "post": "shell sct_test_unfreeze_prim_fs", + "pSite": "pSiteUp", "sSite": [ "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], - "pHost": "pHostUp" - }, - { - "step": "step37", - "name": "unblock sr", - "next": "final40", - "loop": 30, - "wait": 2, - "post": "shell sct_test_unblock_sap_hana_sr", - "pSite": "pSiteUp", + ], + "pHost": "pHostUp" + }, + { + "step": "step37", + "name": "unblock sr", + "next": "final40", + "loop": 30, + "wait": 2, + "post": "shell sct_test_unblock_sap_hana_sr", + "pSite": "pSiteUp", "sSite": [ "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], - "pHost": "pHostUp" - }, - { - "step": "final40", - "name": "recovery", - "next": "END", - "loop": 120, - "wait": 2, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + ], + "pHost": "pHostUp" + }, + { + "step": "final40", + "name": "recovery", + "next": "END", + "loop": 120, + "wait": 2, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/defaults.json b/test/json/angi-ScaleUp/defaults.json index 7c383408..041dd4a7 100644 --- a/test/json/angi-ScaleUp/defaults.json +++ b/test/json/angi-ScaleUp/defaults.json @@ -2,71 +2,71 @@ "opMode": "logreplay", "srMode": "sync", "checkPtr": { - "globalUp": [ - "topology == ScaleUp" - ], - "pHostUp": [ - "clone_state == PROMOTED", - "roles == 
master1:master:worker:master", - "score == 150" - ], - "pSiteUp": [ + "globalUp": [ + "topology == ScaleUp" + ], + "pHostUp": [ + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" + ], + "pSiteUp": [ "lpt > 1000000000", "lss == 4", "srr == P", "srHook == PRIM", "srPoll == PRIM" - ], - "pSiteUpDemo": [ + ], + "pSiteUpDemo": [ "lpt > 1000000000", "lss == 4", "srr == P", "srHook == PRIM", "srPoll == PRIM" - ], - "sSiteUp": [ + ], + "sSiteUp": [ "lpt == 30", "lss == 4", "srr == S", "srHook == SOK", "srPoll == SOK" - ], - "sSiteUpDemo": [ + ], + "sSiteUpDemo": [ "lpt == 30", "lss == 4", "srr == S", "srPoll == SOK" - ], - "sHostUp": [ - "clone_state == DEMOTED", - "roles == master1:master:worker:master", - "score == 100" - ], - "pHostDown": [ - "clone_state == UNDEFINED" , - "roles == master1::worker:" , - "score == 150" , - "standby == on" - ], + ], + "sHostUp": [ + "clone_state == DEMOTED", + "roles == master1:master:worker:master", + "score == 100" + ], + "pHostDown": [ + "clone_state == UNDEFINED" , + "roles == master1::worker:" , + "score == 150" , + "standby == on" + ], "pSiteDown": [ "lpt > 1000000000", "lss == 1" , "srr == P" , "srHook == PRIM" , "srPoll == PRIM" - ], - "sSiteDown": [ + ], + "sSiteDown": [ "lpt == 10", "lss == 1", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], - "sHostDown": [ - "clone_state == UNDEFINED" , - "roles == master1::worker:" , - "score == 100" , - "standby == on" - ] + ], + "sHostDown": [ + "clone_state == UNDEFINED" , + "roles == master1::worker:" , + "score == 100" , + "standby == on" + ] } } diff --git a/test/json/angi-ScaleUp/demo_kill_prim_inst.json b/test/json/angi-ScaleUp/demo_kill_prim_inst.json index 1c81c9f5..74b565fc 100644 --- a/test/json/angi-ScaleUp/demo_kill_prim_inst.json +++ b/test/json/angi-ScaleUp/demo_kill_prim_inst.json @@ -3,21 +3,21 @@ "name": "Kill primary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequisites", - "next": 
"step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_inst", - "todo": "allow something like pSite == @@pSite@@ or pSite == %pSite", - "todo1": "allow something like lss>2, lpt>10000, score! == 123", - "pSite": "pSiteUpDemo", - "sSite": "sSiteUpDemo", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequisites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_inst", + "todo": "allow something like pSite == @@pSite@@ or pSite == %pSite", + "todo1": "allow something like lss>2, lpt>10000, score! == 123", + "pSite": "pSiteUpDemo", + "sSite": "sSiteUpDemo", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -29,26 +29,26 @@ "lpt >~ 1000000000:20" , "srHook ~ (PRIM|SWAIT|SREG)" , "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll == SOK" - ], - "pHost": [ - "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)" , - "roles == master1::worker:" , - "score ~ (90|5|0)" - ], - "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)" , - "roles == master1:master:worker:master" , - "score ~ (100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)" , + "roles == master1::worker:" , + "score ~ (90|5|0)" + ], + "sHost": [ + "clone_state ~ (PROMOTED|DEMOTED)" , + "roles == master1:master:worker:master" , + "score ~ (100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -56,43 +56,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 1" , - "srr == P" , - "lpt >~ 1000000000:(30|20|10)" , - "srHook ~ (PRIM|SWAIT|SREG)" , - "srPoll == PRIM" - ], + "lss == 1" , + "srr == P" , + "lpt >~ 1000000000:(30|20|10)" , + "srHook ~ (PRIM|SWAIT|SREG)" , + "srPoll == PRIM" + ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll == SOK" - ], - "pHost": [ - "clone_state ~ 
(UNDEFINED|DEMOTED)" , - "roles == master1::worker:" , - "score ~ (90|5)" - ], - "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)" , - "roles == master1:master:worker:master" , - "score ~ (100|145)" , - "srah == T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll == SOK" + ], + "pHost": [ + "clone_state ~ (UNDEFINED|DEMOTED)" , + "roles == master1::worker:" , + "score ~ (90|5)" + ], + "sHost": [ + "clone_state ~ (DEMOTED|PROMOTED)" , + "roles == master1:master:worker:master" , + "score ~ (100|145)" , + "srah == T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/flop.json b/test/json/angi-ScaleUp/flop.json index 671fbfad..5a5a5442 100644 --- a/test/json/angi-ScaleUp/flop.json +++ b/test/json/angi-ScaleUp/flop.json @@ -3,36 +3,36 @@ "name": "flop - this test should NOT pass successfully", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 4", - "pSite": [ + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": [ "lpt >~ 2000000000:^(20|30|1.........)$", "lss == 4", "srr == P", "srHook == PRIM", "srPoll == PRIM" - ], - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + ], + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, 
"wait": 1, - "pSite": [ + "pSite": [ "lpt is None" - ], - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + ], + "sSite": "sSiteUp", + "pHost": "pHostUp", + " sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/flup.json b/test/json/angi-ScaleUp/flup.json index bfe0eeb0..30fb9374 100644 --- a/test/json/angi-ScaleUp/flup.json +++ b/test/json/angi-ScaleUp/flup.json @@ -3,28 +3,28 @@ "name": "flup - like nop but very short sleep only - only for checking the test engine", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 4", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/free_log_area.json b/test/json/angi-ScaleUp/free_log_area.json index 8bb84d70..ab2ceb68 100644 --- a/test/json/angi-ScaleUp/free_log_area.json +++ b/test/json/angi-ScaleUp/free_log_area.json @@ -3,40 +3,40 @@ "name": "free hana log area on primary site", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell sct_test_free_log_area", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "still running", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 60", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - 
"sHost": "sHostUp" - }, - { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell sct_test_free_log_area", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "still running", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 60", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/freeze_prim_fs.json b/test/json/angi-ScaleUp/freeze_prim_fs.json index 2c0cdeab..c50565d6 100644 --- a/test/json/angi-ScaleUp/freeze_prim_fs.json +++ b/test/json/angi-ScaleUp/freeze_prim_fs.json @@ -3,88 +3,88 @@ "name": "freeze sap hana fs on primary master node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell sct_test_freeze_prim_fs", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell sct_test_freeze_prim_fs", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "srr == P" , "lpt >~ 1000000000:(20|10)" , "srHook ~ (PRIM|SWAIT|SREG)" , "srPoll == PRIM" - ], - 
"sSite": [ + ], + "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], - "pHost": [ - ], - "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == master1:master:worker:master" , - "score ~ (100|145)" - ] - }, - { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 300, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ - "lss ~ (1|2)", - "srr ~ (P|S)" , - "lpt >~ 1000000000:(30|20|10)" , - "srHook ~ (PRIM|SWAIT|SREG)" , - "srPoll ~ (PRIM|SFAIL)" - ], - "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll ~ (SOK|PRIM)" - ], - "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)" , - "roles == master1::worker:" - ], - "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)" , - "roles == master1:master:worker:master" , - "score ~ (100|145|150)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + ], + "pHost": [ + ], + "sHost": [ + "clone_state ~ (PROMOTED|DEMOTED)", + "roles == master1:master:worker:master" , + "score ~ (100|145)" + ] + }, + { + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 300, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ + "lss ~ (1|2)", + "srr ~ (P|S)" , + "lpt >~ 1000000000:(30|20|10)" , + "srHook ~ (PRIM|SWAIT|SREG)" , + "srPoll ~ (PRIM|SFAIL)" + ], + "sSite": [ + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll ~ (SOK|PRIM)" + ], + "pHost": [ + "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)" , + "roles == master1::worker:" + ], + "sHost": [ + "clone_state ~ (DEMOTED|PROMOTED)" , + "roles == master1:master:worker:master" , + "score ~ (100|145|150)" + ] + }, + { + "step": "final40", + "name": 
"end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/kill_prim_indexserver.json b/test/json/angi-ScaleUp/kill_prim_indexserver.json index 625dea85..9e192ae5 100644 --- a/test/json/angi-ScaleUp/kill_prim_indexserver.json +++ b/test/json/angi-ScaleUp/kill_prim_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill primary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,25 +27,25 @@ "lpt >~ 1000000000:20" , "srHook ~ (PRIM|SWAIT|SREG)" , "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:(30|10)", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)" - ], - "pHost": [ - "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)" , - "roles == master1::worker:" , - "score ~ (90|5|0)" - ], - "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)" , - "roles == master1:master:worker:master" , - "score ~ (100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)" , + "roles == master1::worker:" , + "score ~ (90|5|0)" + ], + "sHost": [ + "clone_state ~ (PROMOTED|DEMOTED)" , + "roles == master1:master:worker:master" , + "score ~ (100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -53,43 +53,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 1" , - "srr == P" , - "lpt >~ 
1000000000:(30|20|10)" , - "srHook ~ (PRIM|SWAIT|SREG)" , - "srPoll == PRIM" - ], + "lss == 1" , + "srr == P" , + "lpt >~ 1000000000:(30|20|10)" , + "srHook ~ (PRIM|SWAIT|SREG)" , + "srPoll == PRIM" + ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll == SOK" - ], - "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)" , - "roles == master1::worker:" , - "score ~ (90|5)" - ], - "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)" , - "roles == master1:master:worker:master" , - "score ~ (100|145)" , - "srah == T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll == SOK" + ], + "pHost": [ + "clone_state ~ (UNDEFINED|DEMOTED)" , + "roles == master1::worker:" , + "score ~ (90|5)" + ], + "sHost": [ + "clone_state ~ (DEMOTED|PROMOTED)" , + "roles == master1:master:worker:master" , + "score ~ (100|145)" , + "srah == T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/kill_prim_inst.json b/test/json/angi-ScaleUp/kill_prim_inst.json index 93223555..ee40663b 100644 --- a/test/json/angi-ScaleUp/kill_prim_inst.json +++ b/test/json/angi-ScaleUp/kill_prim_inst.json @@ -3,21 +3,21 @@ "name": "Kill primary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequisites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_inst", - "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", - "todo1": "allow something like lss>2, lpt>10000, 
score!=123", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequisites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_inst", + "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", + "todo1": "allow something like lss>2, lpt>10000, score!=123", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -29,26 +29,26 @@ "lpt >~ 1000000000:20" , "srHook ~ (PRIM|SWAIT|SREG)" , "srPoll == PRIM" - ], + ], "sSite": [ "lpt >~ 1000000000:30", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (PRIM|SOK)" - ], - "pHost": [ - "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)" , - "roles == master1::worker:" , - "score ~ (90|5|0)" - ], - "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)" , - "roles == master1:master:worker:master" , - "score ~ (100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)" , + "roles == master1::worker:" , + "score ~ (90|5|0)" + ], + "sHost": [ + "clone_state ~ (PROMOTED|DEMOTED)" , + "roles == master1:master:worker:master" , + "score ~ (100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -56,43 +56,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 1" , - "srr == P" , - "lpt >~ 1000000000:(30|20|10)" , - "srHook ~ (PRIM|SWAIT|SREG)" , - "srPoll == PRIM" - ], + "lss == 1" , + "srr == P" , + "lpt >~ 1000000000:(30|20|10)" , + "srHook ~ (PRIM|SWAIT|SREG)" , + "srPoll == PRIM" + ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll == SOK" - ], - "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)" , - "roles == master1::worker:" , - "score ~ (90|5)" - ], - "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)" , - "roles == master1:master:worker:master" , - "score ~ (100|145)" , - "srah == T" - 
] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll == SOK" + ], + "pHost": [ + "clone_state ~ (UNDEFINED|DEMOTED)" , + "roles == master1::worker:" , + "score ~ (90|5)" + ], + "sHost": [ + "clone_state ~ (DEMOTED|PROMOTED)" , + "roles == master1:master:worker:master" , + "score ~ (100|145)" , + "srah == T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/kill_prim_node.json b/test/json/angi-ScaleUp/kill_prim_node.json index 160f480c..f0802493 100644 --- a/test/json/angi-ScaleUp/kill_prim_node.json +++ b/test/json/angi-ScaleUp/kill_prim_node.json @@ -3,19 +3,19 @@ "name": "Kill primary master node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -26,26 +26,26 @@ "lpt ~ (1[6-9]........|20|10)" , "srHook ~ (PRIM|SWAIT|SREG)" , "srPoll == PRIM" - ], + ], "sSite": [ "lpt ~ (1[6-9]........|30)", "lss == 4", "srr ~ (S|P)", "srHook ~ (PRIM|SOK)", "srPoll ~ (SOK|SFAIL)" - ], - "pHost": [ - "clone_state is None", - "roles is None", - 
"score is None" - ], - "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == master1:master:worker:master" , - "score ~ (100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state is None", + "roles is None", + "score is None" + ], + "sHost": [ + "clone_state ~ (PROMOTED|DEMOTED)", + "roles == master1:master:worker:master" , + "score ~ (100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -53,41 +53,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss ~ (1|2)", - "srr ~ (P|S)" , - "lpt ~ (1[6-9]........|30|20|10)" , - "srHook ~ (PRIM|SWAIT|SREG)" , - "srPoll ~ (PRIM|SFAIL)" - ], + "lss ~ (1|2)", + "srr ~ (P|S)" , + "lpt ~ (1[6-9]........|30|20|10)" , + "srHook ~ (PRIM|SWAIT|SREG)" , + "srPoll ~ (PRIM|SFAIL)" + ], "sSite": [ - "lpt ~ (1[6-9]........|30)", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll ~ (SOK|PRIM)" - ], - "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)" , - "roles == master1::worker:" - ], - "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)" , - "roles == master1:master:worker:master" , - "score ~ (100|145|150)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt ~ (1[6-9]........|30)", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll ~ (SOK|PRIM)" + ], + "pHost": [ + "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)" , + "roles == master1::worker:" + ], + "sHost": [ + "clone_state ~ (DEMOTED|PROMOTED)" , + "roles == master1:master:worker:master" , + "score ~ (100|145|150)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } 
diff --git a/test/json/angi-ScaleUp/kill_secn_indexserver.json b/test/json/angi-ScaleUp/kill_secn_indexserver.json index d1939c74..f6fbfa3f 100644 --- a/test/json/angi-ScaleUp/kill_secn_indexserver.json +++ b/test/json/angi-ScaleUp/kill_secn_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill secondary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt > 1000000000" , "srHook == PRIM" , "srPoll == PRIM" - ], + ], "sSite": [ "lpt ~ (10|30)", "lss ~ (1|2)", "srr == S", "srHook == SFAIL", "srPoll ~ (SFAIL|SOK)" - ], - "pHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ], - "sHost": [ - "clone_state == DEMOTED" , - "roles == master1::worker:" , - "score ~ (-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ], + "sHost": [ + "clone_state == DEMOTED" , + "roles == master1::worker:" , + "score ~ (-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,42 +54,42 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 4" , - "srr == P" , - "lpt > 1000000000" , - "srHook == PRIM" , - "srPoll == PRIM" - ], + "lss == 4" , + "srr == P" , + "lpt > 1000000000" , + "srHook == PRIM" , + "srPoll == PRIM" + ], "sSite": [ - "lpt == 10", - "lss == 1", - "srr == S", - "srHook == SFAIL", - "srPoll ~ (SFAIL|SOK)" - ], - "pHost": [ - "clone_state == 
PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ], - "sHost": [ - "clone_state == UNDEFINED" , - "roles == master1::worker:" , - "score ~ (-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 240, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sCCC to be the same as at test begin", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SFAIL", + "srPoll ~ (SFAIL|SOK)" + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ], + "sHost": [ + "clone_state == UNDEFINED" , + "roles == master1::worker:" , + "score ~ (-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 240, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sCCC to be the same as at test begin", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/kill_secn_inst.json b/test/json/angi-ScaleUp/kill_secn_inst.json index a15a8588..66b68f6d 100644 --- a/test/json/angi-ScaleUp/kill_secn_inst.json +++ b/test/json/angi-ScaleUp/kill_secn_inst.json @@ -3,19 +3,19 @@ "name": "Kill secondary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt > 1000000000" , "srHook == PRIM" , "srPoll == PRIM" - ], + ], "sSite": [ "lpt ~ 
(10|30)", "lss ~ (1|2)", "srr == S", "srHook ~ (SFAIL|SWAIT)", "srPoll ~ (SFAIL|SOK)" - ], - "pHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ], - "sHost": [ - "clone_state == DEMOTED" , - "roles == master1::worker:" , - "score ~ (-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ], + "sHost": [ + "clone_state == DEMOTED" , + "roles == master1::worker:" , + "score ~ (-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,41 +54,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 4" , - "srr == P" , - "lpt > 1000000000" , - "srHook == PRIM" , - "srPoll == PRIM" - ], + "lss == 4" , + "srr == P" , + "lpt > 1000000000" , + "srHook == PRIM" , + "srPoll == PRIM" + ], "sSite": [ - "lpt == 10", - "lss ~ (1|2)", - "srr == S", - "srHook ~ (SFAIL|SWAIT)", - "srPoll ~ (SFAIL|SOK)" - ], - "pHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ], - "sHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)" , - "roles == master1::worker:" , - "score ~ (-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 240, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt == 10", + "lss ~ (1|2)", + "srr == S", + "srHook ~ (SFAIL|SWAIT)", + "srPoll ~ (SFAIL|SOK)" + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ], + "sHost": [ + "clone_state ~ (UNDEFINED|DEMOTED)" , + "roles == master1::worker:" , + "score ~ (-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 240, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git 
a/test/json/angi-ScaleUp/kill_secn_node.json b/test/json/angi-ScaleUp/kill_secn_node.json index c8a12b47..18221292 100644 --- a/test/json/angi-ScaleUp/kill_secn_node.json +++ b/test/json/angi-ScaleUp/kill_secn_node.json @@ -3,19 +3,19 @@ "name": "Kill secondary master node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,25 +27,25 @@ "lpt > 1000000000" , "srHook == PRIM" , "srPoll == PRIM" - ], + ], "sSite": [ "lpt == 10", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], - "pHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ], - "sHost": [ - "clone_state is None", - "roles is None" , - "score is None" - ] - }, - { + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ], + "sHost": [ + "clone_state is None", + "roles is None" , + "score is None" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -53,41 +53,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 4" , - "srr == P" , - "lpt > 1000000000" , - "srHook == PRIM" , - "srPoll == PRIM" - ], + "lss == 4" , + "srr == P" , + "lpt > 1000000000" , + "srHook == PRIM" , + "srPoll == PRIM" + ], "sSite": [ - "lpt == 10", - "lss ~ (1|2)", - "srr == S", - "srHook ~ (SFAIL|SWAIT)", - "srPoll ~ (SFAIL|SOK)" - ], - "pHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ], - "sHost": [ - "clone_state ~ 
(UNDEFINED|DEMOTED)" , - "roles == master1::worker:" , - "score ~ (-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt == 10", + "lss ~ (1|2)", + "srr == S", + "srHook ~ (SFAIL|SWAIT)", + "srPoll ~ (SFAIL|SOK)" + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ], + "sHost": [ + "clone_state ~ (UNDEFINED|DEMOTED)" , + "roles == master1::worker:" , + "score ~ (-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/maintenance_cluster_turn_hana.json b/test/json/angi-ScaleUp/maintenance_cluster_turn_hana.json index 54e79d46..2c821542 100644 --- a/test/json/angi-ScaleUp/maintenance_cluster_turn_hana.json +++ b/test/json/angi-ScaleUp/maintenance_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "maintenance_cluster_turn_hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell sct_test_maintenance_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell sct_test_maintenance_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { 
+ "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/maintenance_with_standby_nodes.json b/test/json/angi-ScaleUp/maintenance_with_standby_nodes.json index 32788213..6b064b31 100644 --- a/test/json/angi-ScaleUp/maintenance_with_standby_nodes.json +++ b/test/json/angi-ScaleUp/maintenance_with_standby_nodes.json @@ -3,19 +3,19 @@ "name": "standby+online secondary then standby+online primary", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites ssn", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites ssn", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "secondary site: node is standby", "next": "step30", @@ -24,10 +24,10 @@ "post": "osn", "pSite": "pSiteUp", "sSite": "sSiteDown", - "pHost": "pHostUp", - "sHost": "sHostDown" - }, - { + "pHost": "pHostUp", + "sHost": "sHostDown" + }, + { "step": "step30", "name": "secondary site: node back online", "next": "step40", @@ -36,43 +36,43 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt == 10", - "lss == 1", - "srr == S", - "srHook == SWAIT", - "srPoll == SFAIL" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state == DEMOTED" , - "roles == master1::worker:" , - "score ~ (-INFINITY|0)" - ] - }, - { - "step": "step40", - "name": "sr up and running again", - "next": "step110", + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SWAIT", + "srPoll == SFAIL" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state == DEMOTED" , 
+ "roles == master1::worker:" , + "score ~ (-INFINITY|0)" + ] + }, + { + "step": "step40", + "name": "sr up and running again", + "next": "step110", "loop": 120, "wait": 2, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step110", - "name": "test prerequitsites spn", - "next": "step120", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step110", + "name": "test prerequitsites spn", + "next": "step120", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step120", "name": "primary site: node is standby", "next": "step130", @@ -85,15 +85,15 @@ "srr == S", "srHook ~ (PRIM|SOK)", "srPoll == SOK" - ], - "pHost": "pHostDown", - "sHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score ~ (100|145)" - ] - }, - { + ], + "pHost": "pHostDown", + "sHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score ~ (100|145)" + ] + }, + { "step": "step130", "name": "takeover on secondary", "next": "final140", @@ -101,33 +101,33 @@ "post": "opn", "wait": 2, "pSite": [ - "lss == 1" , - "srr == P" , - "lpt == 10" , - "srHook ~ (SWAIT|SFAIL)" , - "srPoll == SFAIL" + "lss == 1" , + "srr == P" , + "lpt == 10" , + "srHook ~ (SWAIT|SFAIL)" , + "srPoll == SFAIL" ], "sSite": "pSiteUp", "pHost": [ - "clone_state == UNDEFINED" , - "roles == master1::worker:" , - "score == 150" , - "standby == on" + "clone_state == UNDEFINED" , + "roles == master1::worker:" , + "score == 150" , + "standby == on" ], "sHost": "pHostUp" - }, - { - "step": "final140", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", 
- "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + }, + { + "step": "final140", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/nop.json b/test/json/angi-ScaleUp/nop.json index 767d5d02..6403c51e 100644 --- a/test/json/angi-ScaleUp/nop.json +++ b/test/json/angi-ScaleUp/nop.json @@ -3,28 +3,28 @@ "name": "no operation - check, wait and check again (stability check)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/restart_cluster.json b/test/json/angi-ScaleUp/restart_cluster.json index 1adb528e..43885d60 100644 --- a/test/json/angi-ScaleUp/restart_cluster.json +++ b/test/json/angi-ScaleUp/restart_cluster.json @@ -3,29 +3,29 @@ "name": "stop and restart cluster and hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell sct_test_restart_cluster", - "pSite": "pSiteUp", - "sSite": 
"sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell sct_test_restart_cluster", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/restart_cluster_hana_running.json b/test/json/angi-ScaleUp/restart_cluster_hana_running.json index 337651f1..bd543890 100644 --- a/test/json/angi-ScaleUp/restart_cluster_hana_running.json +++ b/test/json/angi-ScaleUp/restart_cluster_hana_running.json @@ -3,29 +3,29 @@ "name": "stop and restart cluster, keep hana_running", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell sct_test_restart_cluster_hana_running", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell sct_test_restart_cluster_hana_running", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": 
"pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/restart_cluster_turn_hana.json b/test/json/angi-ScaleUp/restart_cluster_turn_hana.json index d08b8008..cf719a26 100644 --- a/test/json/angi-ScaleUp/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleUp/restart_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "stop cluster, turn hana, start cluster", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell sct_test_restart_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell sct_test_restart_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/split_brain_prio.json b/test/json/angi-ScaleUp/split_brain_prio.json index 15be2ee0..3e9a675d 100644 --- a/test/json/angi-ScaleUp/split_brain_prio.json +++ b/test/json/angi-ScaleUp/split_brain_prio.json @@ -3,86 +3,86 @@ "name": "split brain with prio fencing to simulate fence of secondary", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "simulate_split_brain", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - 
"sHost": "sHostUp" - }, - { - "step": "step20", - "name": "failure detected", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "simulate_split_brain", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "failure detected", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 4" , "srr == P" , "lpt > 1000000000" , "srHook == PRIM" , "srPoll == PRIM" - ], - "sSite": [ + ], + "sSite": [ "lpt == 10", "srr == S", "srHook == SFAIL", "srPoll == SFAIL" - ], - "pHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ] - }, - { - "step": "step30", - "name": "begin recover", - "next": "final40", - "loop": 120, - "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ - "lss == 4" , - "srr == P" , - "lpt > 1000000000" , - "srHook == PRIM" , - "srPoll == PRIM" - ], - "sSite": [ - "lpt == 10", - "lss ~ (1|2)", - "srr == S", - "srHook ~ (SFAIL|SWAIT)", - "srPoll ~ (SFAIL|SOK)" - ], - "pHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ], - "sHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)" , - "roles == master1::worker:" , - "score ~ (-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ] + }, + { + "step": "step30", + "name": "begin recover", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ + "lss == 4" , + "srr == P" , + "lpt > 1000000000" , + "srHook == PRIM" , + "srPoll == PRIM" + ], + "sSite": [ + "lpt == 10", + 
"lss ~ (1|2)", + "srr == S", + "srHook ~ (SFAIL|SWAIT)", + "srPoll ~ (SFAIL|SOK)" + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ], + "sHost": [ + "clone_state ~ (UNDEFINED|DEMOTED)" , + "roles == master1::worker:" , + "score ~ (-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/standby_prim_node.json b/test/json/angi-ScaleUp/standby_prim_node.json index 6014411b..a21f8ad6 100644 --- a/test/json/angi-ScaleUp/standby_prim_node.json +++ b/test/json/angi-ScaleUp/standby_prim_node.json @@ -3,95 +3,95 @@ "name": "standby primary node (and online again)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "pSite": [ + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "pSite": [ "lss == 1" , "srr == P" , "lpt > 1000000000" , "srHook == PRIM" , "srPoll == PRIM" - ], - "sSite": [ + ], + "sSite": [ "lpt ~ (30|1[6-9]........)", "lss == 4", "srr == S", "srHook ~ (PRIM|SOK)", "srPoll == SOK" - ], - "pHost": [ - "clone_state == UNDEFINED" , - "roles == master1::worker:" , - "score == 150" , - "standby == on" - ], - "sHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score ~ (100|145)" - ] - }, - { - "step": 
"step30", - "name": "takeover on secondary", - "next": "final40", - "loop": 120, - "post": "opn", - "wait": 2, - "pSite": [ - "lss == 1" , - "srr == P" , - "lpt == 10" , - "srHook == SWAIT" , - "srPoll == SFAIL" - ], - "sSite": [ - "lpt > 1000000000", - "lss == 4", - "srr == P", - "srHook == PRIM", - "srPoll == PRIM" - ], - "pHost": [ - "clone_state == UNDEFINED" , - "roles == master1::worker:" , - "score == 150" , - "standby == on" - ], - "sHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "todo": "allow pointer to prereq10", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + ], + "pHost": [ + "clone_state == UNDEFINED" , + "roles == master1::worker:" , + "score == 150" , + "standby == on" + ], + "sHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score ~ (100|145)" + ] + }, + { + "step": "step30", + "name": "takeover on secondary", + "next": "final40", + "loop": 120, + "post": "opn", + "wait": 2, + "pSite": [ + "lss == 1" , + "srr == P" , + "lpt == 10" , + "srHook == SWAIT" , + "srPoll == SFAIL" + ], + "sSite": [ + "lpt > 1000000000", + "lss == 4", + "srr == P", + "srHook == PRIM", + "srPoll == PRIM" + ], + "pHost": [ + "clone_state == UNDEFINED" , + "roles == master1::worker:" , + "score == 150" , + "standby == on" + ], + "sHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "todo": "allow pointer to prereq10", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleUp/standby_secn_node.json b/test/json/angi-ScaleUp/standby_secn_node.json index 6c50bbf7..514de4ad 100644 
--- a/test/json/angi-ScaleUp/standby_secn_node.json +++ b/test/json/angi-ScaleUp/standby_secn_node.json @@ -3,94 +3,94 @@ "name": "standby secondary node (and online again)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "node is standby", - "next": "step30", - "loop": 120, - "wait": 2, - "post": "osn", - "pSite": [ + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "node is standby", + "next": "step30", + "loop": 120, + "wait": 2, + "post": "osn", + "pSite": [ + "lss == 4" , + "srr == P" , + "lpt > 1000000000" , + "srHook == PRIM" , + "srPoll == PRIM" + ], + "sSite": [ + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SFAIL", + "srPoll == SFAIL" + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ], + "sHost": [ + "clone_state == UNDEFINED" , + "roles == master1::worker:" , + "score == 100" , + "standby == on" + ] + }, + { + "step": "step30", + "name": "node back online", + "next": "final40", + "loop": 120, + "wait": 2, + "todo": "pHost+sHost to check site-name", + "pSite": [ "lss == 4" , "srr == P" , "lpt > 1000000000" , "srHook == PRIM" , "srPoll == PRIM" - ], - "sSite": [ + ], + "sSite": [ "lpt == 10", "lss == 1", "srr == S", - "srHook == SFAIL", + "srHook == SWAIT", "srPoll == SFAIL" - ], - "pHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ], - "sHost": [ - "clone_state == UNDEFINED" , - "roles == master1::worker:" , - "score == 100" , - "standby == on" - ] - }, - { - "step": "step30", - "name": "node back online", - 
"next": "final40", + ], + "pHost": [ + "clone_state == PROMOTED" , + "roles == master1:master:worker:master" , + "score == 150" + ], + "sHost": [ + "clone_state == DEMOTED" , + "roles == master1::worker:" , + "score ~ (-INFINITY|0)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", "loop": 120, "wait": 2, - "todo": "pHost+sHost to check site-name", - "pSite": [ - "lss == 4" , - "srr == P" , - "lpt > 1000000000" , - "srHook == PRIM" , - "srPoll == PRIM" - ], - "sSite": [ - "lpt == 10", - "lss == 1", - "srr == S", - "srHook == SWAIT", - "srPoll == SFAIL" - ], - "pHost": [ - "clone_state == PROMOTED" , - "roles == master1:master:worker:master" , - "score == 150" - ], - "sHost": [ - "clone_state == DEMOTED" , - "roles == master1::worker:" , - "score ~ (-INFINITY|0)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } From 4789e1121c27529784c2ad1f1c6c9a32d2924d2b Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 18:56:24 +0100 Subject: [PATCH 087/123] test/json/classic-ScaleOut - fix indents of lines --- .../block_manual_takeover.json | 60 +++---- test/json/classic-ScaleOut/defaults.json | 66 +++---- test/json/classic-ScaleOut/free_log_area.json | 70 ++++---- .../kill_prim_indexserver.json | 128 ++++++------- .../json/classic-ScaleOut/kill_prim_inst.json | 132 +++++++------- .../json/classic-ScaleOut/kill_prim_node.json | 118 ++++++------ .../kill_prim_worker_indexserver.json | 124 ++++++------- .../kill_prim_worker_inst.json | 130 +++++++------- .../kill_prim_worker_node.json | 122 ++++++------- .../kill_secn_indexserver.json | 126 ++++++------- .../json/classic-ScaleOut/kill_secn_inst.json | 124 ++++++------- 
.../json/classic-ScaleOut/kill_secn_node.json | 114 ++++++------ .../kill_secn_worker_inst.json | 94 +++++----- .../kill_secn_worker_node.json | 90 +++++----- .../maintenance_cluster_turn_hana.json | 50 +++--- .../maintenance_with_standby_nodes.json | 168 +++++++++--------- test/json/classic-ScaleOut/nop-false.json | 42 ++--- test/json/classic-ScaleOut/nop.json | 38 ++-- .../classic-ScaleOut/restart_cluster.json | 48 ++--- .../restart_cluster_hana_running.json | 48 ++--- .../restart_cluster_turn_hana.json | 50 +++--- .../classic-ScaleOut/standby_prim_node.json | 130 +++++++------- .../classic-ScaleOut/standby_secn_node.json | 126 ++++++------- 23 files changed, 1099 insertions(+), 1099 deletions(-) diff --git a/test/json/classic-ScaleOut/block_manual_takeover.json b/test/json/classic-ScaleOut/block_manual_takeover.json index 4a66bcff..41863d25 100644 --- a/test/json/classic-ScaleOut/block_manual_takeover.json +++ b/test/json/classic-ScaleOut/block_manual_takeover.json @@ -3,40 +3,40 @@ "name": "blocked manual takeover", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "bmt", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 120", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "bmt", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 120", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": 
"END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/defaults.json b/test/json/classic-ScaleOut/defaults.json index aabce016..fb1840a6 100644 --- a/test/json/classic-ScaleOut/defaults.json +++ b/test/json/classic-ScaleOut/defaults.json @@ -2,58 +2,58 @@ "opMode": "logreplay", "srMode": "sync", "checkPtr": { - "globalUp": [ - "topology=ScaleOut" - ], - "pHostUp": [ - "clone_state=PROMOTED", - "roles=master1:master:worker:master", - "score=150" - ], - "pSiteUp": [ + "globalUp": [ + "topology=ScaleOut" + ], + "pHostUp": [ + "clone_state=PROMOTED", + "roles=master1:master:worker:master", + "score=150" + ], + "pSiteUp": [ "lpt=1[6-9]........", "lss=4", "srr=P", "srHook=PRIM", "srPoll=PRIM" - ], - "sSiteUp": [ + ], + "sSiteUp": [ "lpt=30", "lss=4", - "srr=S", + "srr=S", "srHook=SOK", "srPoll=SOK" - ], - "sHostUp": [ - "clone_state=DEMOTED", - "roles=master1:master:worker:master", - "score=100" - ], - "pHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], + ], + "sHostUp": [ + "clone_state=DEMOTED", + "roles=master1:master:worker:master", + "score=100" + ], + "pHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], "pSiteDown": [ "lpt=1[6-9]........" 
, "lss=1" , "srr=P" , "srHook=PRIM" , "srPoll=PRIM" - ], - "sSiteDown": [ + ], + "sSiteDown": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "sHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] + ], + "sHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=100" , + "standby=on" + ] } } diff --git a/test/json/classic-ScaleOut/free_log_area.json b/test/json/classic-ScaleOut/free_log_area.json index 3de2d553..f1708d42 100644 --- a/test/json/classic-ScaleOut/free_log_area.json +++ b/test/json/classic-ScaleOut/free_log_area.json @@ -3,40 +3,40 @@ "name": "free log area on primary", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell test_free_log_area", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "still running", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 60", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell test_free_log_area", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "still running", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 60", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git 
a/test/json/classic-ScaleOut/kill_prim_indexserver.json b/test/json/classic-ScaleOut/kill_prim_indexserver.json index ebe08863..47e0ed1a 100644 --- a/test/json/classic-ScaleOut/kill_prim_indexserver.json +++ b/test/json/classic-ScaleOut/kill_prim_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill primary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -28,26 +28,26 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(90|70|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(90|70|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -55,43 +55,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - 
"srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 360, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|SFAIL)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 360, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/kill_prim_inst.json b/test/json/classic-ScaleOut/kill_prim_inst.json index cd31c9d7..78260e10 100644 --- a/test/json/classic-ScaleOut/kill_prim_inst.json +++ b/test/json/classic-ScaleOut/kill_prim_inst.json @@ -3,21 +3,21 @@ "name": "Kill primary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_inst", - "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", - "todo1": "allow something like lss>2, lpt>10000, score!=123", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_inst", + "todo": "allow something like 
pSite=@@pSite@@ or pSite=%pSite", + "todo1": "allow something like lss>2, lpt>10000, score!=123", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -30,26 +30,26 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(90|70|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(90|70|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -57,43 +57,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 360, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + 
"srPoll=(SOK|SFAIL)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 360, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/kill_prim_node.json b/test/json/classic-ScaleOut/kill_prim_node.json index 5df8781f..cad372b6 100644 --- a/test/json/classic-ScaleOut/kill_prim_node.json +++ b/test/json/classic-ScaleOut/kill_prim_node.json @@ -3,19 +3,19 @@ "name": "Kill primary master node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,23 +27,23 @@ "lpt=(1[6-9]........|20|10)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=(S|P)", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -51,41 +51,41 @@ "wait": 2, "todo": "pHost+sHost to check 
site-name", "pSite": [ - "lss=(1|2)", - "srr=(P|S)" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=(PRIM|SFAIL)" - ], + "lss=(1|2)", + "srr=(P|S)" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=(PRIM|SFAIL)" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|PRIM)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED|WAITING4NODES)" , - "roles=master1::worker:" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145|150)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|PRIM)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED|WAITING4NODES)" , + "roles=master1::worker:" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145|150)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/kill_prim_worker_indexserver.json b/test/json/classic-ScaleOut/kill_prim_worker_indexserver.json index dbc8b46c..dc6caba4 100644 --- a/test/json/classic-ScaleOut/kill_prim_worker_indexserver.json +++ b/test/json/classic-ScaleOut/kill_prim_worker_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill primary worker indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_indexserver", - "pSite": 
"pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,25 +27,25 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "score=(90|70|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "score=(90|70|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,42 +54,42 @@ "todo": "pHost+sHost to check site-name", "todo2": "why do we need SFAIL for srHook?", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=(PRIM|SFAIL)", - "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 360, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": 
"pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=(PRIM|SFAIL)", + "srPoll=(SOK|SFAIL)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 360, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/kill_prim_worker_inst.json b/test/json/classic-ScaleOut/kill_prim_worker_inst.json index da4900be..36c1232d 100644 --- a/test/json/classic-ScaleOut/kill_prim_worker_inst.json +++ b/test/json/classic-ScaleOut/kill_prim_worker_inst.json @@ -3,21 +3,21 @@ "name": "Kill primary worker instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_inst", - "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", - "todo1": "allow something like lss>2, lpt>10000, score!=123", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_inst", + "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", + "todo1": "allow something like lss>2, lpt>10000, score!=123", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -29,25 +29,25 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - 
"clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "score=(90|70|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "score=(90|70|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -55,43 +55,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:", - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|SFAIL)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:", + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git 
a/test/json/classic-ScaleOut/kill_prim_worker_node.json b/test/json/classic-ScaleOut/kill_prim_worker_node.json index 01250295..663372a7 100644 --- a/test/json/classic-ScaleOut/kill_prim_worker_node.json +++ b/test/json/classic-ScaleOut/kill_prim_worker_node.json @@ -3,19 +3,19 @@ "name": "Kill primary worker node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,25 +27,25 @@ "lpt=(1[6-9]........|20|10)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=(S|P)", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(DEMOTED|UNDEFINED|WAITING4NODES)" , - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(DEMOTED|UNDEFINED|WAITING4NODES)" , + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -53,41 +53,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=(1|2)", - "srr=(P|S)" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=(PRIM|SFAIL)" - ], + "lss=(1|2)", + "srr=(P|S)" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=(PRIM|SFAIL)" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - 
"srHook=PRIM", - "srPoll=(SOK|PRIM)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED|WAITING4NODES)" , - "roles=master1::worker:" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145|150)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|PRIM)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED|WAITING4NODES)" , + "roles=master1::worker:" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145|150)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/kill_secn_indexserver.json b/test/json/classic-ScaleOut/kill_secn_indexserver.json index 9d14c833..4f500059 100644 --- a/test/json/classic-ScaleOut/kill_secn_indexserver.json +++ b/test/json/classic-ScaleOut/kill_secn_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill secondary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ 
-27,26 +27,26 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(10|30)", "lss=(1|2)", "srr=S", "srHook=SFAIL", "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,42 +54,42 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SFAIL", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sCCC to be the same as at test begin", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=1", + "srr=S", + "srHook=SFAIL", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sCCC to be the same as at test begin", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": 
"pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/kill_secn_inst.json b/test/json/classic-ScaleOut/kill_secn_inst.json index d6e67b09..2db5e9b2 100644 --- a/test/json/classic-ScaleOut/kill_secn_inst.json +++ b/test/json/classic-ScaleOut/kill_secn_inst.json @@ -3,19 +3,19 @@ "name": "Kill secondary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(10|30)", "lss=(1|2)", "srr=S", "srHook=SFAIL", "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,41 +54,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" 
, + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 240, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 240, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/kill_secn_node.json b/test/json/classic-ScaleOut/kill_secn_node.json index 3df7a74c..a5febca2 100644 --- a/test/json/classic-ScaleOut/kill_secn_node.json +++ b/test/json/classic-ScaleOut/kill_secn_node.json @@ -3,19 +3,19 @@ "name": "Kill secondary master node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,21 +27,21 @@ "lpt=1[6-9]........" 
, "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -49,41 +49,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/kill_secn_worker_inst.json b/test/json/classic-ScaleOut/kill_secn_worker_inst.json index 2af4b7ea..6a00bf97 100644 --- a/test/json/classic-ScaleOut/kill_secn_worker_inst.json +++ b/test/json/classic-ScaleOut/kill_secn_worker_inst.json @@ -3,19 +3,19 @@ 
"name": "Kill secondary worker instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_worker_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_worker_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -28,15 +28,15 @@ "srr=S", "srHook=(SFAIL|SWAIT)", "srPoll=(SFAIL|SOK)" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=(DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=(DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -45,30 +45,30 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + 
} ] } diff --git a/test/json/classic-ScaleOut/kill_secn_worker_node.json b/test/json/classic-ScaleOut/kill_secn_worker_node.json index cc955eac..ddbaad16 100644 --- a/test/json/classic-ScaleOut/kill_secn_worker_node.json +++ b/test/json/classic-ScaleOut/kill_secn_worker_node.json @@ -3,19 +3,19 @@ "name": "Kill secondary worker node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -28,13 +28,13 @@ "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=WAITING4NODES" - ] - }, - { + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=WAITING4NODES" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -43,30 +43,30 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end 
recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/maintenance_cluster_turn_hana.json b/test/json/classic-ScaleOut/maintenance_cluster_turn_hana.json index 15aacfa3..cdf90e80 100644 --- a/test/json/classic-ScaleOut/maintenance_cluster_turn_hana.json +++ b/test/json/classic-ScaleOut/maintenance_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "maintenance_cluster_turn_hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_maintenance_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_maintenance_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/maintenance_with_standby_nodes.json b/test/json/classic-ScaleOut/maintenance_with_standby_nodes.json index 93ec9f13..feba872b 100644 --- a/test/json/classic-ScaleOut/maintenance_with_standby_nodes.json +++ b/test/json/classic-ScaleOut/maintenance_with_standby_nodes.json @@ -6,19 +6,19 @@ "mstResource": "ms_SAPHanaCon_HA1_HDB00", "todo": "expectations needs to be fixed - 
e.g. step20 sHostDown is wrong, because topology will also be stopped. roles will be ::: not master1:...", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites ssn", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites ssn", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "secondary site: node is standby", "next": "step30", @@ -27,10 +27,10 @@ "post": "osn", "pSite": "pSiteUp", "sSite": "sSiteDown", - "pHost": "pHostUp", - "sHost": "sHostDown" - }, - { + "pHost": "pHostUp", + "sHost": "sHostDown" + }, + { "step": "step30", "name": "secondary site: node back online", "next": "step40", @@ -39,43 +39,43 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SWAIT", - "srPoll=SFAIL" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { - "step": "step40", - "name": "end recover", - "next": "step110", + "lpt=10", + "lss=1", + "srr=S", + "srHook=SWAIT", + "srPoll=SFAIL" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { + "step": "step40", + "name": "end recover", + "next": "step110", "loop": 120, "wait": 2, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step110", - "name": "test prerequitsites spn", - "next": "step120", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step110", + "name": 
"test prerequitsites spn", + "next": "step120", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step120", "name": "primary site: node is standby", "next": "step130", @@ -88,15 +88,15 @@ "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": "pHostDown", - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": "pHostDown", + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step130", "name": "takeover on secondary", "next": "final140", @@ -104,33 +104,33 @@ "post": "opn", "wait": 2, "pSite": [ - "lss=1" , - "srr=P" , - "lpt=10" , - "srHook=SWAIT" , - "srPoll=SFAIL" + "lss=1" , + "srr=P" , + "lpt=10" , + "srHook=SWAIT" , + "srPoll=SFAIL" ], "sSite": "pSiteUp", "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" ], "sHost": "pHostUp" - }, - { - "step": "final140", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + }, + { + "step": "final140", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/nop-false.json b/test/json/classic-ScaleOut/nop-false.json index a3c5ad48..46924104 100644 --- a/test/json/classic-ScaleOut/nop-false.json +++ b/test/json/classic-ScaleOut/nop-false.json @@ -3,31 +3,31 @@ "name": "no operation - check, wait and check again (stability check)", "start": "prereq10", "steps": [ - 
{ - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "global": [ - "topology=Nix" - ], - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "global": [ + "topology=Nix" + ], + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/nop.json b/test/json/classic-ScaleOut/nop.json index e3d9826e..31d4111e 100644 --- a/test/json/classic-ScaleOut/nop.json +++ b/test/json/classic-ScaleOut/nop.json @@ -3,29 +3,29 @@ "name": "no operation - check, wait and check again (stability check)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "global": "globalUp", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "global": "globalUp", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/restart_cluster.json b/test/json/classic-ScaleOut/restart_cluster.json index 
7018fdd2..c59f8e20 100644 --- a/test/json/classic-ScaleOut/restart_cluster.json +++ b/test/json/classic-ScaleOut/restart_cluster.json @@ -3,29 +3,29 @@ "name": "restart_cluster", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/restart_cluster_hana_running.json b/test/json/classic-ScaleOut/restart_cluster_hana_running.json index b7a20ca7..25ebf149 100644 --- a/test/json/classic-ScaleOut/restart_cluster_hana_running.json +++ b/test/json/classic-ScaleOut/restart_cluster_hana_running.json @@ -3,29 +3,29 @@ "name": "restart_cluster_hana_running", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster_hana_running", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test 
prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster_hana_running", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/restart_cluster_turn_hana.json b/test/json/classic-ScaleOut/restart_cluster_turn_hana.json index cd398f38..fc1a482a 100644 --- a/test/json/classic-ScaleOut/restart_cluster_turn_hana.json +++ b/test/json/classic-ScaleOut/restart_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "restart_cluster_turn_hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/standby_prim_node.json b/test/json/classic-ScaleOut/standby_prim_node.json index 765654ad..1e047088 100644 --- 
a/test/json/classic-ScaleOut/standby_prim_node.json +++ b/test/json/classic-ScaleOut/standby_prim_node.json @@ -3,19 +3,19 @@ "name": "standby primary node (and online again)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -27,27 +27,27 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(30|1[6-9]........)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "takeover on secondary", "next": "final40", @@ -55,43 +55,43 @@ "post": "opn", "wait": 2, "pSite": [ - "lss=1" , - "srr=P" , - "lpt=10" , - "srHook=SWAIT" , - "srPoll=SFAIL" - ], + "lss=1" , + "srr=P" , + "lpt=10" , + "srHook=SWAIT" , + "srPoll=SFAIL" + ], "sSite": [ - "lpt=1[6-9]........", - "lss=4", - "srr=P", - "srHook=PRIM", - "srPoll=PRIM" - ], - "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 
120, - "wait": 2, - "post": "cleanup", - "todo": "allow pointer to prereq10", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=1[6-9]........", + "lss=4", + "srr=P", + "srHook=PRIM", + "srPoll=PRIM" + ], + "pHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "todo": "allow pointer to prereq10", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut/standby_secn_node.json b/test/json/classic-ScaleOut/standby_secn_node.json index 686e3063..ae59404c 100644 --- a/test/json/classic-ScaleOut/standby_secn_node.json +++ b/test/json/classic-ScaleOut/standby_secn_node.json @@ -5,19 +5,19 @@ "sid": "HA1", "mstResource": "ms_SAPHanaCon_HA1_HDB00", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -30,27 +30,27 @@ "lpt=1[6-9]........" 
, "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=100" , + "standby=on" + ] + }, + { "step": "step30", "name": "node back online", "next": "final40", @@ -58,41 +58,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SWAIT", - "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=1", + "srr=S", + "srHook=SWAIT", + "srPoll=SFAIL" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } From 82aa877430c5719fbf9c1e53fc000b21b9741e8d Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 18:57:41 +0100 Subject: 
[PATCH 088/123] test/json/classic-ScaleUp - fix indents of lines --- .../block_manual_takeover.json | 60 +++---- test/json/classic-ScaleUp/defaults.json | 32 ++-- test/json/classic-ScaleUp/free_log_area.json | 70 ++++---- .../kill_prim_indexserver.json | 128 ++++++------- test/json/classic-ScaleUp/kill_prim_inst.json | 132 +++++++------- .../kill_secn_indexserver.json | 126 ++++++------- test/json/classic-ScaleUp/kill_secn_inst.json | 124 ++++++------- .../maintenance_cluster_turn_hana.json | 50 +++--- .../maintenance_with_standby_nodes.json | 168 +++++++++--------- test/json/classic-ScaleUp/nop.json | 36 ++-- .../json/classic-ScaleUp/restart_cluster.json | 48 ++--- .../restart_cluster_hana_running.json | 48 ++--- .../restart_cluster_turn_hana.json | 50 +++--- .../classic-ScaleUp/standby_prim_node.json | 130 +++++++------- .../classic-ScaleUp/standby_secn_node.json | 126 ++++++------- 15 files changed, 664 insertions(+), 664 deletions(-) diff --git a/test/json/classic-ScaleUp/block_manual_takeover.json b/test/json/classic-ScaleUp/block_manual_takeover.json index 4a66bcff..41863d25 100644 --- a/test/json/classic-ScaleUp/block_manual_takeover.json +++ b/test/json/classic-ScaleUp/block_manual_takeover.json @@ -3,40 +3,40 @@ "name": "blocked manual takeover", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "bmt", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 120", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "bmt", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "test 
prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 120", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/defaults.json b/test/json/classic-ScaleUp/defaults.json index 54169f62..108007d8 100644 --- a/test/json/classic-ScaleUp/defaults.json +++ b/test/json/classic-ScaleUp/defaults.json @@ -2,48 +2,48 @@ "opMode": "logreplay", "srMode": "sync", "checkPtr": { - "todo": "lpa_ha1_lpt must be flexible (like lpa_@@sid@@_lpt);", - "pHostUp": [ + "todo": "lpa_ha1_lpt must be flexible (like lpa_@@sid@@_lpt);", + "pHostUp": [ "clone_state=PROMOTED", "lpa_ha1_lpt=1[6-9]........", "roles=4:P:master1:master:worker:master", "score=150", "sync_state=PRIM" - ], - "pSiteUp": [ + ], + "pSiteUp": [ "srHook=PRIM" - ], - "sSiteUp": [ + ], + "sSiteUp": [ "srHook=SOK" - ], - "sHostUp": [ + ], + "sHostUp": [ "clone_state=DEMOTED", "roles=4:S:master1:master:worker:master", "score=100", "lpa_ha1_lpt=30", "sync_state=SOK" - ], - "pHostDown": [ + ], + "pHostDown": [ "clone_state=UNDEFINED" , "roles=1:P:master1::worker:" , "score=150", "standby=on", "sync_state=PRIM" - ], + ], "pSiteDown": [ "lpt=1[6-9]........" 
, "srHook=PRIM" - ], - "sSiteDown": [ + ], + "sSiteDown": [ "lpt=10", "srHook=SFAIL" - ], - "sHostDown": [ + ], + "sHostDown": [ "clone_state=UNDEFINED" , "roles=1:S:master1::worker:" , "score=100" , "standby=on", "srPoll=SFAIL" - ] + ] } } diff --git a/test/json/classic-ScaleUp/free_log_area.json b/test/json/classic-ScaleUp/free_log_area.json index 3de2d553..f1708d42 100644 --- a/test/json/classic-ScaleUp/free_log_area.json +++ b/test/json/classic-ScaleUp/free_log_area.json @@ -3,40 +3,40 @@ "name": "free log area on primary", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell test_free_log_area", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "still running", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 60", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell test_free_log_area", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "still running", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 60", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/kill_prim_indexserver.json b/test/json/classic-ScaleUp/kill_prim_indexserver.json index 3972ac33..e3f7bb18 100644 --- 
a/test/json/classic-ScaleUp/kill_prim_indexserver.json +++ b/test/json/classic-ScaleUp/kill_prim_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill primary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(90|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(90|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,43 +54,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=SOK" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(90|5)" - ], - "sHost": [ 
- "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=SOK" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(90|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/kill_prim_inst.json b/test/json/classic-ScaleUp/kill_prim_inst.json index 1f73386b..8330d1f2 100644 --- a/test/json/classic-ScaleUp/kill_prim_inst.json +++ b/test/json/classic-ScaleUp/kill_prim_inst.json @@ -3,21 +3,21 @@ "name": "Kill primary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_inst", - "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", - "todo1": "allow something like lss>2, lpt>10000, score!=123", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_inst", + "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", + "todo1": "allow something like lss>2, lpt>10000, score!=123", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, 
+ { "step": "step20", "name": "failure detected", "next": "step30", @@ -29,26 +29,26 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(90|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(90|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -56,43 +56,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=SOK" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(90|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=SOK" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(90|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + 
"score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/kill_secn_indexserver.json b/test/json/classic-ScaleUp/kill_secn_indexserver.json index 9d14c833..4f500059 100644 --- a/test/json/classic-ScaleUp/kill_secn_indexserver.json +++ b/test/json/classic-ScaleUp/kill_secn_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill secondary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(10|30)", "lss=(1|2)", "srr=S", "srHook=SFAIL", "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,42 +54,42 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" 
, - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SFAIL", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sCCC to be the same as at test begin", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=1", + "srr=S", + "srHook=SFAIL", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sCCC to be the same as at test begin", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/kill_secn_inst.json b/test/json/classic-ScaleUp/kill_secn_inst.json index 53f2cfce..ac57eb18 100644 --- a/test/json/classic-ScaleUp/kill_secn_inst.json +++ b/test/json/classic-ScaleUp/kill_secn_inst.json @@ -3,19 +3,19 @@ "name": "Kill secondary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": 
"pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(10|30)", "lss=(1|2)", "srr=S", "srHook=SFAIL", "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,41 +54,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": 
"sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/maintenance_cluster_turn_hana.json b/test/json/classic-ScaleUp/maintenance_cluster_turn_hana.json index 15aacfa3..cdf90e80 100644 --- a/test/json/classic-ScaleUp/maintenance_cluster_turn_hana.json +++ b/test/json/classic-ScaleUp/maintenance_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "maintenance_cluster_turn_hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_maintenance_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_maintenance_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/maintenance_with_standby_nodes.json b/test/json/classic-ScaleUp/maintenance_with_standby_nodes.json index 07cbab5e..f9755425 100644 --- a/test/json/classic-ScaleUp/maintenance_with_standby_nodes.json +++ b/test/json/classic-ScaleUp/maintenance_with_standby_nodes.json @@ -5,19 +5,19 @@ "sid": "HA1", "mstResource": "ms_SAPHanaCon_HA1_HDB00", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites ssn", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - 
"pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites ssn", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "secondary site: node is standby", "next": "step30", @@ -26,10 +26,10 @@ "post": "osn", "pSite": "pSiteUp", "sSite": "sSiteDown", - "pHost": "pHostUp", - "sHost": "sHostDown" - }, - { + "pHost": "pHostUp", + "sHost": "sHostDown" + }, + { "step": "step30", "name": "secondary site: node back online", "next": "step40", @@ -38,43 +38,43 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SWAIT", - "srPoll=SFAIL" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { - "step": "step40", - "name": "end recover", - "next": "step110", + "lpt=10", + "lss=1", + "srr=S", + "srHook=SWAIT", + "srPoll=SFAIL" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { + "step": "step40", + "name": "end recover", + "next": "step110", "loop": 120, "wait": 2, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step110", - "name": "test prerequitsites spn", - "next": "step120", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step110", + "name": "test prerequitsites spn", + "next": "step120", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step120", "name": "primary site: node is standby", 
"next": "step130", @@ -87,15 +87,15 @@ "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": "pHostDown", - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": "pHostDown", + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step130", "name": "takeover on secondary", "next": "final140", @@ -103,33 +103,33 @@ "post": "opn", "wait": 2, "pSite": [ - "lss=1" , - "srr=P" , - "lpt=10" , - "srHook=SWAIT" , - "srPoll=SFAIL" + "lss=1" , + "srr=P" , + "lpt=10" , + "srHook=SWAIT" , + "srPoll=SFAIL" ], "sSite": "pSiteUp", "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" ], "sHost": "pHostUp" - }, - { - "step": "final140", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + }, + { + "step": "final140", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/nop.json b/test/json/classic-ScaleUp/nop.json index 872854df..6403c51e 100644 --- a/test/json/classic-ScaleUp/nop.json +++ b/test/json/classic-ScaleUp/nop.json @@ -3,28 +3,28 @@ "name": "no operation - check, wait and check again (stability check)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test 
prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/restart_cluster.json b/test/json/classic-ScaleUp/restart_cluster.json index 7018fdd2..c59f8e20 100644 --- a/test/json/classic-ScaleUp/restart_cluster.json +++ b/test/json/classic-ScaleUp/restart_cluster.json @@ -3,29 +3,29 @@ "name": "restart_cluster", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/restart_cluster_hana_running.json b/test/json/classic-ScaleUp/restart_cluster_hana_running.json index b7a20ca7..25ebf149 100644 --- a/test/json/classic-ScaleUp/restart_cluster_hana_running.json +++ b/test/json/classic-ScaleUp/restart_cluster_hana_running.json @@ -3,29 +3,29 @@ "name": 
"restart_cluster_hana_running", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster_hana_running", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster_hana_running", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/restart_cluster_turn_hana.json b/test/json/classic-ScaleUp/restart_cluster_turn_hana.json index cd398f38..fc1a482a 100644 --- a/test/json/classic-ScaleUp/restart_cluster_turn_hana.json +++ b/test/json/classic-ScaleUp/restart_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "restart_cluster_turn_hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell 
test_restart_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleUp/standby_prim_node.json b/test/json/classic-ScaleUp/standby_prim_node.json index 765654ad..1e047088 100644 --- a/test/json/classic-ScaleUp/standby_prim_node.json +++ b/test/json/classic-ScaleUp/standby_prim_node.json @@ -3,19 +3,19 @@ "name": "standby primary node (and online again)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -27,27 +27,27 @@ "lpt=1[6-9]........" 
, "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(30|1[6-9]........)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "takeover on secondary", "next": "final40", @@ -55,43 +55,43 @@ "post": "opn", "wait": 2, "pSite": [ - "lss=1" , - "srr=P" , - "lpt=10" , - "srHook=SWAIT" , - "srPoll=SFAIL" - ], + "lss=1" , + "srr=P" , + "lpt=10" , + "srHook=SWAIT" , + "srPoll=SFAIL" + ], "sSite": [ - "lpt=1[6-9]........", - "lss=4", - "srr=P", - "srHook=PRIM", - "srPoll=PRIM" - ], - "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "todo": "allow pointer to prereq10", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=1[6-9]........", + "lss=4", + "srr=P", + "srHook=PRIM", + "srPoll=PRIM" + ], + "pHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "todo": "allow pointer to prereq10", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git 
a/test/json/classic-ScaleUp/standby_secn_node.json b/test/json/classic-ScaleUp/standby_secn_node.json index 686e3063..ae59404c 100644 --- a/test/json/classic-ScaleUp/standby_secn_node.json +++ b/test/json/classic-ScaleUp/standby_secn_node.json @@ -5,19 +5,19 @@ "sid": "HA1", "mstResource": "ms_SAPHanaCon_HA1_HDB00", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -30,27 +30,27 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=100" , + "standby=on" + ] + }, + { "step": "step30", "name": "node back online", "next": "final40", @@ -58,41 +58,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" 
, + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SWAIT", - "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=1", + "srr=S", + "srHook=SWAIT", + "srPoll=SFAIL" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } From b93b4c07b02e794693e0cb929e906c58fc1a9a29 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 18:58:49 +0100 Subject: [PATCH 089/123] test/json/faults - fix indents of lines --- test/json/faults/faulty-syntax-flep.json | 38 ++++++++++++------------ 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/test/json/faults/faulty-syntax-flep.json b/test/json/faults/faulty-syntax-flep.json index 433946e4..ca0cafde 100644 --- a/test/json/faults/faulty-syntax-flep.json +++ b/test/json/faults/faulty-syntax-flep.json @@ -3,35 +3,35 @@ "name": "flep - test the json syntax check", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 4", - "pSite": [ + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 4", + "pSite": [ "lpt >~ 2000000000:^(20|30|1.........)$", "lss == 4", "srr 
== P", "srHook == PRIM", "srPoll == PRIM", "hugo is None" - ], - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + ], + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } From 253c7fe55423513542834c8f912a09f8ef86f63e Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 18:59:48 +0100 Subject: [PATCH 090/123] test/json/faults - fix indents of lines --- .../block_manual_takeover.json | 60 +++---- .../defaults+newComparators.json | 78 ++++---- test/json/classic-ScaleOut-BW/defaults.json | 66 +++---- .../classic-ScaleOut-BW/free_log_area.json | 70 ++++---- .../kill_prim_indexserver.json | 128 ++++++------- .../classic-ScaleOut-BW/kill_prim_inst.json | 132 +++++++------- .../classic-ScaleOut-BW/kill_prim_node.json | 118 ++++++------ .../kill_prim_worker_indexserver.json | 124 ++++++------- .../kill_prim_worker_inst.json | 130 +++++++------- .../kill_prim_worker_node.json | 122 ++++++------- .../kill_secn_indexserver.json | 126 ++++++------- .../classic-ScaleOut-BW/kill_secn_inst.json | 124 ++++++------- .../classic-ScaleOut-BW/kill_secn_node.json | 114 ++++++------ .../kill_secn_worker_inst.json | 94 +++++----- .../kill_secn_worker_node.json | 90 +++++----- .../maintenance_cluster_turn_hana.json | 50 +++--- .../maintenance_with_standby_nodes.json | 168 +++++++++--------- test/json/classic-ScaleOut-BW/nop-false.json | 42 ++--- test/json/classic-ScaleOut-BW/nop.json | 38 ++-- .../classic-ScaleOut-BW/restart_cluster.json | 48 ++--- .../restart_cluster_hana_running.json | 48 ++--- .../restart_cluster_turn_hana.json | 50 +++--- .../standby_prim_node.json | 130 +++++++------- .../standby_secn_node.json | 126 ++++++------- 24 files changed, 1138 
insertions(+), 1138 deletions(-) diff --git a/test/json/classic-ScaleOut-BW/block_manual_takeover.json b/test/json/classic-ScaleOut-BW/block_manual_takeover.json index 4a66bcff..41863d25 100644 --- a/test/json/classic-ScaleOut-BW/block_manual_takeover.json +++ b/test/json/classic-ScaleOut-BW/block_manual_takeover.json @@ -3,40 +3,40 @@ "name": "blocked manual takeover", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "bmt", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 120", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "bmt", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 120", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/defaults+newComparators.json b/test/json/classic-ScaleOut-BW/defaults+newComparators.json index 44699c70..1287f744 100644 --- a/test/json/classic-ScaleOut-BW/defaults+newComparators.json +++ b/test/json/classic-ScaleOut-BW/defaults+newComparators.json @@ -1,66 +1,66 @@ { "checkPtr": { - "comparartorinline": [ + "comparartorinline": [ "alfa!=dassollungleichsein", "lpa_@@sid@@_lpt > 160000", 
"beta=dassollgleichsein" - ], - "comparatortuple": [ - ("noty", "alfa=ungleich"), - () - ], - "globalUp": [ - "topology=ScaleOut" - ], - "pHostUp": [ - "clone_state=PROMOTED", - "roles=master1:master:worker:master", - "score=150" - ], - "pSiteUp": [ + ], + "comparatortuple": [ + ("noty", "alfa=ungleich"), + () + ], + "globalUp": [ + "topology=ScaleOut" + ], + "pHostUp": [ + "clone_state=PROMOTED", + "roles=master1:master:worker:master", + "score=150" + ], + "pSiteUp": [ "lpt=1[6-9]........", "lss=4", "srr=P", "srHook=PRIM", "srPoll=PRIM" - ], - "sSiteUp": [ + ], + "sSiteUp": [ "lpt=30", "lss=4", - "srr=S", + "srr=S", "srHook=SOK", "srPoll=SOK" - ], - "sHostUp": [ - "clone_state=DEMOTED", - "roles=master1:master:worker:master", - "score=100" - ], - "pHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], + ], + "sHostUp": [ + "clone_state=DEMOTED", + "roles=master1:master:worker:master", + "score=100" + ], + "pHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], "pSiteDown": [ "lpt=1[6-9]........" 
, "lss=1" , "srr=P" , "srHook=PRIM" , "srPoll=PRIM" - ], - "sSiteDown": [ + ], + "sSiteDown": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "sHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] + ], + "sHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=100" , + "standby=on" + ] } } diff --git a/test/json/classic-ScaleOut-BW/defaults.json b/test/json/classic-ScaleOut-BW/defaults.json index aabce016..fb1840a6 100644 --- a/test/json/classic-ScaleOut-BW/defaults.json +++ b/test/json/classic-ScaleOut-BW/defaults.json @@ -2,58 +2,58 @@ "opMode": "logreplay", "srMode": "sync", "checkPtr": { - "globalUp": [ - "topology=ScaleOut" - ], - "pHostUp": [ - "clone_state=PROMOTED", - "roles=master1:master:worker:master", - "score=150" - ], - "pSiteUp": [ + "globalUp": [ + "topology=ScaleOut" + ], + "pHostUp": [ + "clone_state=PROMOTED", + "roles=master1:master:worker:master", + "score=150" + ], + "pSiteUp": [ "lpt=1[6-9]........", "lss=4", "srr=P", "srHook=PRIM", "srPoll=PRIM" - ], - "sSiteUp": [ + ], + "sSiteUp": [ "lpt=30", "lss=4", - "srr=S", + "srr=S", "srHook=SOK", "srPoll=SOK" - ], - "sHostUp": [ - "clone_state=DEMOTED", - "roles=master1:master:worker:master", - "score=100" - ], - "pHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], + ], + "sHostUp": [ + "clone_state=DEMOTED", + "roles=master1:master:worker:master", + "score=100" + ], + "pHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], "pSiteDown": [ "lpt=1[6-9]........" 
, "lss=1" , "srr=P" , "srHook=PRIM" , "srPoll=PRIM" - ], - "sSiteDown": [ + ], + "sSiteDown": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "sHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] + ], + "sHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=100" , + "standby=on" + ] } } diff --git a/test/json/classic-ScaleOut-BW/free_log_area.json b/test/json/classic-ScaleOut-BW/free_log_area.json index 3de2d553..f1708d42 100644 --- a/test/json/classic-ScaleOut-BW/free_log_area.json +++ b/test/json/classic-ScaleOut-BW/free_log_area.json @@ -3,40 +3,40 @@ "name": "free log area on primary", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell test_free_log_area", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "still running", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 60", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell test_free_log_area", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "still running", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 60", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } 
diff --git a/test/json/classic-ScaleOut-BW/kill_prim_indexserver.json b/test/json/classic-ScaleOut-BW/kill_prim_indexserver.json index ebe08863..47e0ed1a 100644 --- a/test/json/classic-ScaleOut-BW/kill_prim_indexserver.json +++ b/test/json/classic-ScaleOut-BW/kill_prim_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill primary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -28,26 +28,26 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(90|70|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(90|70|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -55,43 +55,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - 
"lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 360, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|SFAIL)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 360, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_prim_inst.json b/test/json/classic-ScaleOut-BW/kill_prim_inst.json index cd31c9d7..78260e10 100644 --- a/test/json/classic-ScaleOut-BW/kill_prim_inst.json +++ b/test/json/classic-ScaleOut-BW/kill_prim_inst.json @@ -3,21 +3,21 @@ "name": "Kill primary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_inst", - "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", - "todo1": "allow something like lss>2, lpt>10000, score!=123", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + 
"post": "kill_prim_inst", + "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", + "todo1": "allow something like lss>2, lpt>10000, score!=123", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -30,26 +30,26 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(90|70|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(90|70|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -57,43 +57,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 360, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + 
"lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|SFAIL)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 360, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_prim_node.json b/test/json/classic-ScaleOut-BW/kill_prim_node.json index 5df8781f..cad372b6 100644 --- a/test/json/classic-ScaleOut-BW/kill_prim_node.json +++ b/test/json/classic-ScaleOut-BW/kill_prim_node.json @@ -3,19 +3,19 @@ "name": "Kill primary master node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,23 +27,23 @@ "lpt=(1[6-9]........|20|10)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=(S|P)", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin 
recover", "next": "final40", @@ -51,41 +51,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=(1|2)", - "srr=(P|S)" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=(PRIM|SFAIL)" - ], + "lss=(1|2)", + "srr=(P|S)" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=(PRIM|SFAIL)" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|PRIM)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED|WAITING4NODES)" , - "roles=master1::worker:" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145|150)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|PRIM)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED|WAITING4NODES)" , + "roles=master1::worker:" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145|150)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_prim_worker_indexserver.json b/test/json/classic-ScaleOut-BW/kill_prim_worker_indexserver.json index dbc8b46c..dc6caba4 100644 --- a/test/json/classic-ScaleOut-BW/kill_prim_worker_indexserver.json +++ b/test/json/classic-ScaleOut-BW/kill_prim_worker_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill primary worker indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - 
"next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,25 +27,25 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "score=(90|70|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "score=(90|70|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,42 +54,42 @@ "todo": "pHost+sHost to check site-name", "todo2": "why do we need SFAIL for srHook?", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=(PRIM|SFAIL)", - "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 360, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are 
now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=(PRIM|SFAIL)", + "srPoll=(SOK|SFAIL)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 360, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_prim_worker_inst.json b/test/json/classic-ScaleOut-BW/kill_prim_worker_inst.json index da4900be..36c1232d 100644 --- a/test/json/classic-ScaleOut-BW/kill_prim_worker_inst.json +++ b/test/json/classic-ScaleOut-BW/kill_prim_worker_inst.json @@ -3,21 +3,21 @@ "name": "Kill primary worker instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_inst", - "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", - "todo1": "allow something like lss>2, lpt>10000, score!=123", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_inst", + "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", + "todo1": "allow something like lss>2, lpt>10000, score!=123", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -29,25 +29,25 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", 
"lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "score=(90|70|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "score=(90|70|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -55,43 +55,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:", - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|SFAIL)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:", + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": 
"pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_prim_worker_node.json b/test/json/classic-ScaleOut-BW/kill_prim_worker_node.json index 01250295..663372a7 100644 --- a/test/json/classic-ScaleOut-BW/kill_prim_worker_node.json +++ b/test/json/classic-ScaleOut-BW/kill_prim_worker_node.json @@ -3,19 +3,19 @@ "name": "Kill primary worker node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,25 +27,25 @@ "lpt=(1[6-9]........|20|10)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=(S|P)", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(DEMOTED|UNDEFINED|WAITING4NODES)" , - "score=(90|70|5)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(DEMOTED|UNDEFINED|WAITING4NODES)" , + "score=(90|70|5)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -53,41 +53,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=(1|2)", - "srr=(P|S)" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=(PRIM|SFAIL)" - ], + "lss=(1|2)", + "srr=(P|S)" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + 
"srPoll=(PRIM|SFAIL)" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|PRIM)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED|WAITING4NODES)" , - "roles=master1::worker:" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145|150)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|PRIM)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED|WAITING4NODES)" , + "roles=master1::worker:" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145|150)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_secn_indexserver.json b/test/json/classic-ScaleOut-BW/kill_secn_indexserver.json index 9d14c833..4f500059 100644 --- a/test/json/classic-ScaleOut-BW/kill_secn_indexserver.json +++ b/test/json/classic-ScaleOut-BW/kill_secn_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill secondary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": 
"pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(10|30)", "lss=(1|2)", "srr=S", "srHook=SFAIL", "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,42 +54,42 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SFAIL", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sCCC to be the same as at test begin", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=1", + "srr=S", + "srHook=SFAIL", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX 
and sCCC to be the same as at test begin", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_secn_inst.json b/test/json/classic-ScaleOut-BW/kill_secn_inst.json index d6e67b09..2db5e9b2 100644 --- a/test/json/classic-ScaleOut-BW/kill_secn_inst.json +++ b/test/json/classic-ScaleOut-BW/kill_secn_inst.json @@ -3,19 +3,19 @@ "name": "Kill secondary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(10|30)", "lss=(1|2)", "srr=S", "srHook=SFAIL", "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,41 +54,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" 
, + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 240, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 240, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_secn_node.json b/test/json/classic-ScaleOut-BW/kill_secn_node.json index 3df7a74c..a5febca2 100644 --- a/test/json/classic-ScaleOut-BW/kill_secn_node.json +++ b/test/json/classic-ScaleOut-BW/kill_secn_node.json @@ -3,19 +3,19 @@ "name": "Kill secondary master node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,21 +27,21 @@ "lpt=1[6-9]........" 
, "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -49,41 +49,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_secn_worker_inst.json b/test/json/classic-ScaleOut-BW/kill_secn_worker_inst.json index 2af4b7ea..6a00bf97 100644 --- a/test/json/classic-ScaleOut-BW/kill_secn_worker_inst.json +++ b/test/json/classic-ScaleOut-BW/kill_secn_worker_inst.json @@ 
-3,19 +3,19 @@ "name": "Kill secondary worker instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_worker_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_worker_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -28,15 +28,15 @@ "srr=S", "srHook=(SFAIL|SWAIT)", "srPoll=(SFAIL|SOK)" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=(DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=(DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -45,30 +45,30 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + 
"sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/kill_secn_worker_node.json b/test/json/classic-ScaleOut-BW/kill_secn_worker_node.json index cc955eac..ddbaad16 100644 --- a/test/json/classic-ScaleOut-BW/kill_secn_worker_node.json +++ b/test/json/classic-ScaleOut-BW/kill_secn_worker_node.json @@ -3,19 +3,19 @@ "name": "Kill secondary worker node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -28,13 +28,13 @@ "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=WAITING4NODES" - ] - }, - { + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=WAITING4NODES" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -43,30 +43,30 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": 
"final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/maintenance_cluster_turn_hana.json b/test/json/classic-ScaleOut-BW/maintenance_cluster_turn_hana.json index 15aacfa3..cdf90e80 100644 --- a/test/json/classic-ScaleOut-BW/maintenance_cluster_turn_hana.json +++ b/test/json/classic-ScaleOut-BW/maintenance_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "maintenance_cluster_turn_hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_maintenance_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_maintenance_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/maintenance_with_standby_nodes.json b/test/json/classic-ScaleOut-BW/maintenance_with_standby_nodes.json index 93ec9f13..feba872b 100644 --- a/test/json/classic-ScaleOut-BW/maintenance_with_standby_nodes.json +++ b/test/json/classic-ScaleOut-BW/maintenance_with_standby_nodes.json @@ -6,19 +6,19 @@ "mstResource": 
"ms_SAPHanaCon_HA1_HDB00", "todo": "expectations needs to be fixed - e.g. step20 sHostDown is wrong, because topology will also be stopped. roles will be ::: not master1:...", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites ssn", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites ssn", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "secondary site: node is standby", "next": "step30", @@ -27,10 +27,10 @@ "post": "osn", "pSite": "pSiteUp", "sSite": "sSiteDown", - "pHost": "pHostUp", - "sHost": "sHostDown" - }, - { + "pHost": "pHostUp", + "sHost": "sHostDown" + }, + { "step": "step30", "name": "secondary site: node back online", "next": "step40", @@ -39,43 +39,43 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SWAIT", - "srPoll=SFAIL" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { - "step": "step40", - "name": "end recover", - "next": "step110", + "lpt=10", + "lss=1", + "srr=S", + "srHook=SWAIT", + "srPoll=SFAIL" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { + "step": "step40", + "name": "end recover", + "next": "step110", "loop": 120, "wait": 2, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step110", - "name": "test prerequitsites spn", - "next": "step120", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": 
"pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step110", + "name": "test prerequitsites spn", + "next": "step120", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step120", "name": "primary site: node is standby", "next": "step130", @@ -88,15 +88,15 @@ "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": "pHostDown", - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": "pHostDown", + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step130", "name": "takeover on secondary", "next": "final140", @@ -104,33 +104,33 @@ "post": "opn", "wait": 2, "pSite": [ - "lss=1" , - "srr=P" , - "lpt=10" , - "srHook=SWAIT" , - "srPoll=SFAIL" + "lss=1" , + "srr=P" , + "lpt=10" , + "srHook=SWAIT" , + "srPoll=SFAIL" ], "sSite": "pSiteUp", "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" ], "sHost": "pHostUp" - }, - { - "step": "final140", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + }, + { + "step": "final140", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/nop-false.json b/test/json/classic-ScaleOut-BW/nop-false.json index a3c5ad48..46924104 100644 --- a/test/json/classic-ScaleOut-BW/nop-false.json +++ b/test/json/classic-ScaleOut-BW/nop-false.json @@ -3,31 +3,31 @@ "name": "no operation 
- check, wait and check again (stability check)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "global": [ - "topology=Nix" - ], - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "global": [ + "topology=Nix" + ], + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/nop.json b/test/json/classic-ScaleOut-BW/nop.json index e3d9826e..31d4111e 100644 --- a/test/json/classic-ScaleOut-BW/nop.json +++ b/test/json/classic-ScaleOut-BW/nop.json @@ -3,29 +3,29 @@ "name": "no operation - check, wait and check again (stability check)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "global": "globalUp", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "global": "globalUp", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git 
a/test/json/classic-ScaleOut-BW/restart_cluster.json b/test/json/classic-ScaleOut-BW/restart_cluster.json index 7018fdd2..c59f8e20 100644 --- a/test/json/classic-ScaleOut-BW/restart_cluster.json +++ b/test/json/classic-ScaleOut-BW/restart_cluster.json @@ -3,29 +3,29 @@ "name": "restart_cluster", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/restart_cluster_hana_running.json b/test/json/classic-ScaleOut-BW/restart_cluster_hana_running.json index b7a20ca7..25ebf149 100644 --- a/test/json/classic-ScaleOut-BW/restart_cluster_hana_running.json +++ b/test/json/classic-ScaleOut-BW/restart_cluster_hana_running.json @@ -3,29 +3,29 @@ "name": "restart_cluster_hana_running", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster_hana_running", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - 
"pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster_hana_running", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/restart_cluster_turn_hana.json b/test/json/classic-ScaleOut-BW/restart_cluster_turn_hana.json index cd398f38..fc1a482a 100644 --- a/test/json/classic-ScaleOut-BW/restart_cluster_turn_hana.json +++ b/test/json/classic-ScaleOut-BW/restart_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "restart_cluster_turn_hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git 
a/test/json/classic-ScaleOut-BW/standby_prim_node.json b/test/json/classic-ScaleOut-BW/standby_prim_node.json index 765654ad..1e047088 100644 --- a/test/json/classic-ScaleOut-BW/standby_prim_node.json +++ b/test/json/classic-ScaleOut-BW/standby_prim_node.json @@ -3,19 +3,19 @@ "name": "standby primary node (and online again)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -27,27 +27,27 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(30|1[6-9]........)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "takeover on secondary", "next": "final40", @@ -55,43 +55,43 @@ "post": "opn", "wait": 2, "pSite": [ - "lss=1" , - "srr=P" , - "lpt=10" , - "srHook=SWAIT" , - "srPoll=SFAIL" - ], + "lss=1" , + "srr=P" , + "lpt=10" , + "srHook=SWAIT" , + "srPoll=SFAIL" + ], "sSite": [ - "lpt=1[6-9]........", - "lss=4", - "srr=P", - "srHook=PRIM", - "srPoll=PRIM" - ], - "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "sHost": [ - 
"clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "todo": "allow pointer to prereq10", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=1[6-9]........", + "lss=4", + "srr=P", + "srHook=PRIM", + "srPoll=PRIM" + ], + "pHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "todo": "allow pointer to prereq10", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/classic-ScaleOut-BW/standby_secn_node.json b/test/json/classic-ScaleOut-BW/standby_secn_node.json index 686e3063..ae59404c 100644 --- a/test/json/classic-ScaleOut-BW/standby_secn_node.json +++ b/test/json/classic-ScaleOut-BW/standby_secn_node.json @@ -5,19 +5,19 @@ "sid": "HA1", "mstResource": "ms_SAPHanaCon_HA1_HDB00", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -30,27 +30,27 @@ "lpt=1[6-9]........" 
, "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=100" , + "standby=on" + ] + }, + { "step": "step30", "name": "node back online", "next": "final40", @@ -58,41 +58,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SWAIT", - "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=1", + "srr=S", + "srHook=SWAIT", + "srPoll=SFAIL" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } From 7a14d8457fadd69c4a74ae11a5f9c5c30e54fde7 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 19:03:06 +0100 Subject: 
[PATCH 091/123] test/json/angi-ScaleOut - fix indents (1,5,9,13,17) of lines --- .../angi-ScaleOut/block_manual_takeover.json | 12 ++-- test/json/angi-ScaleOut/defaults.json | 44 ++++++------ test/json/angi-ScaleOut/flap.json | 8 +-- test/json/angi-ScaleOut/flop.json | 8 +-- test/json/angi-ScaleOut/flup.json | 8 +-- test/json/angi-ScaleOut/free_log_area.json | 12 ++-- .../angi-ScaleOut/freeze_prim_master_nfs.json | 52 +++++++------- .../angi-ScaleOut/freeze_prim_site_nfs.json | 52 +++++++------- .../angi-ScaleOut/freeze_secn_site_nfs.json | 54 +++++++------- .../angi-ScaleOut/kill_prim_indexserver.json | 62 ++++++++-------- test/json/angi-ScaleOut/kill_prim_inst.json | 62 ++++++++-------- test/json/angi-ScaleOut/kill_prim_node.json | 52 +++++++------- .../kill_prim_worker_indexserver.json | 58 +++++++-------- .../angi-ScaleOut/kill_prim_worker_inst.json | 60 ++++++++-------- .../angi-ScaleOut/kill_prim_worker_node.json | 56 +++++++-------- .../angi-ScaleOut/kill_secn_indexserver.json | 60 ++++++++-------- test/json/angi-ScaleOut/kill_secn_inst.json | 60 ++++++++-------- test/json/angi-ScaleOut/kill_secn_node.json | 54 +++++++------- .../angi-ScaleOut/kill_secn_worker_inst.json | 38 +++++----- .../angi-ScaleOut/kill_secn_worker_node.json | 34 ++++----- .../maintenance_cluster_turn_hana.json | 8 +-- .../maintenance_with_standby_nodes.json | 72 +++++++++---------- test/json/angi-ScaleOut/nop-false.json | 12 ++-- test/json/angi-ScaleOut/nop.json | 8 +-- test/json/angi-ScaleOut/restart_cluster.json | 8 +-- .../restart_cluster_hana_running.json | 8 +-- .../restart_cluster_turn_hana.json | 8 +-- .../json/angi-ScaleOut/standby_prim_node.json | 64 ++++++++--------- .../json/angi-ScaleOut/standby_secn_node.json | 62 ++++++++-------- .../standby_secn_worker_node.json | 32 ++++----- 30 files changed, 564 insertions(+), 564 deletions(-) diff --git a/test/json/angi-ScaleOut/block_manual_takeover.json b/test/json/angi-ScaleOut/block_manual_takeover.json index 5a7961cc..a166fe2d 
100644 --- a/test/json/angi-ScaleOut/block_manual_takeover.json +++ b/test/json/angi-ScaleOut/block_manual_takeover.json @@ -3,7 +3,7 @@ "name": "block manual takeover, using susTkOver.py", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "test prerequitsites", "next": "final40", @@ -28,8 +28,8 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp" - }, - { + }, + { "step": "final40", "name": "still running", "next": "END", @@ -41,6 +41,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/defaults.json b/test/json/angi-ScaleOut/defaults.json index 16d97fbe..3491593e 100644 --- a/test/json/angi-ScaleOut/defaults.json +++ b/test/json/angi-ScaleOut/defaults.json @@ -3,12 +3,12 @@ "srMode": "sync", "checkPtr": { "globalUp": [ - "topology == ScaleOut" + "topology == ScaleOut" ], "pHostUp": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "pSiteUp": [ "lpt > 1000000000", @@ -20,20 +20,20 @@ "sSiteUp": [ "lpt == 30", "lss == 4", - "srr == S", + "srr == S", "srHook == SOK", "srPoll == SOK" ], "sHostUp": [ - "clone_state == DEMOTED", - "roles == master1:master:worker:master", - "score == 100" + "clone_state == DEMOTED", + "roles == master1:master:worker:master", + "score == 100" ], "pHostDown": [ - "clone_state == UNDEFINED", - "roles == master1::worker:", - "score == 150", - "standby == on" + "clone_state == UNDEFINED", + "roles == master1::worker:", + "score == 150", + "standby == on" ], "pSiteDown": [ "lpt > 1000000000", @@ -50,20 +50,20 @@ "srPoll == SFAIL" ], "sHostDown": [ - "clone_state == UNDEFINED", - "roles == master1::worker:", - "score == 100", - "standby == on" + 
"clone_state == UNDEFINED", + "roles == master1::worker:", + "score == 100", + "standby == on" ], "pWorkerUp": [ - "clone_state == DEMOTED", - "roles == slave:slave:worker:slave", - "score == -12200" + "clone_state == DEMOTED", + "roles == slave:slave:worker:slave", + "score == -12200" ], "sWorkerUp": [ - "clone_state == DEMOTED", - "roles == slave:slave:worker:slave", - "score == -12200" + "clone_state == DEMOTED", + "roles == slave:slave:worker:slave", + "score == -12200" ] } } diff --git a/test/json/angi-ScaleOut/flap.json b/test/json/angi-ScaleOut/flap.json index 923292d6..f15e2308 100644 --- a/test/json/angi-ScaleOut/flap.json +++ b/test/json/angi-ScaleOut/flap.json @@ -3,7 +3,7 @@ "name": "flap - test the new test parser", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "final40", @@ -23,8 +23,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "final40", "name": "still running", "next": "END", @@ -36,6 +36,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/flop.json b/test/json/angi-ScaleOut/flop.json index a8a0551e..8d46d927 100644 --- a/test/json/angi-ScaleOut/flop.json +++ b/test/json/angi-ScaleOut/flop.json @@ -3,7 +3,7 @@ "name": "flop - this test should NOT pass successfully", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "final40", @@ -22,8 +22,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "final40", "name": "still running", "next": "END", @@ -37,6 +37,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/flup.json b/test/json/angi-ScaleOut/flup.json index 8ed26730..f5bc3936 100644 --- a/test/json/angi-ScaleOut/flup.json +++ b/test/json/angi-ScaleOut/flup.json @@ -3,7 +3,7 @@ "name": "flup - like nop but very short 
sleep only - only for checking the test engine", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "final40", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "final40", "name": "still running", "next": "END", @@ -29,6 +29,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/free_log_area.json b/test/json/angi-ScaleOut/free_log_area.json index be586702..a5ceb60e 100644 --- a/test/json/angi-ScaleOut/free_log_area.json +++ b/test/json/angi-ScaleOut/free_log_area.json @@ -3,7 +3,7 @@ "name": "free hana log area on primary site", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -17,8 +17,8 @@ "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "still running", "next": "final40", @@ -29,8 +29,8 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp" - }, - { + }, + { "step": "final40", "name": "still running", "next": "END", @@ -42,6 +42,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json index 73d68f2f..c2fa374e 100644 --- a/test/json/angi-ScaleOut/freeze_prim_master_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_master_nfs.json @@ -3,7 +3,7 @@ "name": "freeze sap hana nfs on primary master node", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -39,12 +39,12 @@ "pHost": [ ], "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)" + "clone_state ~ 
(PROMOTED|DEMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -52,30 +52,30 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss ~ (1|2)", - "srr ~ (P|S)", - "lpt >~ 1000000000:(30|20|10)", - "srHook ~ (PRIM|SWAIT|SREG)", - "srPoll ~ (PRIM|SFAIL)" + "lss ~ (1|2)", + "srr ~ (P|S)", + "lpt >~ 1000000000:(30|20|10)", + "srHook ~ (PRIM|SWAIT|SREG)", + "srPoll ~ (PRIM|SFAIL)" ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll ~ (SOK|PRIM)" + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll ~ (SOK|PRIM)" ], "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", - "roles == master1::worker:" + "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", + "roles == master1::worker:" ], "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145|150)" + "clone_state ~ (DEMOTED|PROMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145|150)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -89,6 +89,6 @@ "sHost": "pHostUp", "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json index ec89af83..2e0a1585 100644 --- a/test/json/angi-ScaleOut/freeze_prim_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_prim_site_nfs.json @@ -3,7 +3,7 @@ "name": "freeze sap hana nfs on primary site", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -39,12 +39,12 @@ "pHost": [ ], "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == 
master1:master:worker:master", - "score ~ (100|145)" + "clone_state ~ (PROMOTED|DEMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -52,30 +52,30 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss ~ (1|2)", - "srr ~ (P|S)", - "lpt >~ 1000000000:(30|20|10)", - "srHook ~ (PRIM|SWAIT|SREG)", - "srPoll ~ (PRIM|SFAIL)" + "lss ~ (1|2)", + "srr ~ (P|S)", + "lpt >~ 1000000000:(30|20|10)", + "srHook ~ (PRIM|SWAIT|SREG)", + "srPoll ~ (PRIM|SFAIL)" ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll ~ (SOK|PRIM)" + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll ~ (SOK|PRIM)" ], "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", - "roles == master1::worker:" + "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", + "roles == master1::worker:" ], "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145|150)" + "clone_state ~ (DEMOTED|PROMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145|150)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -89,6 +89,6 @@ "sHost": "pHostUp", "sWorker": "pWorkerUp", "pWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/freeze_secn_site_nfs.json b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json index 88802148..8f6fb90e 100644 --- a/test/json/angi-ScaleOut/freeze_secn_site_nfs.json +++ b/test/json/angi-ScaleOut/freeze_secn_site_nfs.json @@ -4,7 +4,7 @@ "todo": "please correct this file", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -17,8 +17,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -38,14 +38,14 @@ "srPoll == SFAIL" ], 
"pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "sHost": [ ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -53,31 +53,31 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 4", - "srr == P", - "lpt > 1000000000", - "srHook == PRIM", - "srPoll == PRIM" + "lss == 4", + "srr == P", + "lpt > 1000000000", + "srHook == PRIM", + "srPoll == PRIM" ], "sSite": [ - "lpt ~ (20|10)", - "lss == 4", - "srr == S", - "srHook ~ (SOK|SWAIT)", - "srPoll ~ (SOK|SFAIL)" + "lpt ~ (20|10)", + "lss == 4", + "srr == S", + "srHook ~ (SOK|SWAIT)", + "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "sHost": [ - "clone_state ~ (DEMOTED|UNDEFINED)", - "roles == master1::worker:", - "score ~ (100|145|150)" + "clone_state ~ (DEMOTED|UNDEFINED)", + "roles == master1::worker:", + "score ~ (100|145|150)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -91,6 +91,6 @@ "sHost": "pHostUp", "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_indexserver.json b/test/json/angi-ScaleOut/kill_prim_indexserver.json index 30222ca0..f9cd45ac 100644 --- a/test/json/angi-ScaleOut/kill_prim_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_indexserver.json @@ -3,7 +3,7 @@ "name": "Kill primary master indexserver", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -39,17 +39,17 @@ "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state 
~ (PROMOTED|DEMOTED|UNDEFINED)", - "roles == master1::worker:", - "score ~ (90|70|5|0)" + "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", + "roles == master1::worker:", + "score ~ (90|70|5|0)" ], "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)" + "clone_state ~ (PROMOTED|DEMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -57,32 +57,32 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 1", - "srr == P", - "lpt >~ 1000000000:(30|20|10)", - "srHook ~ (PRIM|SWAIT|SREG)", - "srPoll == PRIM" + "lss == 1", + "srr == P", + "lpt >~ 1000000000:(30|20|10)", + "srHook ~ (PRIM|SWAIT|SREG)", + "srPoll == PRIM" ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll ~ (SOK|SFAIL)" + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)", - "roles == master1::worker:", - "score ~ (90|70|5)" + "clone_state ~ (UNDEFINED|DEMOTED)", + "roles == master1::worker:", + "score ~ (90|70|5)" ], "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)", - "srah == T" + "clone_state ~ (DEMOTED|PROMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)", + "srah == T" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -96,6 +96,6 @@ "sHost": "pHostUp", "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_inst.json b/test/json/angi-ScaleOut/kill_prim_inst.json index 5f6773d5..3443c7cc 100644 --- a/test/json/angi-ScaleOut/kill_prim_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_inst.json @@ -3,7 +3,7 @@ "name": "Kill primary master instance", "start": "prereq10", "steps": [ - { + { "step": "prereq10", 
"name": "test prerequitsites", "next": "step20", @@ -17,8 +17,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -40,17 +40,17 @@ "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", - "roles == master1::worker:", - "score ~ (90|70|5|0)" + "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", + "roles == master1::worker:", + "score ~ (90|70|5|0)" ], "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)" + "clone_state ~ (PROMOTED|DEMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -58,32 +58,32 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 1", - "srr == P", - "lpt >~ 1000000000:(30|20|10)", - "srHook ~ (PRIM|SWAIT|SREG)", - "srPoll == PRIM" + "lss == 1", + "srr == P", + "lpt >~ 1000000000:(30|20|10)", + "srHook ~ (PRIM|SWAIT|SREG)", + "srPoll == PRIM" ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll ~ (SOK|SFAIL)" + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)", - "roles ~ master1::worker:", - "score ~ (90|70|5)" + "clone_state ~ (UNDEFINED|DEMOTED)", + "roles ~ master1::worker:", + "score ~ (90|70|5)" ], "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)", - "srah == T" + "clone_state ~ (DEMOTED|PROMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)", + "srah == T" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -97,6 +97,6 @@ "sHost": "pHostUp", "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_node.json 
b/test/json/angi-ScaleOut/kill_prim_node.json index d4231d8a..3445fa23 100644 --- a/test/json/angi-ScaleOut/kill_prim_node.json +++ b/test/json/angi-ScaleOut/kill_prim_node.json @@ -3,7 +3,7 @@ "name": "Kill primary master node", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -40,12 +40,12 @@ "pHost": [ ], "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)" + "clone_state ~ (PROMOTED|DEMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -53,30 +53,30 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss ~ (1|2)", - "srr ~ (P|S)", - "lpt >~ 1000000000:(30|20|10)", - "srHook ~ (PRIM|SWAIT|SREG)", - "srPoll ~ (PRIM|SFAIL)" + "lss ~ (1|2)", + "srr ~ (P|S)", + "lpt >~ 1000000000:(30|20|10)", + "srHook ~ (PRIM|SWAIT|SREG)", + "srPoll ~ (PRIM|SFAIL)" ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll ~ (SOK|PRIM)" + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll ~ (SOK|PRIM)" ], "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", - "roles == master1::worker:" + "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", + "roles == master1::worker:" ], "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145|150)" + "clone_state ~ (DEMOTED|PROMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145|150)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -90,6 +90,6 @@ "sHost": "pHostUp", "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" - } + } ] } diff --git 
a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json index f8230853..cedcf2b5 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_indexserver.json @@ -3,7 +3,7 @@ "name": "Kill primary worker indexserver", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -38,16 +38,16 @@ "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", - "score ~ (90|70|5|0)" + "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", + "score ~ (90|70|5|0)" ], "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)" + "clone_state ~ (PROMOTED|DEMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -56,31 +56,31 @@ "todo": "pHost+sHost to check site-name", "todo2": "why do we need SFAIL for srHook?", "pSite": [ - "lss == 1", - "srr == P", - "lpt >~ 1000000000:(30|20|10)", - "srHook ~ (PRIM|SWAIT|SREG)", - "srPoll == PRIM" + "lss == 1", + "srr == P", + "lpt >~ 1000000000:(30|20|10)", + "srHook ~ (PRIM|SWAIT|SREG)", + "srPoll == PRIM" ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook ~ (PRIM|SFAIL)", - "srPoll ~ (SOK|SFAIL)" + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook ~ (PRIM|SFAIL)", + "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)", - "score ~ (90|70|5)" + "clone_state ~ (UNDEFINED|DEMOTED)", + "score ~ (90|70|5)" ], "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)", - "srah == T" + "clone_state ~ (DEMOTED|PROMOTED)", 
+ "roles == master1:master:worker:master", + "score ~ (100|145)", + "srah == T" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -94,6 +94,6 @@ "sHost": "pHostUp", "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_worker_inst.json b/test/json/angi-ScaleOut/kill_prim_worker_inst.json index f2b4439c..a28116b5 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_inst.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_inst.json @@ -3,7 +3,7 @@ "name": "Kill primary worker instance", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -17,8 +17,8 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -39,16 +39,16 @@ "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", - "score ~ (90|70|5|0)" + "clone_state ~ (PROMOTED|DEMOTED|UNDEFINED)", + "score ~ (90|70|5|0)" ], "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)" + "clone_state ~ (PROMOTED|DEMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -56,32 +56,32 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 1", - "srr == P", - "lpt >~ 1000000000:(30|20|10)", - "srHook ~ (PRIM|SWAIT|SREG)", - "srPoll == PRIM" + "lss == 1", + "srr == P", + "lpt >~ 1000000000:(30|20|10)", + "srHook ~ (PRIM|SWAIT|SREG)", + "srPoll == PRIM" ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll ~ (SOK|SFAIL)" + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)", - "roles == master1::worker:", - "score ~ (90|70|5)" + 
"clone_state ~ (UNDEFINED|DEMOTED)", + "roles == master1::worker:", + "score ~ (90|70|5)" ], "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)", - "srah == T" + "clone_state ~ (DEMOTED|PROMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)", + "srah == T" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -95,6 +95,6 @@ "sHost": "pHostUp", "sWorker": "pWorkerUp", "pWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_prim_worker_node.json b/test/json/angi-ScaleOut/kill_prim_worker_node.json index 3157a0b5..ceb77410 100644 --- a/test/json/angi-ScaleOut/kill_prim_worker_node.json +++ b/test/json/angi-ScaleOut/kill_prim_worker_node.json @@ -3,7 +3,7 @@ "name": "Kill primary worker node", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -38,16 +38,16 @@ "srPoll ~ (SOK|SFAIL)" ], "pHost": [ - "clone_state ~ (DEMOTED|UNDEFINED|WAITING4NODES)", - "score ~ (90|70|5)" + "clone_state ~ (DEMOTED|UNDEFINED|WAITING4NODES)", + "score ~ (90|70|5)" ], "sHost": [ - "clone_state ~ (PROMOTED|DEMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145)" + "clone_state ~ (PROMOTED|DEMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -55,30 +55,30 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss ~ (1|2)", - "srr ~ (P|S)", - "lpt >~ 1000000000:(30|20|10)", - "srHook ~ (PRIM|SWAIT|SREG)", - "srPoll ~ (PRIM|SFAIL)" + "lss ~ (1|2)", + "srr ~ (P|S)", + "lpt >~ 1000000000:(30|20|10)", + "srHook ~ (PRIM|SWAIT|SREG)", + "srPoll ~ (PRIM|SFAIL)" ], "sSite": [ - "lpt >~ 1000000000:30", - "lss == 
4", - "srr ~ (S|P)", - "srHook == PRIM", - "srPoll ~ (SOK|PRIM)" + "lpt >~ 1000000000:30", + "lss == 4", + "srr ~ (S|P)", + "srHook == PRIM", + "srPoll ~ (SOK|PRIM)" ], "pHost": [ - "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", - "roles == master1::worker:" + "clone_state ~ (UNDEFINED|DEMOTED|WAITING4NODES)", + "roles == master1::worker:" ], "sHost": [ - "clone_state ~ (DEMOTED|PROMOTED)", - "roles == master1:master:worker:master", - "score ~ (100|145|150)" + "clone_state ~ (DEMOTED|PROMOTED)", + "roles == master1:master:worker:master", + "score ~ (100|145|150)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -92,6 +92,6 @@ "sHost": "pHostUp", "sWorker": "pWorkerUp", "pWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_indexserver.json b/test/json/angi-ScaleOut/kill_secn_indexserver.json index 80ae4be2..7f2350ab 100644 --- a/test/json/angi-ScaleOut/kill_secn_indexserver.json +++ b/test/json/angi-ScaleOut/kill_secn_indexserver.json @@ -3,7 +3,7 @@ "name": "Kill secondary master indexserver", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -38,17 +38,17 @@ "srPoll ~ (SFAIL|SOK)" ], "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "sHost": [ - "clone_state == DEMOTED", - "roles == master1::worker:", - "score ~ (-INFINITY|0)" + "clone_state == DEMOTED", + "roles == master1::worker:", + "score ~ (-INFINITY|0)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -56,31 +56,31 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 4", - "srr == P", - "lpt > 1000000000", - "srHook == 
PRIM", - "srPoll == PRIM" + "lss == 4", + "srr == P", + "lpt > 1000000000", + "srHook == PRIM", + "srPoll == PRIM" ], "sSite": [ - "lpt == 10", - "lss == 1", - "srr == S", - "srHook == SFAIL", - "srPoll ~ (SFAIL|SOK)" + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SFAIL", + "srPoll ~ (SFAIL|SOK)" ], "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "sHost": [ - "clone_state == UNDEFINED", - "roles == master1::worker:", - "score ~ (-INFINITY|0|-1)" + "clone_state == UNDEFINED", + "roles == master1::worker:", + "score ~ (-INFINITY|0|-1)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -94,6 +94,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_inst.json b/test/json/angi-ScaleOut/kill_secn_inst.json index da619360..5369a591 100644 --- a/test/json/angi-ScaleOut/kill_secn_inst.json +++ b/test/json/angi-ScaleOut/kill_secn_inst.json @@ -3,7 +3,7 @@ "name": "Kill secondary master instance", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -38,17 +38,17 @@ "srPoll ~ (SFAIL|SOK)" ], "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "sHost": [ - "clone_state == DEMOTED", - "roles == master1::worker:", - "score ~ (-INFINITY|0)" + "clone_state == DEMOTED", + "roles == master1::worker:", + "score ~ (-INFINITY|0)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -56,31 +56,31 @@ "wait": 2, "todo": "pHost+sHost to check 
site-name", "pSite": [ - "lss == 4", - "srr == P", - "lpt > 1000000000", - "srHook == PRIM", - "srPoll == PRIM" + "lss == 4", + "srr == P", + "lpt > 1000000000", + "srHook == PRIM", + "srPoll == PRIM" ], "sSite": [ - "lpt == 10", - "lss ~ (1|2)", - "srr == S", - "srHook ~ (SFAIL|SWAIT)", - "srPoll ~ (SFAIL|SOK)" + "lpt == 10", + "lss ~ (1|2)", + "srr == S", + "srHook ~ (SFAIL|SWAIT)", + "srPoll ~ (SFAIL|SOK)" ], "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "sHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)", - "roles == master1::worker:", - "score ~ (-INFINITY|0|-1)" + "clone_state ~ (UNDEFINED|DEMOTED)", + "roles == master1::worker:", + "score ~ (-INFINITY|0|-1)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -93,6 +93,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_node.json b/test/json/angi-ScaleOut/kill_secn_node.json index 495bd61e..5d532953 100644 --- a/test/json/angi-ScaleOut/kill_secn_node.json +++ b/test/json/angi-ScaleOut/kill_secn_node.json @@ -3,7 +3,7 @@ "name": "Kill secondary master node", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -38,12 +38,12 @@ "srPoll == SFAIL" ], "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -51,31 +51,31 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 4", - "srr == P", - "lpt > 
1000000000", - "srHook == PRIM", - "srPoll == PRIM" + "lss == 4", + "srr == P", + "lpt > 1000000000", + "srHook == PRIM", + "srPoll == PRIM" ], "sSite": [ - "lpt == 10", - "lss ~ (1|2)", - "srr == S", - "srHook ~ (SFAIL|SWAIT)", - "srPoll ~ (SFAIL|SOK)" + "lpt == 10", + "lss ~ (1|2)", + "srr == S", + "srHook ~ (SFAIL|SWAIT)", + "srPoll ~ (SFAIL|SOK)" ], "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "sHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)", - "roles == master1::worker:", - "score ~ (-INFINITY|0|-1)" + "clone_state ~ (UNDEFINED|DEMOTED)", + "roles == master1::worker:", + "score ~ (-INFINITY|0|-1)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -88,6 +88,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_worker_inst.json b/test/json/angi-ScaleOut/kill_secn_worker_inst.json index 8d7f8cf7..8daa3df4 100644 --- a/test/json/angi-ScaleOut/kill_secn_worker_inst.json +++ b/test/json/angi-ScaleOut/kill_secn_worker_inst.json @@ -3,7 +3,7 @@ "name": "Kill secondary worker instance", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -33,12 +33,12 @@ ], "pHost": "pHostUp", "sHost": [ - "clone_state ~ (DEMOTED|UNDEFINED)", - "roles == master1::worker:", - "score ~ (-INFINITY|0)" + "clone_state ~ (DEMOTED|UNDEFINED)", + "roles == master1::worker:", + "score ~ (-INFINITY|0)" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -47,20 +47,20 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt == 10", - "lss ~ (1|2)", - "srr == S", - 
"srHook ~ (SFAIL|SWAIT)", - "srPoll ~ (SFAIL|SOK)" + "lpt == 10", + "lss ~ (1|2)", + "srr == S", + "srHook ~ (SFAIL|SWAIT)", + "srPoll ~ (SFAIL|SOK)" ], "pHost": "pHostUp", "sHost": [ - "clone_state ~ (UNDEFINED|DEMOTED)", - "roles == master1::worker:", - "score ~ (-INFINITY|0|-1)" + "clone_state ~ (UNDEFINED|DEMOTED)", + "roles == master1::worker:", + "score ~ (-INFINITY|0|-1)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -73,6 +73,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/kill_secn_worker_node.json b/test/json/angi-ScaleOut/kill_secn_worker_node.json index fb5e7863..1b32417c 100644 --- a/test/json/angi-ScaleOut/kill_secn_worker_node.json +++ b/test/json/angi-ScaleOut/kill_secn_worker_node.json @@ -3,7 +3,7 @@ "name": "Kill secondary worker node", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -33,10 +33,10 @@ ], "pHost": "pHostUp", "sHost": [ - "clone_state=WAITING4NODES" + "clone_state=WAITING4NODES" ] - }, - { + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -45,20 +45,20 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" ], "pHost": "pHostUp", "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)", - "roles=master1::worker:", - "score=(-INFINITY|0|-1)" + "clone_state=(UNDEFINED|DEMOTED)", + "roles=master1::worker:", + "score=(-INFINITY|0|-1)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -71,6 +71,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } 
] } diff --git a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json index 18947dfb..7f4409b1 100644 --- a/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/maintenance_cluster_turn_hana.json @@ -3,7 +3,7 @@ "name": "maintenance cluster turn hana", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "final40", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -31,6 +31,6 @@ "sHost": "pHostUp", "sWorker": "pWorkerUp", "pWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json index 268eb68f..0722acf8 100644 --- a/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json +++ b/test/json/angi-ScaleOut/maintenance_with_standby_nodes.json @@ -4,7 +4,7 @@ "start": "prereq10", "todo": "expectations needs to be fixed - e.g. step20 sHostDown is wrong, because topology will also be stopped. 
roles will be ::: not master1:...", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites standby_secn_node", "next": "step20", @@ -17,8 +17,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "secondary site: node is standby", "next": "step30", @@ -29,8 +29,8 @@ "sSite": "sSiteDown", "pHost": "pHostUp", "sHost": "sHostDown" - }, - { + }, + { "step": "step30", "name": "secondary site: node back online", "next": "step40", @@ -39,20 +39,20 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt == 10", - "lss == 1", - "srr == S", - "srHook == SWAIT", - "srPoll == SFAIL" + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SWAIT", + "srPoll == SFAIL" ], "pHost": "pHostUp", "sHost": [ - "clone_state == DEMOTED", - "roles == master1::worker:", - "score ~ (-INFINITY|0)" + "clone_state == DEMOTED", + "roles == master1::worker:", + "score ~ (-INFINITY|0)" ] - }, - { + }, + { "step": "step40", "name": "end recover", "next": "step110", @@ -62,8 +62,8 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp" - }, - { + }, + { "step": "step110", "name": "test prerequitsites standby_prim_node", "next": "step120", @@ -74,8 +74,8 @@ "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp" - }, - { + }, + { "step": "step120", "name": "primary site: node is standby", "next": "step130", @@ -91,12 +91,12 @@ ], "pHost": "pHostDown", "sHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score ~ (100|145)" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step130", "name": "takeover on secondary", "next": "final140", @@ -104,22 +104,22 @@ "post": "opn", "wait": 2, "pSite": [ - "lss == 1", - "srr == P", - "lpt == 10", - "srHook == SWAIT", - "srPoll == SFAIL" + "lss == 1", + "srr == P", + "lpt == 10", + "srHook == SWAIT", + "srPoll == SFAIL" ], "sSite": "pSiteUp", "pHost": 
[ - "clone_state == UNDEFINED", - "roles == master1::worker:", - "score == 150", - "standby == on" + "clone_state == UNDEFINED", + "roles == master1::worker:", + "score == 150", + "standby == on" ], "sHost": "pHostUp" - }, - { + }, + { "step": "final140", "name": "end recover", "next": "END", @@ -133,6 +133,6 @@ "sHost": "pHostUp", "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/nop-false.json b/test/json/angi-ScaleOut/nop-false.json index 3d8385b0..620e0d86 100644 --- a/test/json/angi-ScaleOut/nop-false.json +++ b/test/json/angi-ScaleOut/nop-false.json @@ -3,7 +3,7 @@ "name": "no operation - check, wait and check again (stability check)", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "final40", @@ -11,16 +11,16 @@ "wait": 1, "post": "sleep 240", "global": [ - "topology=Nix" - ], + "topology=Nix" + ], "pSite": "pSiteUp", "sSite": "sSiteUp", "pHost": "pHostUp", "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - }, - { + }, + { "step": "final40", "name": "still running", "next": "END", @@ -32,6 +32,6 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/nop.json b/test/json/angi-ScaleOut/nop.json index 79d4264d..a87d22f6 100644 --- a/test/json/angi-ScaleOut/nop.json +++ b/test/json/angi-ScaleOut/nop.json @@ -3,7 +3,7 @@ "name": "no operation - check, wait and check again (stability check)", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "final40", @@ -17,8 +17,8 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - }, - { + }, + { "step": "final40", "name": "still running", "next": "END", @@ -30,6 +30,6 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/restart_cluster.json b/test/json/angi-ScaleOut/restart_cluster.json index aaedc7b4..71e78328 100644 
--- a/test/json/angi-ScaleOut/restart_cluster.json +++ b/test/json/angi-ScaleOut/restart_cluster.json @@ -3,7 +3,7 @@ "name": "stop and restart cluster and hana", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "final40", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -30,6 +30,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/restart_cluster_hana_running.json b/test/json/angi-ScaleOut/restart_cluster_hana_running.json index 445f7d1f..f48f0584 100644 --- a/test/json/angi-ScaleOut/restart_cluster_hana_running.json +++ b/test/json/angi-ScaleOut/restart_cluster_hana_running.json @@ -3,7 +3,7 @@ "name": "stop and restart cluster, keep hana running", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "final40", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -30,6 +30,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json index 1a84090b..9bec9239 100644 --- a/test/json/angi-ScaleOut/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut/restart_cluster_turn_hana.json @@ -3,7 +3,7 @@ "name": "stop cluster, turn hana, start cluster", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "final40", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -31,6 +31,6 @@ "sHost": "pHostUp", "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" - } + } ] } diff --git 
a/test/json/angi-ScaleOut/standby_prim_node.json b/test/json/angi-ScaleOut/standby_prim_node.json index 511c125c..b438e700 100644 --- a/test/json/angi-ScaleOut/standby_prim_node.json +++ b/test/json/angi-ScaleOut/standby_prim_node.json @@ -3,7 +3,7 @@ "name": "standby primary master node (and online again)", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -38,18 +38,18 @@ "srPoll == SOK" ], "pHost": [ - "clone_state == UNDEFINED", - "roles == master1::worker:", - "score == 150", - "standby == on" + "clone_state == UNDEFINED", + "roles == master1::worker:", + "score == 150", + "standby == on" ], "sHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score ~ (100|145)" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score ~ (100|145)" ] - }, - { + }, + { "step": "step30", "name": "takeover on secondary", "next": "final40", @@ -57,32 +57,32 @@ "post": "opn", "wait": 2, "pSite": [ - "lss == 1", - "srr == P", - "lpt == 10", - "srHook == SWAIT", - "srPoll == SFAIL" + "lss == 1", + "srr == P", + "lpt == 10", + "srHook == SWAIT", + "srPoll == SFAIL" ], "sSite": [ - "lpt > 1000000000", - "lss == 4", - "srr == P", - "srHook == PRIM", - "srPoll == PRIM" + "lpt > 1000000000", + "lss == 4", + "srr == P", + "srHook == PRIM", + "srPoll == PRIM" ], "pHost": [ - "clone_state == UNDEFINED", - "roles == master1::worker:", - "score == 150", - "standby == on" + "clone_state == UNDEFINED", + "roles == master1::worker:", + "score == 150", + "standby == on" ], "sHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ] - }, - { + }, + { "step": "final40", "name": "end recover", 
"next": "END", @@ -96,6 +96,6 @@ "sHost": "pHostUp", "pWorker": "sWorkerUp", "sWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/standby_secn_node.json b/test/json/angi-ScaleOut/standby_secn_node.json index 30acac27..d76426c3 100644 --- a/test/json/angi-ScaleOut/standby_secn_node.json +++ b/test/json/angi-ScaleOut/standby_secn_node.json @@ -3,7 +3,7 @@ "name": "standby secondary master node (and online again)", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -17,8 +17,8 @@ "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - }, - { + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -40,18 +40,18 @@ "srPoll == SFAIL" ], "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "sHost": [ - "clone_state == UNDEFINED", - "roles == master1::worker:", - "score == 100", - "standby == on" + "clone_state == UNDEFINED", + "roles == master1::worker:", + "score == 100", + "standby == on" ] - }, - { + }, + { "step": "step30", "name": "node back online", "next": "final40", @@ -59,31 +59,31 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss == 4", - "srr == P", - "lpt > 1000000000", - "srHook == PRIM", - "srPoll == PRIM" + "lss == 4", + "srr == P", + "lpt > 1000000000", + "srHook == PRIM", + "srPoll == PRIM" ], "sSite": [ - "lpt == 10", - "lss == 1", - "srr == S", - "srHook == SWAIT", - "srPoll == SFAIL" + "lpt == 10", + "lss == 1", + "srr == S", + "srHook == SWAIT", + "srPoll == SFAIL" ], "pHost": [ - "clone_state == PROMOTED", - "roles == master1:master:worker:master", - "score == 150" + "clone_state == PROMOTED", + "roles == master1:master:worker:master", + "score == 150" ], "sHost": [ - "clone_state == DEMOTED", - "roles == master1::worker:", - "score ~ (-INFINITY|0)" + "clone_state == DEMOTED", + "roles == 
master1::worker:", + "score ~ (-INFINITY|0)" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -96,6 +96,6 @@ "sHost": "sHostUp", "sWorker": "sWorkerUp", "pWorker": "pWorkerUp" - } + } ] } diff --git a/test/json/angi-ScaleOut/standby_secn_worker_node.json b/test/json/angi-ScaleOut/standby_secn_worker_node.json index 95faadcb..301a062d 100644 --- a/test/json/angi-ScaleOut/standby_secn_worker_node.json +++ b/test/json/angi-ScaleOut/standby_secn_worker_node.json @@ -3,7 +3,7 @@ "name": "standby secondary worker node (and online again)", "start": "prereq10", "steps": [ - { + { "step": "prereq10", "name": "test prerequitsites", "next": "step20", @@ -16,8 +16,8 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - }, - { + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -29,13 +29,13 @@ "pHost": "pHostUp", "sHost": "sHostUp", "sWorker": [ - "clone_state == UNDEFINED", - "roles == slave:slave:worker:slave", - "score == -12200", - "standby == on" + "clone_state == UNDEFINED", + "roles == slave:slave:worker:slave", + "score == -12200", + "standby == on" ] - }, - { + }, + { "step": "step30", "name": "node back online", "next": "final40", @@ -47,13 +47,13 @@ "pHost": "pHostUp", "sHost": "sHostUp", "sWorker": [ - "clone_state == DEMOTED", - "roles == slave:slave:worker:slave", - "score == -12200", - "standby == off" + "clone_state == DEMOTED", + "roles == slave:slave:worker:slave", + "score == -12200", + "standby == off" ] - }, - { + }, + { "step": "final40", "name": "end recover", "next": "END", @@ -66,6 +66,6 @@ "sHost": "sHostUp", "pWorker": "pWorkerUp", "sWorker": "sWorkerUp" - } + } ] } From f5da6b5262cc7998fdf0f656efbc2efe08fd7dbb Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 19:04:13 +0100 Subject: [PATCH 092/123] test/json/angi-ScaleOut-BW - fix indents (1,5,9,13,17) of lines --- .../block_manual_takeover.json | 60 +++---- .../defaults+newComparators.json | 78 ++++---- 
test/json/angi-ScaleOut-BW/defaults.json | 66 +++---- test/json/angi-ScaleOut-BW/free_log_area.json | 70 ++++---- .../kill_prim_indexserver.json | 128 ++++++------- .../json/angi-ScaleOut-BW/kill_prim_inst.json | 132 +++++++------- .../kill_prim_worker_inst.json | 132 +++++++------- .../kill_prim_worker_node.json | 126 ++++++------- .../kill_secn_indexserver.json | 126 ++++++------- .../json/angi-ScaleOut-BW/kill_secn_inst.json | 124 ++++++------- .../json/angi-ScaleOut-BW/kill_secn_node.json | 114 ++++++------ .../kill_secn_worker_inst.json | 124 ++++++------- .../kill_secn_worker_node.json | 120 ++++++------- .../maintenance_cluster_turn_hana.json | 50 +++--- .../maintenance_with_standby_nodes.json | 168 +++++++++--------- test/json/angi-ScaleOut-BW/nop-false.json | 42 ++--- test/json/angi-ScaleOut-BW/nop.json | 38 ++-- .../angi-ScaleOut-BW/restart_cluster.json | 48 ++--- .../restart_cluster_hana_running.json | 48 ++--- .../restart_cluster_turn_hana.json | 50 +++--- .../angi-ScaleOut-BW/standby_prim_node.json | 130 +++++++------- .../angi-ScaleOut-BW/standby_secn_node.json | 126 ++++++------- 22 files changed, 1050 insertions(+), 1050 deletions(-) diff --git a/test/json/angi-ScaleOut-BW/block_manual_takeover.json b/test/json/angi-ScaleOut-BW/block_manual_takeover.json index 4a66bcff..41863d25 100644 --- a/test/json/angi-ScaleOut-BW/block_manual_takeover.json +++ b/test/json/angi-ScaleOut-BW/block_manual_takeover.json @@ -3,40 +3,40 @@ "name": "blocked manual takeover", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "bmt", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 120", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + 
"name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "bmt", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 120", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/defaults+newComparators.json b/test/json/angi-ScaleOut-BW/defaults+newComparators.json index 44699c70..1287f744 100644 --- a/test/json/angi-ScaleOut-BW/defaults+newComparators.json +++ b/test/json/angi-ScaleOut-BW/defaults+newComparators.json @@ -1,66 +1,66 @@ { "checkPtr": { - "comparartorinline": [ + "comparartorinline": [ "alfa!=dassollungleichsein", "lpa_@@sid@@_lpt > 160000", "beta=dassollgleichsein" - ], - "comparatortuple": [ - ("noty", "alfa=ungleich"), - () - ], - "globalUp": [ - "topology=ScaleOut" - ], - "pHostUp": [ - "clone_state=PROMOTED", - "roles=master1:master:worker:master", - "score=150" - ], - "pSiteUp": [ + ], + "comparatortuple": [ + ("noty", "alfa=ungleich"), + () + ], + "globalUp": [ + "topology=ScaleOut" + ], + "pHostUp": [ + "clone_state=PROMOTED", + "roles=master1:master:worker:master", + "score=150" + ], + "pSiteUp": [ "lpt=1[6-9]........", "lss=4", "srr=P", "srHook=PRIM", "srPoll=PRIM" - ], - "sSiteUp": [ + ], + "sSiteUp": [ "lpt=30", "lss=4", - "srr=S", + "srr=S", "srHook=SOK", "srPoll=SOK" - ], - "sHostUp": [ - "clone_state=DEMOTED", - "roles=master1:master:worker:master", - "score=100" - ], - "pHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], + ], + "sHostUp": [ + 
"clone_state=DEMOTED", + "roles=master1:master:worker:master", + "score=100" + ], + "pHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], "pSiteDown": [ "lpt=1[6-9]........" , "lss=1" , "srr=P" , "srHook=PRIM" , "srPoll=PRIM" - ], - "sSiteDown": [ + ], + "sSiteDown": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "sHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] + ], + "sHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=100" , + "standby=on" + ] } } diff --git a/test/json/angi-ScaleOut-BW/defaults.json b/test/json/angi-ScaleOut-BW/defaults.json index aabce016..fb1840a6 100644 --- a/test/json/angi-ScaleOut-BW/defaults.json +++ b/test/json/angi-ScaleOut-BW/defaults.json @@ -2,58 +2,58 @@ "opMode": "logreplay", "srMode": "sync", "checkPtr": { - "globalUp": [ - "topology=ScaleOut" - ], - "pHostUp": [ - "clone_state=PROMOTED", - "roles=master1:master:worker:master", - "score=150" - ], - "pSiteUp": [ + "globalUp": [ + "topology=ScaleOut" + ], + "pHostUp": [ + "clone_state=PROMOTED", + "roles=master1:master:worker:master", + "score=150" + ], + "pSiteUp": [ "lpt=1[6-9]........", "lss=4", "srr=P", "srHook=PRIM", "srPoll=PRIM" - ], - "sSiteUp": [ + ], + "sSiteUp": [ "lpt=30", "lss=4", - "srr=S", + "srr=S", "srHook=SOK", "srPoll=SOK" - ], - "sHostUp": [ - "clone_state=DEMOTED", - "roles=master1:master:worker:master", - "score=100" - ], - "pHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], + ], + "sHostUp": [ + "clone_state=DEMOTED", + "roles=master1:master:worker:master", + "score=100" + ], + "pHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], "pSiteDown": [ "lpt=1[6-9]........" 
, "lss=1" , "srr=P" , "srHook=PRIM" , "srPoll=PRIM" - ], - "sSiteDown": [ + ], + "sSiteDown": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "sHostDown": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] + ], + "sHostDown": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=100" , + "standby=on" + ] } } diff --git a/test/json/angi-ScaleOut-BW/free_log_area.json b/test/json/angi-ScaleOut-BW/free_log_area.json index 3de2d553..f1708d42 100644 --- a/test/json/angi-ScaleOut-BW/free_log_area.json +++ b/test/json/angi-ScaleOut-BW/free_log_area.json @@ -3,40 +3,40 @@ "name": "free log area on primary", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "shell test_free_log_area", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step20", - "name": "still running", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 60", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "still running", - "next": "END", - "loop": 1, - "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "shell test_free_log_area", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step20", + "name": "still running", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 60", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "still running", + "next": "END", + "loop": 1, + "wait": 1, + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git 
a/test/json/angi-ScaleOut-BW/kill_prim_indexserver.json b/test/json/angi-ScaleOut-BW/kill_prim_indexserver.json index 3972ac33..e3f7bb18 100644 --- a/test/json/angi-ScaleOut-BW/kill_prim_indexserver.json +++ b/test/json/angi-ScaleOut-BW/kill_prim_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill primary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(90|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(90|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,43 +54,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - 
"srHook=PRIM", - "srPoll=SOK" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(90|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=SOK" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(90|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/kill_prim_inst.json b/test/json/angi-ScaleOut-BW/kill_prim_inst.json index c4c1fa98..b56e53e4 100644 --- a/test/json/angi-ScaleOut-BW/kill_prim_inst.json +++ b/test/json/angi-ScaleOut-BW/kill_prim_inst.json @@ -3,21 +3,21 @@ "name": "Kill primary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_inst", - "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", - "todo1": "allow something like lss>2, lpt>10000, score!=123", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_inst", + "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", + 
"todo1": "allow something like lss>2, lpt>10000, score!=123", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -29,26 +29,26 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(90|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(90|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -56,43 +56,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=SOK" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(90|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=SOK" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" 
, + "roles=master1::worker:" , + "score=(90|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/kill_prim_worker_inst.json b/test/json/angi-ScaleOut-BW/kill_prim_worker_inst.json index 38a258bf..25456e24 100644 --- a/test/json/angi-ScaleOut-BW/kill_prim_worker_inst.json +++ b/test/json/angi-ScaleOut-BW/kill_prim_worker_inst.json @@ -3,21 +3,21 @@ "name": "Kill primary worker instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_inst", - "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", - "todo1": "allow something like lss>2, lpt>10000, score!=123", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_inst", + "todo": "allow something like pSite=@@pSite@@ or pSite=%pSite", + "todo1": "allow something like lss>2, lpt>10000, score!=123", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -29,26 +29,26 @@ "lpt=(1[6-9]........|20)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": [ - "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , - "roles=master1::worker:" , - "score=(90|5|0)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - 
"roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(PROMOTED|DEMOTED|UNDEFINED)" , + "roles=master1::worker:" , + "score=(90|5|0)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -56,43 +56,43 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=1" , - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=PRIM" - ], + "lss=1" , + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=SOK" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:", - "score=(90|5)" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145)" , - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=SOK" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:", + "score=(90|5)" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145)" , + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/kill_prim_worker_node.json b/test/json/angi-ScaleOut-BW/kill_prim_worker_node.json index 
dc96bab2..36fc3c5a 100644 --- a/test/json/angi-ScaleOut-BW/kill_prim_worker_node.json +++ b/test/json/angi-ScaleOut-BW/kill_prim_worker_node.json @@ -3,19 +3,19 @@ "name": "Kill primary worker node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_prim_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_prim_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=(1[6-9]........|20|10)" , "srHook=(PRIM|SWAIT|SREG)" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(1[6-9]........|30)", "lss=4", "srr=(S|P)", "srHook=(PRIM|SOK)", "srPoll=(SOK|SFAIL)" - ], - "pHost": [ - "clone_state=(UNDEFINED|WAITING4NODES)" , - "roles=master1::worker:" , - "score=(70|5)" - ], - "sHost": [ - "clone_state=(PROMOTED|DEMOTED)", - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=(UNDEFINED|WAITING4NODES)" , + "roles=master1::worker:" , + "score=(70|5)" + ], + "sHost": [ + "clone_state=(PROMOTED|DEMOTED)", + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,42 +54,42 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=(1|2)", - "srr=P" , - "lpt=(1[6-9]........|30|20|10)" , - "srHook=(PRIM|SWAIT|SREG)" , - "srPoll=(PRIM|SFAIL)" - ], + "lss=(1|2)", + "srr=P" , + "lpt=(1[6-9]........|30|20|10)" , + "srHook=(PRIM|SWAIT|SREG)" , + "srPoll=(PRIM|SFAIL)" + ], "sSite": [ - "lpt=(1[6-9]........|30)", - "lss=4", - "srr=(S|P)", - "srHook=PRIM", - "srPoll=(SOK|PRIM)" - ], - "pHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , 
- "roles=master1::worker:" - ], - "sHost": [ - "clone_state=(DEMOTED|PROMOTED)" , - "roles=master1:master:worker:master" , - "score=(100|145|150)", - "srah=T" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 300, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=(1[6-9]........|30)", + "lss=4", + "srr=(S|P)", + "srHook=PRIM", + "srPoll=(SOK|PRIM)" + ], + "pHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" + ], + "sHost": [ + "clone_state=(DEMOTED|PROMOTED)" , + "roles=master1:master:worker:master" , + "score=(100|145|150)", + "srah=T" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 300, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/kill_secn_indexserver.json b/test/json/angi-ScaleOut-BW/kill_secn_indexserver.json index 9d14c833..4f500059 100644 --- a/test/json/angi-ScaleOut-BW/kill_secn_indexserver.json +++ b/test/json/angi-ScaleOut-BW/kill_secn_indexserver.json @@ -3,19 +3,19 @@ "name": "Kill secondary indexserver", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_indexserver", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_indexserver", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=1[6-9]........" 
, "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(10|30)", "lss=(1|2)", "srr=S", "srHook=SFAIL", "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,42 +54,42 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SFAIL", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sCCC to be the same as at test begin", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=1", + "srr=S", + "srHook=SFAIL", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sCCC to be the same as at test begin", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff 
--git a/test/json/angi-ScaleOut-BW/kill_secn_inst.json b/test/json/angi-ScaleOut-BW/kill_secn_inst.json index 53f2cfce..ac57eb18 100644 --- a/test/json/angi-ScaleOut-BW/kill_secn_inst.json +++ b/test/json/angi-ScaleOut-BW/kill_secn_inst.json @@ -3,19 +3,19 @@ "name": "Kill secondary instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(10|30)", "lss=(1|2)", "srr=S", "srHook=SFAIL", "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,41 +54,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" 
, + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/kill_secn_node.json b/test/json/angi-ScaleOut-BW/kill_secn_node.json index 3df7a74c..a5febca2 100644 --- a/test/json/angi-ScaleOut-BW/kill_secn_node.json +++ b/test/json/angi-ScaleOut-BW/kill_secn_node.json @@ -3,19 +3,19 @@ "name": "Kill secondary master node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,21 +27,21 @@ "lpt=1[6-9]........" 
, "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -49,41 +49,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/kill_secn_worker_inst.json b/test/json/angi-ScaleOut-BW/kill_secn_worker_inst.json index 4fdb3665..2516f033 100644 --- a/test/json/angi-ScaleOut-BW/kill_secn_worker_inst.json +++ b/test/json/angi-ScaleOut-BW/kill_secn_worker_inst.json @@ -3,19 +3,19 @@ 
"name": "Kill secondary worker instance", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_worker_inst", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_worker_inst", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,26 +27,26 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(10|30)", "lss=(1|2)", "srr=S", "srHook=SFAIL", "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -54,41 +54,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" 
, + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/kill_secn_worker_node.json b/test/json/angi-ScaleOut-BW/kill_secn_worker_node.json index f819f8fc..e0696383 100644 --- a/test/json/angi-ScaleOut-BW/kill_secn_worker_node.json +++ b/test/json/angi-ScaleOut-BW/kill_secn_worker_node.json @@ -3,19 +3,19 @@ "name": "Kill secondary worker node", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "kill_secn_worker_node", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "kill_secn_worker_node", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "failure detected", "next": "step30", @@ -27,24 
+27,24 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=WAITING4NODES" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=WAITING4NODES" + ] + }, + { "step": "step30", "name": "begin recover", "next": "final40", @@ -52,41 +52,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" , + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=(1|2)", - "srr=S", - "srHook=(SFAIL|SWAIT)", - "srPoll=(SFAIL|SOK)" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=(UNDEFINED|DEMOTED)" , - "roles=master1::worker:" , - "score=(-INFINITY|0|-1)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=(1|2)", + "srr=S", + "srHook=(SFAIL|SWAIT)", + "srPoll=(SFAIL|SOK)" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=(UNDEFINED|DEMOTED)" , + "roles=master1::worker:" , + "score=(-INFINITY|0|-1)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/maintenance_cluster_turn_hana.json b/test/json/angi-ScaleOut-BW/maintenance_cluster_turn_hana.json index 15aacfa3..cdf90e80 
100644 --- a/test/json/angi-ScaleOut-BW/maintenance_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut-BW/maintenance_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "maintenance_cluster_turn_hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_maintenance_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_maintenance_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/maintenance_with_standby_nodes.json b/test/json/angi-ScaleOut-BW/maintenance_with_standby_nodes.json index 07cbab5e..f9755425 100644 --- a/test/json/angi-ScaleOut-BW/maintenance_with_standby_nodes.json +++ b/test/json/angi-ScaleOut-BW/maintenance_with_standby_nodes.json @@ -5,19 +5,19 @@ "sid": "HA1", "mstResource": "ms_SAPHanaCon_HA1_HDB00", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites ssn", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites ssn", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + 
"pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "secondary site: node is standby", "next": "step30", @@ -26,10 +26,10 @@ "post": "osn", "pSite": "pSiteUp", "sSite": "sSiteDown", - "pHost": "pHostUp", - "sHost": "sHostDown" - }, - { + "pHost": "pHostUp", + "sHost": "sHostDown" + }, + { "step": "step30", "name": "secondary site: node back online", "next": "step40", @@ -38,43 +38,43 @@ "todo": "pHost+sHost to check site-name", "pSite": "pSiteUp", "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SWAIT", - "srPoll=SFAIL" - ], - "pHost": "pHostUp", - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { - "step": "step40", - "name": "end recover", - "next": "step110", + "lpt=10", + "lss=1", + "srr=S", + "srHook=SWAIT", + "srPoll=SFAIL" + ], + "pHost": "pHostUp", + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { + "step": "step40", + "name": "end recover", + "next": "step110", "loop": 120, "wait": 2, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "step110", - "name": "test prerequitsites spn", - "next": "step120", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "step110", + "name": "test prerequitsites spn", + "next": "step120", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step120", "name": "primary site: node is standby", "next": "step130", @@ -87,15 +87,15 @@ "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": "pHostDown", - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - 
{ + ], + "pHost": "pHostDown", + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step130", "name": "takeover on secondary", "next": "final140", @@ -103,33 +103,33 @@ "post": "opn", "wait": 2, "pSite": [ - "lss=1" , - "srr=P" , - "lpt=10" , - "srHook=SWAIT" , - "srPoll=SFAIL" + "lss=1" , + "srr=P" , + "lpt=10" , + "srHook=SWAIT" , + "srPoll=SFAIL" ], "sSite": "pSiteUp", "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" ], "sHost": "pHostUp" - }, - { - "step": "final140", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + }, + { + "step": "final140", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/nop-false.json b/test/json/angi-ScaleOut-BW/nop-false.json index a3c5ad48..46924104 100644 --- a/test/json/angi-ScaleOut-BW/nop-false.json +++ b/test/json/angi-ScaleOut-BW/nop-false.json @@ -3,31 +3,31 @@ "name": "no operation - check, wait and check again (stability check)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "global": [ - "topology=Nix" - ], - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "global": [ + "topology=Nix" + ], + "pSite": "pSiteUp", + "sSite": "sSiteUp", + 
"pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/nop.json b/test/json/angi-ScaleOut-BW/nop.json index e3d9826e..31d4111e 100644 --- a/test/json/angi-ScaleOut-BW/nop.json +++ b/test/json/angi-ScaleOut-BW/nop.json @@ -3,29 +3,29 @@ "name": "no operation - check, wait and check again (stability check)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "sleep 240", - "global": "globalUp", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "sleep 240", + "global": "globalUp", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "final40", "name": "still running", "next": "END", "loop": 1, "wait": 1, - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/restart_cluster.json b/test/json/angi-ScaleOut-BW/restart_cluster.json index 7018fdd2..c59f8e20 100644 --- a/test/json/angi-ScaleOut-BW/restart_cluster.json +++ b/test/json/angi-ScaleOut-BW/restart_cluster.json @@ -3,29 +3,29 @@ "name": "restart_cluster", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - 
"name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/restart_cluster_hana_running.json b/test/json/angi-ScaleOut-BW/restart_cluster_hana_running.json index b7a20ca7..25ebf149 100644 --- a/test/json/angi-ScaleOut-BW/restart_cluster_hana_running.json +++ b/test/json/angi-ScaleOut-BW/restart_cluster_hana_running.json @@ -3,29 +3,29 @@ "name": "restart_cluster_hana_running", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster_hana_running", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster_hana_running", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } diff --git 
a/test/json/angi-ScaleOut-BW/restart_cluster_turn_hana.json b/test/json/angi-ScaleOut-BW/restart_cluster_turn_hana.json index cd398f38..fc1a482a 100644 --- a/test/json/angi-ScaleOut-BW/restart_cluster_turn_hana.json +++ b/test/json/angi-ScaleOut-BW/restart_cluster_turn_hana.json @@ -3,30 +3,30 @@ "name": "restart_cluster_turn_hana", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "final40", - "loop": 1, - "wait": 1, - "post": "shell test_restart_cluster_turn_hana", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "remark": "pXXX and sXXX are now exchanged", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "final40", + "loop": 1, + "wait": 1, + "post": "shell test_restart_cluster_turn_hana", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "remark": "pXXX and sXXX are now exchanged", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/standby_prim_node.json b/test/json/angi-ScaleOut-BW/standby_prim_node.json index 765654ad..1e047088 100644 --- a/test/json/angi-ScaleOut-BW/standby_prim_node.json +++ b/test/json/angi-ScaleOut-BW/standby_prim_node.json @@ -3,19 +3,19 @@ "name": "standby primary node (and online again)", "start": "prereq10", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "spn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test 
prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "spn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -27,27 +27,27 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=(30|1[6-9]........)", "lss=4", "srr=S", "srHook=(PRIM|SOK)", "srPoll=SOK" - ], - "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=(100|145)" - ] - }, - { + ], + "pHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=(100|145)" + ] + }, + { "step": "step30", "name": "takeover on secondary", "next": "final40", @@ -55,43 +55,43 @@ "post": "opn", "wait": 2, "pSite": [ - "lss=1" , - "srr=P" , - "lpt=10" , - "srHook=SWAIT" , - "srPoll=SFAIL" - ], + "lss=1" , + "srr=P" , + "lpt=10" , + "srHook=SWAIT" , + "srPoll=SFAIL" + ], "sSite": [ - "lpt=1[6-9]........", - "lss=4", - "srr=P", - "srHook=PRIM", - "srPoll=PRIM" - ], - "pHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=150" , - "standby=on" - ], - "sHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "todo": "allow pointer to prereq10", - "pSite": "sSiteUp", - "sSite": "pSiteUp", - "pHost": "sHostUp", - "sHost": "pHostUp" - } + "lpt=1[6-9]........", + "lss=4", + "srr=P", + "srHook=PRIM", + "srPoll=PRIM" + ], + "pHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=150" , + "standby=on" + ], + "sHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ] + }, + { + 
"step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "todo": "allow pointer to prereq10", + "pSite": "sSiteUp", + "sSite": "pSiteUp", + "pHost": "sHostUp", + "sHost": "pHostUp" + } ] } diff --git a/test/json/angi-ScaleOut-BW/standby_secn_node.json b/test/json/angi-ScaleOut-BW/standby_secn_node.json index 686e3063..ae59404c 100644 --- a/test/json/angi-ScaleOut-BW/standby_secn_node.json +++ b/test/json/angi-ScaleOut-BW/standby_secn_node.json @@ -5,19 +5,19 @@ "sid": "HA1", "mstResource": "ms_SAPHanaCon_HA1_HDB00", "steps": [ - { - "step": "prereq10", - "name": "test prerequitsites", - "next": "step20", - "loop": 1, - "wait": 1, - "post": "ssn", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - }, - { + { + "step": "prereq10", + "name": "test prerequitsites", + "next": "step20", + "loop": 1, + "wait": 1, + "post": "ssn", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + }, + { "step": "step20", "name": "node is standby", "next": "step30", @@ -30,27 +30,27 @@ "lpt=1[6-9]........" , "srHook=PRIM" , "srPoll=PRIM" - ], + ], "sSite": [ "lpt=10", "lss=1", "srr=S", "srHook=SFAIL", "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=UNDEFINED" , - "roles=master1::worker:" , - "score=100" , - "standby=on" - ] - }, - { + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=UNDEFINED" , + "roles=master1::worker:" , + "score=100" , + "standby=on" + ] + }, + { "step": "step30", "name": "node back online", "next": "final40", @@ -58,41 +58,41 @@ "wait": 2, "todo": "pHost+sHost to check site-name", "pSite": [ - "lss=4" , - "srr=P" , - "lpt=1[6-9]........" , - "srHook=PRIM" , - "srPoll=PRIM" - ], + "lss=4" , + "srr=P" , + "lpt=1[6-9]........" 
, + "srHook=PRIM" , + "srPoll=PRIM" + ], "sSite": [ - "lpt=10", - "lss=1", - "srr=S", - "srHook=SWAIT", - "srPoll=SFAIL" - ], - "pHost": [ - "clone_state=PROMOTED" , - "roles=master1:master:worker:master" , - "score=150" - ], - "sHost": [ - "clone_state=DEMOTED" , - "roles=master1::worker:" , - "score=(-INFINITY|0)" - ] - }, - { - "step": "final40", - "name": "end recover", - "next": "END", - "loop": 120, - "wait": 2, - "post": "cleanup", - "pSite": "pSiteUp", - "sSite": "sSiteUp", - "pHost": "pHostUp", - "sHost": "sHostUp" - } + "lpt=10", + "lss=1", + "srr=S", + "srHook=SWAIT", + "srPoll=SFAIL" + ], + "pHost": [ + "clone_state=PROMOTED" , + "roles=master1:master:worker:master" , + "score=150" + ], + "sHost": [ + "clone_state=DEMOTED" , + "roles=master1::worker:" , + "score=(-INFINITY|0)" + ] + }, + { + "step": "final40", + "name": "end recover", + "next": "END", + "loop": 120, + "wait": 2, + "post": "cleanup", + "pSite": "pSiteUp", + "sSite": "sSiteUp", + "pHost": "pHostUp", + "sHost": "sHostUp" + } ] } From c7428f326a377c4514c89f8929611a42bb2b4b8a Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 19:06:06 +0100 Subject: [PATCH 093/123] fix_indent - take parameter --- test/fix_indent | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/test/fix_indent b/test/fix_indent index 27408956..4bc71530 100644 --- a/test/fix_indent +++ b/test/fix_indent @@ -1,5 +1,5 @@ -file=restart_cluster.json +file="$1" awk '{ p=0 } /^ {3}[^ ]/ { p=1 } From bfb2638be39ab7705e40693398a0aa7805cf9895 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Tue, 16 Jan 2024 19:07:37 +0100 Subject: [PATCH 094/123] fix_indent.txt - notes how2use fix_indent --- test/fix_indent.txt | 6 ++++++ 1 file changed, 6 insertions(+) create mode 100644 test/fix_indent.txt diff --git a/test/fix_indent.txt b/test/fix_indent.txt new file mode 100644 index 00000000..c61355c0 --- /dev/null +++ b/test/fix_indent.txt @@ -0,0 +1,6 @@ + + 1334 for j in *json; do bash ../../fix_indent 
"$j" ; done + 1335 for f in *json; do mv "$f".ident "$f"; done + 1336 git status + 1337 git pull && git commit -m "test/json/angi-ScaleOut-BW - fix indents (1,5,9,13,17) of lines" . && git push + From 713daed9b78c48adabb526d0c1542168c7e04411 Mon Sep 17 00:00:00 2001 From: Fabian Herschel Date: Wed, 17 Jan 2024 13:11:46 +0100 Subject: [PATCH 095/123] angi: saphana-controller-common-lib - trigger systemd to start new sapstartsrv, if UDS authentication is broken --- ra/saphana-controller-common-lib | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/ra/saphana-controller-common-lib b/ra/saphana-controller-common-lib index 88a08bf5..286037cc 100755 --- a/ra/saphana-controller-common-lib +++ b/ra/saphana-controller-common-lib @@ -447,6 +447,11 @@ function check_sapstartsrv() { super_ocf_log info "ACT: systemd service $systemd_unit_name is active and Unix Domain Socket authentication is working." else super_ocf_log info "ACT: systemd service $systemd_unit_name is active but Unix Domain Socket authentication is NOT working." + "$SYSTEMCTL" kill --kill-who=main --signal=9 "$systemd_unit_name" + "$SYSTEMCTL" is-active --quiet "$systemd_unit_name"; src=$? + if [[ "$src" -ne 0 ]]; then + "$SYSTEMCTL" start "$systemd_unit_name" >/dev/null 2>&1; src=$? + fi fi else super_ocf_log warn "ACT: systemd service $systemd_unit_name is not active, it will be started using systemd" From ec13f8013246e33c02a358cf503f66e93382195f Mon Sep 17 00:00:00 2001 From: lpinne Date: Wed, 17 Jan 2024 13:26:59 +0100 Subject: [PATCH 096/123] SAPHanaSR-tests-basic-cluster.7: update status --- man-tester/SAPHanaSR-tests-basic-cluster.7 | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/man-tester/SAPHanaSR-tests-basic-cluster.7 b/man-tester/SAPHanaSR-tests-basic-cluster.7 index f8d63c8c..1cd9f603 100644 --- a/man-tester/SAPHanaSR-tests-basic-cluster.7 +++ b/man-tester/SAPHanaSR-tests-basic-cluster.7 @@ -23,11 +23,23 @@ The checks outlined here are by no means complete. 
Neverteless, at least this checks should be performed before doing functional tests. .PP +\fB*\fR Checking access to update channels and pending patches. +.PP +The accessible update channels are shown. Also pending patches are listed. +Do this on all Linux cluster nodes. +See manual page zypper(8). +.PP +.RS 2 +# zypper lr +.br +# zypper lp +.RE +.PP \fB*\fR Checking systemd integration and OS settings for SAP HANA. .PP Basic OS features time and name resolution as well as HANA disk space are checked. Integration of SAP hostagent and SAP HANA database with systemd are -checked. OS settings for HANA are checked. +checked. OS settings for HANA are checked. Do this on all Linux cluster nodes. Cluster nodes are "node1" and "node2", HANA SID is "HA1", instance number is "85". Do this on all Linux cluster nodes. See manual page chronyc(8) and saptune(8). @@ -47,6 +59,7 @@ See manual page chronyc(8) and saptune(8). \fB*\fR Checking for errors in the system logs. .PP Known error patterns are looked up in the system log. +Do this on all Linux cluster nodes. See manual page cs_show_error_patterns(8). 
.PP .RS 2 From c13e638062c04026217c938e00620e3183e5a81e Mon Sep 17 00:00:00 2001 From: lpinne Date: Wed, 17 Jan 2024 14:18:01 +0100 Subject: [PATCH 097/123] SAPHanaSR-tests-syntax.5: details --- man-tester/SAPHanaSR-tests-syntax.5 | 29 ++++++++++++++++++++++------- 1 file changed, 22 insertions(+), 7 deletions(-) diff --git a/man-tester/SAPHanaSR-tests-syntax.5 b/man-tester/SAPHanaSR-tests-syntax.5 index 7a544ecb..4190151e 100644 --- a/man-tester/SAPHanaSR-tests-syntax.5 +++ b/man-tester/SAPHanaSR-tests-syntax.5 @@ -42,19 +42,29 @@ globalDown not yet implemented .TP pHostUp -host p (expected to be primary before the test starts) is up +host p (expected to be primary before the test starts) is up (scale-out: expected primary master node) +.TP +pWorkerUp +worker node p is up (scale-out) .TP pSiteUp -site p is up +site p is up .TP sSiteUp site s (expected to be secondary before the test starts) is up .TP sHostUp -host s is up +host s is up (scale-out: master node) +.TP +sWorkerUp +worker node s is up (scale-out) .TP pHostDown -host p is down +host p is down (scale-out: master node) +.TP +pWorkerDown +worker node p is down (scale-out) +not yet implemented .TP pSiteDown site p is down @@ -63,7 +73,11 @@ sSiteDown site s is down .TP sHostDown -host s is down +host s is down (scale-out: master node) +.TP +sWorkerDown +worker node s is down (scale-out) +not yet implemented .PP Note: Prefixes "s" and "p" are statically indicating geographical sites, as seen at the beginning of a test. If a takeover happens during that test, the @@ -124,7 +138,8 @@ TODO, the string "None" TODO .TP post -action after step, [ bmt | cleanup | spn | ssn | osn | script