diff --git a/xml/article_geo_clustering.xml b/xml/article_geo_clustering.xml
index 998256c1..5979f54c 100644
--- a/xml/article_geo_clustering.xml
+++ b/xml/article_geo_clustering.xml
@@ -239,7 +239,7 @@
 patterns. These patterns are only available after the &productname; is installed.
- You can register to the &scc; and install the &hasi; while installing &sles;,
+ You can register to the &scc; and install &productname; while installing &sles;,
 or after installation. For more information, see &deploy; for &sles;.
@@ -259,7 +259,7 @@ Installing software packages on all nodes
- For an automated installation of &sls; &productnumber; and the &hasi;,
+ For an automated installation of &sls; &productnumber; and &productname; &productnumber;,
 use &ay; to clone existing nodes. For more information, see .
diff --git a/xml/article_installation.xml b/xml/article_installation.xml
index 3eac0238..2ea80427 100644
--- a/xml/article_installation.xml
+++ b/xml/article_installation.xml
@@ -302,8 +302,8 @@
 This pattern is only available after the &productname; is installed.
- You can register with the &scc; and install the &hasi; while installing &sles;,
- or after installation. For more information, see the
+ You can register to the &scc; and install &productname; while installing &sles;,
+ or after installation. For more information, see the
 &deploy; for &sles;.
@@ -322,8 +322,8 @@ Installing software packages on all nodes
- For an automated installation of &sls; &productnumber; and the &hasi;,
- use &ay; to clone existing nodes. For more
+ For an automated installation of &sls; &productnumber; and &productname; &productnumber;,
+ use &ay; to clone existing nodes. For more
 information, see .
@@ -336,7 +336,7 @@
 Before you can configure SBD with the bootstrap script, you must
 enable a watchdog on each node. &sls; ships with several kernel
 modules that provide hardware-specific
- watchdog drivers. The &hasi; uses the SBD daemon as the software component
+ watchdog drivers. &productname; uses the SBD daemon as the software component
 that feeds the watchdog.
diff --git a/xml/ha_concepts.xml b/xml/ha_concepts.xml
index 4a9cd901..247e7c4b 100644
--- a/xml/ha_concepts.xml
+++ b/xml/ha_concepts.xml
@@ -24,8 +24,8 @@
 (load balancing) of individually managed cluster resources).
- This chapter introduces the main product features and benefits of the
- &hasi;. Inside you will find several example clusters and learn about
+ This chapter introduces the main product features and benefits of &productname;.
+ Inside you will find several example clusters and learn about
 the components making up a cluster. The last section provides an overview
 of the architecture, describing the individual architecture layers and
 processes within the cluster.
@@ -47,10 +47,11 @@
- Availability as extension
+ Availability as module or extension
- The &hasi; is available as an extension to &sls; &productnumber;.
+ &ha; is available as a module or extension for several products. For details, see
+ .
@@ -64,7 +65,7 @@ Wide range of clustering scenarios
- The &hasi; supports the following scenarios:
+ &productname; supports the following scenarios:
@@ -122,11 +123,11 @@ Flexibility
- The &hasi; ships with &corosync; messaging and membership layer
+ &productname; ships with &corosync; messaging and membership layer
 and Pacemaker Cluster Resource Manager. Using Pacemaker, administrators
 can continually monitor the health and status of their resources, and manage dependencies.
 They can automatically stop and start services based on highly
- configurable rules and policies. The &hasi; allows you to tailor a
+ configurable rules and policies. &productname; allows you to tailor a
 cluster to the specific applications and hardware infrastructure that
 fit your organization. Time-dependent configuration enables services to
 automatically migrate back to repaired nodes at specified times.
@@ -136,7 +137,7 @@ Storage and data replication
- With the &hasi; you can dynamically assign and reassign server
+ With &productname; you can dynamically assign and reassign server
 storage as needed. It supports Fibre Channel or iSCSI storage area
 networks (SANs). Shared disk systems are also supported, but they are
 not a requirement. &productname; also comes with a cluster-aware file
@@ -155,8 +156,8 @@
 virtual Linux servers. &sls; &productnumber; ships with &xen;, an open
 source virtualization hypervisor, and with &kvm; (Kernel-based Virtual
 Machine). &kvm; is a virtualization software for Linux which is based on
- hardware virtualization extensions. The cluster resource manager in the
- &hasi; can recognize, monitor, and manage services running within
+ hardware virtualization extensions. The cluster resource manager in &productname;
+ can recognize, monitor, and manage services running within
 virtual servers and services running in physical servers. Guest systems
 can be managed as services by the cluster.
@@ -270,7 +271,7 @@ User-friendly administration tools
- The &hasi; ships with a set of powerful tools. Use them for basic installation
+ &productname; ships with a set of powerful tools. Use them for basic installation
 and setup of your cluster and for effective configuration and
 administration:
@@ -280,7 +281,7 @@
 A graphical user interface for general system installation and
- administration. Use it to install the &hasi; on top of &sls; as
+ administration. Use it to install &productname; on top of &sls; as
 described in the &haquick;. &yast; also provides the following modules
 in the &ha; category to help configure your cluster or individual
 components:
@@ -337,7 +338,7 @@ Benefits
- The &hasi; allows you to configure up to 32 Linux servers into a
+ &productname; allows you to configure up to 32 Linux servers into a
 high-availability cluster (HA cluster). Resources can be dynamically
 switched or moved to any node in the cluster. Resources can be
 configured to automatically migrate if a node fails, or they can be
@@ -345,9 +346,9 @@
- The &hasi; provides high availability from commodity components. Lower
+ &productname; provides high availability from commodity components. Lower
 costs are obtained through the consolidation of applications and
- operations onto a cluster. The &hasi; also allows you to centrally
+ operations onto a cluster. &productname; also allows you to centrally
 manage the complete cluster. You can adjust resources to meet changing
 workload requirements (thus, manually load balance the cluster).
 Allowing clusters of more than two nodes also provides savings
@@ -413,7 +414,7 @@
- The following scenario illustrates some benefits the &hasi; can
+ The following scenario illustrates some benefits &productname; can
 provide.
@@ -480,7 +481,7 @@
- When Web Server 1 failed, the &hasi; software did the following:
+ When Web Server 1 failed, the &ha; software did the following:
@@ -523,13 +524,13 @@
 either automatically fail back (move back) to Web Server 1, or they
 can stay where they are. This depends on how you configured the
 resources for them.
 Migrating the services back to Web Server 1 will incur some
- down-time. Therefore the &hasi; also allows you to defer the migration until
+ down-time. Therefore &productname; also allows you to defer the migration until
 a period when it will cause little or no service interruption. There
 are advantages and disadvantages to both alternatives.
- The &hasi; also provides resource migration capabilities. You can move
+ &productname; also provides resource migration capabilities. You can move
 applications, Web sites, etc. to other servers in your cluster as
 required for system management.
@@ -545,7 +546,7 @@ Cluster configurations: storage
- Cluster configurations with the &hasi; might or might not include a
+ Cluster configurations with &productname; might or might not include a
 shared disk subsystem. The shared disk subsystem can be connected via
 high-speed Fibre Channel cards, cables, and switches, or it can be
 configured to use iSCSI. If a node fails, another designated node in
@@ -627,7 +628,7 @@ Architecture
- This section provides a brief overview of the &hasi; architecture. It
+ This section provides a brief overview of &productname; architecture. It
 identifies and provides information on the architectural components,
 and describes how those components interoperate.
@@ -635,7 +636,7 @@ Architecture layers
- The &hasi; has a layered architecture.
+ &productname; has a layered architecture.
 illustrates the different layers and their associated components.
diff --git a/xml/ha_config_basics.xml b/xml/ha_config_basics.xml
index 55159a98..906b2bb8 100644
--- a/xml/ha_config_basics.xml
+++ b/xml/ha_config_basics.xml
@@ -21,7 +21,7 @@
 This chapter introduces some basic concepts you need to know when
 administering your cluster. The following chapters show you how to
 execute the main configuration and
- administration tasks with each of the management tools the &hasi;
+ administration tasks with each of the management tools &productname;
 provides.
@@ -407,8 +407,8 @@ C = number of cluster nodes
- Home page of Pacemaker, the cluster resource manager shipped with the
- &hasi;.
+ Home page of Pacemaker, the cluster resource manager shipped with
+ &productname;.
diff --git a/xml/ha_configuring_resources.xml b/xml/ha_configuring_resources.xml
index 42e9e575..9289443c 100644
--- a/xml/ha_configuring_resources.xml
+++ b/xml/ha_configuring_resources.xml
@@ -80,8 +80,8 @@
 start, stop or monitor command.
- Typically, resource agents come in the form of shell scripts. The
- &hasi; supports the following classes of resource agents:
+ Typically, resource agents come in the form of shell scripts. &productname;
+ supports the following classes of resource agents:
@@ -198,7 +198,7 @@
- The agents supplied with the &hasi; are written to OCF
+ The agents supplied with &productname; are written to OCF
 specifications.
@@ -310,7 +310,7 @@
 If a resource has specific environment requirements, make sure they are
 present and identical on all cluster nodes. This kind of configuration
- is not managed by the &hasi;. You must do this yourself.
+ is not managed by &productname;. You must do this yourself.
 You can create primitive resources using either &hawk2; or &crmsh;.
@@ -318,9 +318,9 @@
 Do not touch services managed by the cluster
- When managing a resource with the &hasi;, the same resource must not
+ When managing a resource with &productname;, the same resource must not
 be started or stopped otherwise (outside of the cluster, for example
- manually or on boot or reboot). The &hasi; software is responsible
+ manually or on boot or reboot). The &ha; software is responsible
 for all service start or stop actions.
diff --git a/xml/ha_drbd.xml b/xml/ha_drbd.xml
index 6e1b2acf..e642f8ae 100644
--- a/xml/ha_drbd.xml
+++ b/xml/ha_drbd.xml
@@ -153,9 +153,9 @@ NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
 Installing DRBD services
- Install the &hasi; on both &sls; machines in your networked
+ Install the &ha; pattern on both &sls; machines in your networked
 cluster as described in . Installing
- &hasi; also installs the DRBD program files.
+ the pattern also installs the DRBD program files.
@@ -677,8 +677,8 @@ r0 role:Primary
 Migrating from DRBD 8 to DRBD 9
- Between DRBD 8 (shipped with &sle; &hasi; 12 SP1) and
- DRBD 9 (shipped with &sle; &hasi; 12 SP2), the metadata format
+ Between DRBD 8 (shipped with &productname; 12 SP1) and
+ DRBD 9 (shipped with &productname; 12 SP2), the metadata format
 has changed. DRBD 9 does not automatically convert previous metadata
 files to the new format.
diff --git a/xml/ha_example_gui_i.xml b/xml/ha_example_gui_i.xml
index 9c8a21a4..11fb2d2c 100644
--- a/xml/ha_example_gui_i.xml
+++ b/xml/ha_example_gui_i.xml
@@ -107,7 +107,7 @@
 The name and value are dependent on your hardware configuration and
 what you chose for the media configuration during the installation of
- the &hasi; software.
+ the &ha; software.
diff --git a/xml/ha_fencing.xml b/xml/ha_fencing.xml
index 899dc68f..45443b01 100644
--- a/xml/ha_fencing.xml
+++ b/xml/ha_fencing.xml
@@ -138,7 +138,7 @@
 In a Pacemaker cluster, the implementation of node level fencing is &stonith;
- (Shoot The Other Node in the Head). The &hasi;
+ (Shoot The Other Node in the Head). &productname;
 includes the stonith command line tool, an extensible
 interface for remotely powering down a node in the cluster. For an
 overview of the available options, run stonith --help
@@ -150,7 +150,7 @@ &stonith; devices
 To use node level fencing, you first need to have a fencing device. To
- get a list of &stonith; devices which are supported by the &hasi;, run
+ get a list of &stonith; devices which are supported by &productname;, run
 one of the following commands on any of the nodes:
 &prompt.root;stonith -L
diff --git a/xml/ha_glossary.xml b/xml/ha_glossary.xml
index e48c2b94..cad15bfe 100644
--- a/xml/ha_glossary.xml
+++ b/xml/ha_glossary.xml
@@ -188,7 +188,7 @@
 The management entity responsible for coordinating all non-local
- interactions in a &ha; cluster. The &hasi; uses Pacemaker as CRM.
+ interactions in a &ha; cluster. &productname; uses Pacemaker as CRM.
 The CRM is implemented as pacemaker-controld.
 It interacts with several components: local resource managers, both on
 its own node and on the other nodes,
@@ -450,7 +450,7 @@
 performance will be met during a contractual measurement period.
 A script acting as a proxy to manage a resource (for example, to start,
- stop, or monitor a resource). The &hasi; supports different
+ stop, or monitor a resource). &productname; supports different
 kinds of resource agents. For details, see
 .
diff --git a/xml/ha_install.xml b/xml/ha_install.xml
index a2706bc8..806464d1 100644
--- a/xml/ha_install.xml
+++ b/xml/ha_install.xml
@@ -10,7 +10,7 @@
 xmlns:xlink="http://www.w3.org/1999/xlink"
 version="5.0" xml:id="cha-ha-install">
- Installing the &hasi;
+ Installing &productname;
 If you are setting up a &ha; cluster with
 &productnamereg; for the first time, the
@@ -40,7 +40,7 @@ Manual installation
- For the manual installation of the packages for &hasi; refer to
+ For the manual installation of the packages for &ha; refer to
 . It leads you through the setup of a basic two-node cluster.
diff --git a/xml/ha_lb_lvs.xml b/xml/ha_lb_lvs.xml
index 6e572015..01c4fac8 100644
--- a/xml/ha_lb_lvs.xml
+++ b/xml/ha_lb_lvs.xml
@@ -17,7 +17,7 @@
 The following sections give an overview of the main LVS components and
- concepts. Then we explain how to set up &lvs; on &hasi;.
+ concepts. Then we explain how to set up &lvs; on &productname;.
diff --git a/xml/ha_loadbalancing.xml b/xml/ha_loadbalancing.xml
index 23b747f2..988f9952 100644
--- a/xml/ha_loadbalancing.xml
+++ b/xml/ha_loadbalancing.xml
@@ -31,7 +31,7 @@ toms 2014-05-27:
 one large, fast server to outside clients. This apparent single server
 is called a virtual server. It consists of one or more load balancers
 dispatching incoming requests and several real servers
- running the actual services. With a load balancing setup of &hasi;, you
+ running the actual services. With a load balancing setup of &productname;, you
 can build highly scalable and highly available network services, such
 as Web, cache, mail, FTP, media and VoIP services.
@@ -41,7 +41,7 @@ toms 2014-05-27:
 Conceptual overview
- &hasi; supports two technologies for load balancing: &lvs; (LVS) and
+ &productname; supports two technologies for load balancing: &lvs; (LVS) and
 &haproxy;. The key difference is &lvs; operates at OSI layer 4
 (Transport), configuring the network layer of kernel, while &haproxy;
 operates at layer 7 (Application), running in user space. Thus &lvs;
diff --git a/xml/ha_maintenance.xml b/xml/ha_maintenance.xml
index 5d630a5a..e4548b35 100644
--- a/xml/ha_maintenance.xml
+++ b/xml/ha_maintenance.xml
@@ -297,7 +297,7 @@
 attribute.
- From this point on, &hasi; will take over cluster management again.
+ From this point on, &ha; will take over cluster management again.
@@ -550,7 +550,7 @@ Node &node2;: standby
 that resource.
- From this point on, the resource will be managed by the &hasi; software
+ From this point on, the resource will be managed by the &ha; software
 again.
@@ -611,7 +611,7 @@ Node &node2;: standby
 default value) and apply your changes.
- From this point on, the resource will be managed by the &hasi; software
+ From this point on, the resource will be managed by the &ha; software
 again.
@@ -626,7 +626,7 @@ Node &node2;: standby
 If the cluster or a node is in maintenance mode, you can use tools
 external to the cluster stack (for example, systemctl)
 to manually operate the components that are managed by the cluster as resources.
- The &hasi; will not monitor them or attempt to restart them.
+ The &ha; software will not monitor them or attempt to restart them.
 If you stop the cluster services on a node, all daemons and processes
diff --git a/xml/ha_management.xml b/xml/ha_management.xml
index 7779ee8d..8d1d2fb7 100644
--- a/xml/ha_management.xml
+++ b/xml/ha_management.xml
@@ -44,7 +44,7 @@ from the book, except for a general overview-->
- &hasi; ships with a comprehensive set of tools to assists you in
+ &productname; ships with a comprehensive set of tools to assist you in
 managing your cluster from the command line. This chapter introduces the
 tools needed for managing the cluster configuration in the CIB and the
 cluster resources. Other command line tools for managing resource agents
@@ -125,7 +125,7 @@
 database (CIB) for consistency and other problems. It can check a file
 containing the configuration or connect to a running cluster. It
 reports two classes of problems. Errors must be fixed before the
- &hasi; can work properly while warning resolution is up to the
+ &ha; software can work properly while warning resolution is up to the
 administrator. crm_verify assists in creating new or
 modified configurations. You can take a local copy of a CIB in the
 running cluster, edit it, validate it using
diff --git a/xml/ha_managing_resources.xml b/xml/ha_managing_resources.xml
index 993ed8f1..9d41e7a1 100644
--- a/xml/ha_managing_resources.xml
+++ b/xml/ha_managing_resources.xml
@@ -246,14 +246,14 @@ primitive admin_addr IPaddr2 \
 Do not touch services managed by the cluster
- When managing a resource via the &hasi;, the resource must not be started
+ When managing a resource via the &ha; software, the resource must not be started
 or stopped otherwise (outside the cluster, for example manually or on
- boot or reboot). The &hasi; software is responsible for all service start
+ boot or reboot). The &ha; software is responsible for all service start
 or stop actions.
 However, if you want to check if the service is configured properly, start
- it manually, but make sure that it is stopped again before the &hasi; takes
+ it manually, but make sure that it is stopped again before the &ha; software takes
 over.
diff --git a/xml/ha_migration.xml b/xml/ha_migration.xml
index 3b3e3c84..ef338b4c 100644
--- a/xml/ha_migration.xml
+++ b/xml/ha_migration.xml
@@ -131,7 +131,7 @@
- The &hasi; has the same supported upgrade paths as the underlying base system. For a complete
+ &productname; has the same supported upgrade paths as the underlying base system. For a complete
 overview, see the section Supported Upgrade Paths to
 &sls; &productnumber; in the
@@ -810,7 +810,7 @@ Upgrading from product version 11 to 12: cluster offline upgrade
- The &hasi; 12 cluster stack comes with major changes in various
+ &productname; 12 cluster stack comes with major changes in various
 components (for example, &corosync.conf;, disk formats of
 OCFS2). Therefore, a cluster rolling upgrade from any
 &productname; 11 version is not supported. Instead, all cluster nodes must be offline
diff --git a/xml/ha_monitoring_clusters.xml b/xml/ha_monitoring_clusters.xml
index a553f453..b3264723 100644
--- a/xml/ha_monitoring_clusters.xml
+++ b/xml/ha_monitoring_clusters.xml
@@ -33,7 +33,7 @@ Monitoring system health with the <literal>SysInfo</literal> resource agent
 To prevent a node from running out of disk space and thus being unable to
- manage any resources that have been assigned to it, the &hasi;
+ manage any resources that have been assigned to it, &productname;
 provides a resource agent, ocf:pacemaker:SysInfo.
 Use it to monitor a node's health with regard to disk partitions.
diff --git a/xml/ha_ocfs2.xml b/xml/ha_ocfs2.xml
index 67d621f0..3d114685 100644
--- a/xml/ha_ocfs2.xml
+++ b/xml/ha_ocfs2.xml
@@ -141,7 +141,7 @@
 The &ocfs; Kernel module (ocfs2) is installed
- automatically in the &hasi; on &slsreg; &productnumber;. To use
+ automatically in &productname; &productnumber;. To use
 &ocfs;, make sure the following packages are installed on each node in
 the cluster: ocfs2-tools and the matching
 ocfs2-kmp-*
diff --git a/xml/ha_remote_hosts.xml b/xml/ha_remote_hosts.xml
index bd69ecb1..831a9796 100644
--- a/xml/ha_remote_hosts.xml
+++ b/xml/ha_remote_hosts.xml
@@ -43,13 +43,13 @@
 By providing support for monitoring plug-ins (formerly named Nagios
- plug-ins), the &hasi; now also allows you to monitor services on
+ plug-ins), &productname; now also allows you to monitor services on
 remote hosts. You can collect external statuses on the guests without
 modifying the guest image. For example, VM guests might run Web services
 or simple network resources that need to be accessible. With the Nagios
 resource agents, you can now monitor the Web service or the network
 resource on the guest. If these services are not reachable anymore,
- the &hasi; triggers a restart or migration of the respective guest.
+ &productname; triggers a restart or migration of the respective guest.
 If your guests depend on a service (for example, an NFS server to be
@@ -161,7 +161,7 @@ group g-vm1-and-services vm1 vm1-sshd \
 not need to run the cluster stack to become members of the cluster.
- The &hasi; can now launch virtual environments (KVM and LXC), plus
+ &productname; can now launch virtual environments (KVM and LXC), plus
 the resources that live within those virtual environments without
 requiring the virtual environments to run &pace; or &corosync;.
@@ -173,8 +173,8 @@
- The normal (bare-metal) cluster nodes run the
- &hasi;.
+ The normal (bare-metal) cluster nodes run
+ &productname;.
diff --git a/xml/ha_resource_constraints.xml b/xml/ha_resource_constraints.xml
index 8088496d..f21f6eb9 100644
--- a/xml/ha_resource_constraints.xml
+++ b/xml/ha_resource_constraints.xml
@@ -1080,7 +1080,7 @@
 the resources diminish in performance (or even fail).
- To take this into account, the &hasi; allows you to specify the
+ To take this into account, &productname; allows you to specify the
 following parameters:
@@ -1126,7 +1126,7 @@
 If multiple resources with utilization attributes are grouped or have
- colocation constraints, the &hasi; takes that into account. If
+ colocation constraints, &productname; takes that into account. If
 possible, the resources is placed on a node that can fulfill all
 capacity requirements.
@@ -1140,7 +1140,7 @@
- The &hasi; also provides the means to detect and configure both node
+ &productname; also provides the means to detect and configure both node
 capacity and resource requirements automatically:
diff --git a/xml/ha_storage_protection.xml b/xml/ha_storage_protection.xml
index 291ae84b..e59320b6 100644
--- a/xml/ha_storage_protection.xml
+++ b/xml/ha_storage_protection.xml
@@ -381,7 +381,7 @@ stonith-timeout = Timeout (msgwait) + 20%
 be used as kernel watchdog module.
- The &hasi; uses the SBD daemon as the software component that feeds
+ &productname; uses the SBD daemon as the software component that feeds
 the watchdog.
diff --git a/xml/ha_troubleshooting.xml b/xml/ha_troubleshooting.xml
index c136be09..c05ec07c 100644
--- a/xml/ha_troubleshooting.xml
+++ b/xml/ha_troubleshooting.xml
@@ -45,11 +45,10 @@
 The packages needed for configuring and managing a cluster are
 included in the High Availability installation
- pattern, available with the &hasi;.
+ pattern, available with &productname;.
- Check if &hasi; is installed as an extension to &sls;
- &productnumber; on each of the cluster nodes and if the
+ Check if &productname; is installed on each of the cluster nodes and if the
 High Availability pattern is installed on
 each of the machines as described in the &haquick;.
diff --git a/xml/ha_yast_cluster.xml b/xml/ha_yast_cluster.xml
index cedc7c42..91d7c6c1 100644
--- a/xml/ha_yast_cluster.xml
+++ b/xml/ha_yast_cluster.xml
@@ -599,7 +599,7 @@
 alternatively over the available networks.
- When RRP is used, the &hasi; monitors the status of the current
+ When RRP is used, &productname; monitors the status of the current
 rings and automatically re-enables redundant rings after faults.
 Alternatively, check the ring status manually with
 corosync-cfgtool. View the available options with
diff --git a/xml/phrases-decl.ent b/xml/phrases-decl.ent
index 2a11150a..391d0990 100644
--- a/xml/phrases-decl.ent
+++ b/xml/phrases-decl.ent
@@ -80,7 +80,7 @@
 "Allow interaction with the in-kernel connection
 tracking system for enabling stateful packet
- inspection for iptables. Used by the &hasi; to synchronize the connection
+ inspection for iptables. Used by &productname; to synchronize the connection
 status between cluster nodes.">