diff --git a/xml/article_geo_clustering.xml b/xml/article_geo_clustering.xml index c5202e04..ac36cdf3 100644 --- a/xml/article_geo_clustering.xml +++ b/xml/article_geo_clustering.xml @@ -239,7 +239,7 @@ patterns. These patterns are only available after the &productname; is installed. - You can register to the &scc; and install the &hasi; while installing &sles;, + You can register to the &scc; and install &productname; while installing &sles;, or after installation. For more information, see &deploy; for &sles;. @@ -259,7 +259,7 @@ Installing software packages on all nodes - For an automated installation of &sls; &productnumber; and the &hasi;, + For an automated installation of &sls; &productnumber; and &productname; &productnumber;, use &ay; to clone existing nodes. For more information, see . diff --git a/xml/article_installation.xml b/xml/article_installation.xml index f4f7d5df..9dcd2ac4 100644 --- a/xml/article_installation.xml +++ b/xml/article_installation.xml @@ -306,8 +306,8 @@ This pattern is only available after the &productname; is installed. - You can register to the &scc; and install the &hasi; while installing &sles;, - or after installation. For more information, see the + You can register to the &scc; and install &productname; while installing &sles;, + or after installation. For more information, see the &deploy; for &sles;. @@ -326,8 +326,8 @@ Installing software packages on all nodes - For an automated installation of &sls; &productnumber; and the &hasi;, - use &ay; to clone existing nodes. For more + For an automated installation of &sls; &productnumber; and &productname; &productnumber;, + use &ay; to clone existing nodes. For more information, see . @@ -340,7 +340,7 @@ Before you can configure SBD with the bootstrap script, you must enable a watchdog on each node. &sls; ships with several kernel modules that provide hardware-specific - watchdog drivers. The &hasi; uses the SBD daemon as the software component + watchdog drivers. &productname; uses the SBD daemon as the software component that feeds the watchdog. diff --git a/xml/ha_concepts.xml b/xml/ha_concepts.xml index aa3f0ff3..78da4a5c 100644 --- a/xml/ha_concepts.xml +++ b/xml/ha_concepts.xml @@ -24,8 +24,8 @@ (load balancing) of individually managed cluster resources). - This chapter introduces the main product features and benefits of the - &hasi;. Inside you will find several example clusters and learn about + This chapter introduces the main product features and benefits of &productname;. + Inside you will find several example clusters and learn about the components making up a cluster. The last section provides an overview of the architecture, describing the individual architecture layers and processes within the cluster. @@ -47,10 +47,11 @@ - Availability as extension + Availability as a module or extension - The &hasi; is available as an extension to &sls; &productnumber;. + &ha; is available as a module or extension for several products. For details, see + . @@ -65,7 +66,7 @@ Wide range of clustering scenarios - The &hasi; supports the following scenarios: + &productname; supports the following scenarios: @@ -123,11 +124,11 @@ Flexibility - The &hasi; ships with &corosync; messaging and membership layer + &productname; ships with &corosync; messaging and membership layer and Pacemaker Cluster Resource Manager. Using Pacemaker, administrators can continually monitor the health and status of their resources, and manage dependencies. 
They can automatically stop and start services based on highly - configurable rules and policies. The &hasi; allows you to tailor a + configurable rules and policies. &productname; allows you to tailor a cluster to the specific applications and hardware infrastructure that fit your organization. Time-dependent configuration enables services to automatically migrate back to repaired nodes at specified times. @@ -137,7 +138,7 @@ Storage and data replication - With the &hasi; you can dynamically assign and reassign server + With &productname; you can dynamically assign and reassign server storage as needed. It supports Fibre Channel or iSCSI storage area networks (SANs). Shared disk systems are also supported, but they are not a requirement. &productname; also comes with a cluster-aware file @@ -156,8 +157,8 @@ virtual Linux servers. &sls; &productnumber; ships with &xen;, an open source virtualization hypervisor, and with &kvm; (Kernel-based Virtual Machine). &kvm; is a virtualization software for Linux which is based on - hardware virtualization extensions. The cluster resource manager in the - &hasi; can recognize, monitor, and manage services running within + hardware virtualization extensions. The cluster resource manager in &productname; + can recognize, monitor, and manage services running within virtual servers and services running in physical servers. Guest systems can be managed as services by the cluster. @@ -270,7 +271,7 @@ User-friendly administration tools - The &hasi; ships with a set of powerful tools. Use them for basic installation + &productname; ships with a set of powerful tools. Use them for basic installation and setup of your cluster and for effective configuration and administration: @@ -280,7 +281,7 @@ A graphical user interface for general system installation and - administration. Use it to install the &hasi; on top of &sls; as + administration. Use it to install &productname; on top of &sls; as described in the &haquick;. &yast; also provides the following modules in the &ha; category to help configure your cluster or individual components: @@ -337,7 +338,7 @@ Benefits - The &hasi; allows you to configure up to 32 Linux servers into a + &productname; allows you to configure up to 32 Linux servers into a high-availability cluster (HA cluster). Resources can be dynamically switched or moved to any node in the cluster. Resources can be configured to automatically migrate if a node fails, or they can be @@ -345,9 +346,9 @@ - The &hasi; provides high availability from commodity components. Lower + &productname; provides high availability from commodity components. Lower costs are obtained through the consolidation of applications and - operations onto a cluster. The &hasi; also allows you to centrally + operations onto a cluster. &productname; also allows you to centrally manage the complete cluster. You can adjust resources to meet changing workload requirements (thus, manually load balance the cluster). Allowing clusters of more than two nodes also provides savings @@ -413,7 +414,7 @@ - The following scenario illustrates some benefits the &hasi; can + The following scenario illustrates some benefits &productname; can provide. @@ -480,7 +481,7 @@ - When Web Server 1 failed, the &hasi; software did the following: + When Web Server 1 failed, the &ha; software did the following: @@ -523,13 +524,13 @@ either automatically fail back (move back) to Web Server 1, or they can stay where they are. This depends on how you configured the resources for them. 
Migrating the services back to Web Server 1 will incur some - down-time. Therefore the &hasi; also allows you to defer the migration until + down-time. Therefore &productname; also allows you to defer the migration until a period when it will cause little or no service interruption. There are advantages and disadvantages to both alternatives. - The &hasi; also provides resource migration capabilities. You can move + &productname; also provides resource migration capabilities. You can move applications, Web sites, etc. to other servers in your cluster as required for system management. @@ -545,7 +546,7 @@ Cluster configurations: storage - Cluster configurations with the &hasi; might or might not include a + Cluster configurations with &productname; might or might not include a shared disk subsystem. The shared disk subsystem can be connected via high-speed Fibre Channel cards, cables, and switches, or it can be configured to use iSCSI. If a node fails, another designated node in @@ -627,7 +628,7 @@ Architecture - This section provides a brief overview of the &hasi; architecture. It + This section provides a brief overview of &productname; architecture. It identifies and provides information on the architectural components, and describes how those components interoperate. @@ -635,7 +636,7 @@ Architecture layers - The &hasi; has a layered architecture. + &productname; has a layered architecture. illustrates the different layers and their associated components. diff --git a/xml/ha_config_basics.xml b/xml/ha_config_basics.xml index a0f56a7a..a67233d2 100644 --- a/xml/ha_config_basics.xml +++ b/xml/ha_config_basics.xml @@ -21,7 +21,7 @@ This chapter introduces some basic concepts you need to know when administering your cluster. The following chapters show you how to execute the main configuration and - administration tasks with each of the management tools the &hasi; + administration tasks with each of the management tools &productname; provides. @@ -407,8 +407,8 @@ C = number of cluster nodes - Home page of Pacemaker, the cluster resource manager shipped with the - &hasi;. + Home page of Pacemaker, the cluster resource manager shipped with + &productname;. diff --git a/xml/ha_configuring_resources.xml b/xml/ha_configuring_resources.xml index 9430f343..f010362f 100644 --- a/xml/ha_configuring_resources.xml +++ b/xml/ha_configuring_resources.xml @@ -69,140 +69,139 @@ - - Supported resource agent classes - - For each cluster resource you add, you need to define the standard that - the resource agent conforms to. Resource agents abstract the services - they provide and present an accurate status to the cluster, which allows - the cluster to be non-committal about the resources it manages. The - cluster relies on the resource agent to react appropriately when given a - start, stop or monitor command. - - - Typically, resource agents come in the form of shell scripts. The - &hasi; supports the following classes of resource agents: - - - - Open Cluster Framework (OCF) resource agents - - - OCF RA agents are best suited for use with &ha;, especially when - you need promotable clone resources or special monitoring abilities. The - agents are generally located in - /usr/lib/ocf/resource.d/provider/. - Their functionality is similar to that of LSB scripts. However, the - configuration is always done with environmental variables which allow - them to accept and process parameters easily. - OCF specifications have strict definitions of which exit codes must - be returned by actions, see . 
The - cluster follows these specifications exactly. - - - All OCF Resource Agents are required to have at least the actions - start, stop, - status, monitor, and - meta-data. The meta-data action - retrieves information about how to configure the agent. For example, - to know more about the IPaddr agent by - the provider heartbeat, use the following command: - + + Supported resource agent classes + + For each cluster resource you add, you need to define the standard that + the resource agent conforms to. Resource agents abstract the services + they provide and present an accurate status to the cluster, which allows + the cluster to be non-committal about the resources it manages. The + cluster relies on the resource agent to react appropriately when given a + start, stop or monitor command. + + + Typically, resource agents come in the form of shell scripts. &productname; + supports the following classes of resource agents: + + + + Open Cluster Framework (OCF) resource agents + + + OCF RA agents are best suited for use with &ha;, especially when + you need promotable clone resources or special monitoring abilities. The + agents are generally located in + /usr/lib/ocf/resource.d/provider/. + Their functionality is similar to that of LSB scripts. However, the + configuration is always done with environmental variables that allow + them to accept and process parameters easily. + OCF specifications have strict definitions of which exit codes must + be returned by actions. See . The + cluster follows these specifications exactly. + + + All OCF Resource Agents are required to have at least the actions + start, stop, + status, monitor and + meta-data. The meta-data action + retrieves information about how to configure the agent. For example, + to know more about the IPaddr agent by + the provider heartbeat, use the following command: + OCF_ROOT=/usr/lib/ocf /usr/lib/ocf/resource.d/heartbeat/IPaddr meta-data - - The output is information in XML format, including several sections - (general description, available parameters, available actions for the - agent). - - - Alternatively, use the &crmsh; to view information on OCF resource - agents. For details, see . - - - - - Linux Standards Base (LSB) scripts - - - LSB resource agents are generally provided by the operating - system/distribution and are found in - /etc/init.d. To be used with the cluster, they - must conform to the LSB init script specification. For example, they - must have several actions implemented, which are, at minimum, - start, stop, - restart, reload, - force-reload, and status. For - more information, see - . - - - The configuration of those services is not standardized. If you - intend to use an LSB script with &ha;, make sure that you - understand how the relevant script is configured. Often you can find - information about this in the documentation of the relevant package - in - /usr/share/doc/packages/PACKAGENAME. - - - - - Systemd - - - Starting with &sle; 12, systemd is a replacement for the popular - System V init daemon. Pacemaker can manage systemd services if they - are present. Instead of init scripts, systemd has unit files. - Generally the services (or unit files) are provided by the operating - system. In case you want to convert existing init scripts, find more - information at - . - - - - - Service - - - There are currently many common types of system - services that exist in parallel: LSB (belonging to - System V init), systemd, and (in some - distributions) upstart. 
Therefore, Pacemaker - supports a special alias which intelligently figures out which one - applies to a given cluster node. This is particularly useful when the - cluster contains a mix of systemd, upstart, and LSB services. - Pacemaker will try to find the named service in the following order: - as an LSB (SYS-V) init script, a systemd unit file, or an Upstart - job. - - - - - Nagios - - - Monitoring plug-ins (formerly called Nagios plug-ins) allow to - monitor services on remote hosts. Pacemaker can do remote monitoring - with the monitoring plug-ins if they are present. For detailed - information, see - . - - - - - &stonith; (fencing) resource agents - - - This class is used exclusively for fencing related resources. For - more information, see . - - - - - - The agents supplied with the &hasi; are written to OCF - specifications. - - + + The output is information in XML format, including several sections + (general description, available parameters, available actions for the + agent). + + + Alternatively, use the &crmsh; to view information on OCF resource + agents. For details, see . + + + + + Linux Standards Base (LSB) scripts + + + LSB resource agents are generally provided by the operating + system/distribution and are found in + /etc/init.d. To be used with the cluster, they + must conform to the LSB init script specification. For example, they + must have several actions implemented, which are, at minimum, + start, stop, + restart, reload, + force-reload and status. For + more information, see + . + + + The configuration of those services is not standardized. If you + intend to use an LSB script with &ha;, make sure that you + understand how the relevant script is configured. You can often find + information about this in the documentation of the relevant package + in + /usr/share/doc/packages/PACKAGENAME. + + + + + systemd + + + &pace; can manage systemd services if they + are present. Instead of init scripts, systemd has unit files. + Generally, the services (or unit files) are provided by the operating + system. In case you want to convert existing init scripts, find more + information at + . + + + + + Service + + + There are currently many types of system + services that exist in parallel: LSB (belonging to + System V init), systemd and (in some + distributions) upstart. Therefore, &pace; + supports a special alias that figures out which one + applies to a given cluster node. This is particularly useful when the + cluster contains a mix of systemd, upstart and LSB services. + &pace; tries to find the named service in the following order: + as an LSB (SYS-V) init script, a systemd unit file or an Upstart + job. + + + + + Nagios + + + Monitoring plug-ins (formerly called Nagios plug-ins) allow to + monitor services on remote hosts. &pace; can do remote monitoring + with the monitoring plug-ins if they are present. For detailed + information, see + . + + + + + &stonith; (fencing) resource agents + + + This class is used exclusively for fencing related resources. For + more information, see . + + + + + + The agents supplied with &productname; are written to OCF + specifications. + + Timeout values @@ -311,7 +310,7 @@ If a resource has specific environment requirements, make sure they are present and identical on all cluster nodes. This kind of configuration - is not managed by the &hasi;. You must do this yourself. + is not managed by &productname;. You must do this yourself. You can create primitive resources using either &hawk2; or &crmsh;. 
Do not touch services managed by the cluster - When managing a resource with the &hasi;, the same resource must not + When managing a resource with &productname;, the same resource must not be started or stopped otherwise (outside of the cluster, for example - manually or on boot or reboot). The &hasi; software is responsible + manually or on boot or reboot). The &ha; software is responsible for all service start or stop actions. diff --git a/xml/ha_drbd.xml b/xml/ha_drbd.xml index 77fd50b5..06202d28 100644 --- a/xml/ha_drbd.xml +++ b/xml/ha_drbd.xml @@ -156,9 +156,9 @@ NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE Installing DRBD services - Install the &hasi; on both &sls; machines in your networked + Install the &ha; pattern on both &sls; machines in your networked cluster as described in . Installing - &hasi; also installs the DRBD program files. + the pattern also installs the DRBD program files. @@ -786,8 +786,8 @@ r0 role:Primary Migrating from DRBD 8 to DRBD 9 - Between DRBD 8 (shipped with &sle; &hasi; 12 SP1) and - DRBD 9 (shipped with &sle; &hasi; 12 SP2), the metadata format + Between DRBD 8 (shipped with &productname; 12 SP1) and + DRBD 9 (shipped with &productname; 12 SP2), the metadata format has changed. DRBD 9 does not automatically convert previous metadata files to the new format. diff --git a/xml/ha_example_gui_i.xml b/xml/ha_example_gui_i.xml index 9c8a21a4..11fb2d2c 100644 --- a/xml/ha_example_gui_i.xml +++ b/xml/ha_example_gui_i.xml @@ -107,7 +107,7 @@ The name and value are dependent on your hardware configuration and what you chose for the media configuration during the installation of - the &hasi; software. + the &ha; software. diff --git a/xml/ha_fencing.xml b/xml/ha_fencing.xml index 0b4b3302..d2255c06 100644 --- a/xml/ha_fencing.xml +++ b/xml/ha_fencing.xml @@ -138,7 +138,7 @@ In a Pacemaker cluster, the implementation of node level fencing is &stonith; - (Shoot The Other Node in the Head). The &hasi; + (Shoot The Other Node in the Head). &productname; includes the stonith command line tool, an extensible interface for remotely powering down a node in the cluster. For an overview of the available options, run stonith --help @@ -150,7 +150,7 @@ &stonith; devices To use node level fencing, you first need to have a fencing device. To - get a list of &stonith; devices which are supported by the &hasi;, run + get a list of &stonith; devices which are supported by &productname;, run one of the following commands on any of the nodes: &prompt.root;stonith -L diff --git a/xml/ha_glossary.xml b/xml/ha_glossary.xml index c9cfa39c..3a606e37 100644 --- a/xml/ha_glossary.xml +++ b/xml/ha_glossary.xml @@ -188,7 +188,7 @@ The management entity responsible for coordinating all non-local - interactions in a &ha; cluster. The &hasi; uses Pacemaker as CRM. + interactions in a &ha; cluster. &productname; uses Pacemaker as CRM. The CRM is implemented as pacemaker-controld. It interacts with several components: local resource managers, both on its own node and on the other nodes, @@ -450,8 +450,8 @@ performance will be met during a contractual measurement period. A script acting as a proxy to manage a resource (for example, to start, - stop or monitor a resource). The &hasi; supports different - kinds of resource agents: For details, see + stop, or monitor a resource). &productname; supports different + kinds of resource agents. For details, see .
diff --git a/xml/ha_install.xml b/xml/ha_install.xml index 2e481aff..c5cc32cd 100644 --- a/xml/ha_install.xml +++ b/xml/ha_install.xml @@ -10,7 +10,7 @@ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="cha-ha-install"> - Installing the &hasi; + Installing &productname; If you are setting up a &ha; cluster with &productnamereg; for the first time, the @@ -40,7 +40,7 @@ Manual installation - For the manual installation of the packages for &hasi; refer to + For the manual installation of the packages for &ha;, refer to . It leads you through the setup of a basic two-node cluster. diff --git a/xml/ha_lb_lvs.xml b/xml/ha_lb_lvs.xml index b2a3314f..2c4b6a7d 100644 --- a/xml/ha_lb_lvs.xml +++ b/xml/ha_lb_lvs.xml @@ -17,7 +17,7 @@ The following sections give an overview of the main LVS components and - concepts. Then we explain how to set up &lvs; on &hasi;. + concepts. Then we explain how to set up &lvs; on &productname;. diff --git a/xml/ha_loadbalancing.xml b/xml/ha_loadbalancing.xml index 205c5e5e..f933ccfa 100644 --- a/xml/ha_loadbalancing.xml +++ b/xml/ha_loadbalancing.xml @@ -31,7 +31,7 @@ toms 2014-05-27: one large, fast server to outside clients. This apparent single server is called a virtual server. It consists of one or more load balancers dispatching incoming requests and several real servers - running the actual services. With a load balancing setup of &hasi;, you + running the actual services. With a load balancing setup of &productname;, you can build highly scalable and highly available network services, such as Web, cache, mail, FTP, media and VoIP services. @@ -41,7 +41,7 @@ Conceptual overview - &hasi; supports two technologies for load balancing: &lvs; (LVS) and + &productname; supports two technologies for load balancing: &lvs; (LVS) and &haproxy;. The key difference is &lvs; operates at OSI layer 4 (Transport), configuring the network layer of kernel, while &haproxy; operates at layer 7 (Application), running in user space. Thus &lvs; diff --git a/xml/ha_maintenance.xml b/xml/ha_maintenance.xml index 512d837a..86652916 100644 --- a/xml/ha_maintenance.xml +++ b/xml/ha_maintenance.xml @@ -285,7 +285,7 @@ attribute. - From this point on, &hasi; will take over cluster management again. + From this point on, the &ha; software will take over cluster management again. @@ -516,7 +516,7 @@ Node &node2;: standby that resource. - From this point on, the resource will be managed by the &hasi; software + From this point on, the resource will be managed by the &ha; software again. @@ -577,7 +577,7 @@ Node &node2;: standby default value) and apply your changes. - From this point on, the resource will be managed by the &hasi; software + From this point on, the resource will be managed by the &ha; software again. @@ -592,7 +592,7 @@ Node &node2;: standby If the cluster or a node is in maintenance mode, you can use tools external to the cluster stack (for example, systemctl) to manually operate the components that are managed by the cluster as resources. - The &hasi; will not monitor them or attempt to restart them. + The &ha; software will not monitor them or attempt to restart them.
If you stop the cluster services on a node, all daemons and processes diff --git a/xml/ha_management.xml b/xml/ha_management.xml index 7779ee8d..8d1d2fb7 100644 --- a/xml/ha_management.xml +++ b/xml/ha_management.xml @@ -44,7 +44,7 @@ from the book, except for a general overview--> - &hasi; ships with a comprehensive set of tools to assists you in + &productname; ships with a comprehensive set of tools to assist you in managing your cluster from the command line. This chapter introduces the tools needed for managing the cluster configuration in the CIB and the cluster resources. Other command line tools for managing resource agents @@ -125,7 +125,7 @@ from the book, except for a general overview--> database (CIB) for consistency and other problems. It can check a file containing the configuration or connect to a running cluster. It reports two classes of problems. Errors must be fixed before the - &hasi; can work properly while warning resolution is up to the + &ha; software can work properly, while warning resolution is up to the administrator. crm_verify assists in creating new or modified configurations. You can take a local copy of a CIB in the running cluster, edit it, validate it using diff --git a/xml/ha_managing_resources.xml b/xml/ha_managing_resources.xml index 6e3f7ad6..9414254f 100644 --- a/xml/ha_managing_resources.xml +++ b/xml/ha_managing_resources.xml @@ -255,14 +255,14 @@ primitive admin_addr IPaddr2 \ Do not touch services managed by the cluster - When managing a resource via the &hasi;, the resource must not be started - or stopped otherwise (outside of the cluster, for example manually or on - boot or reboot). The &hasi; software is responsible for all service start + When managing a resource via the &ha; software, the resource must not be started + or stopped otherwise (outside the cluster, for example manually or on + boot or reboot). The &ha; software is responsible for all service start or stop actions. However, if you want to check if the service is configured properly, start - it manually, but make sure that it is stopped again before the &hasi; takes + it manually, but make sure that it is stopped again before the &ha; software takes over. diff --git a/xml/ha_migration.xml b/xml/ha_migration.xml index 93a823b4..04c5ed0e 100644 --- a/xml/ha_migration.xml +++ b/xml/ha_migration.xml @@ -134,7 +134,7 @@ - The &hasi; has the same supported upgrade paths as the underlying base system. For a complete + &productname; has the same supported upgrade paths as the underlying base system. For a complete overview, see the section Supported Upgrade Paths to &sls; &productnumber; in the @@ -753,7 +753,7 @@ Upgrading from product version 11 to 12: cluster offline upgrade - The &hasi; 12 cluster stack comes with major changes in various + The &productname; 12 cluster stack comes with major changes in various components (for example, &corosync.conf;, disk formats of OCFS2). Therefore, a cluster rolling upgrade from any &productname; 11 version is not supported. Instead, all cluster nodes must be offline diff --git a/xml/ha_monitoring_clusters.xml b/xml/ha_monitoring_clusters.xml index 0db1367e..07688188 100644 --- a/xml/ha_monitoring_clusters.xml +++ b/xml/ha_monitoring_clusters.xml @@ -33,7 +33,7 @@ Monitoring system health To prevent a node from running out of disk space and thus being unable to - manage any resources that have been assigned to it, the &hasi; + manage any resources that have been assigned to it, &productname; provides a resource agent, ocf:pacemaker:SysInfo.
Use it to monitor a node's health with regard to disk partitions. diff --git a/xml/ha_ocfs2.xml b/xml/ha_ocfs2.xml index c75779ed..a2ebefc6 100644 --- a/xml/ha_ocfs2.xml +++ b/xml/ha_ocfs2.xml @@ -141,7 +141,7 @@ The &ocfs; Kernel module (ocfs2) is installed - automatically in the &hasi; on &slsreg; &productnumber;. To use + automatically in &productname; &productnumber;. To use &ocfs;, make sure the following packages are installed on each node in the cluster: ocfs2-tools and the matching ocfs2-kmp-* diff --git a/xml/ha_remote_hosts.xml b/xml/ha_remote_hosts.xml index e9135f52..072c2d29 100644 --- a/xml/ha_remote_hosts.xml +++ b/xml/ha_remote_hosts.xml @@ -43,14 +43,13 @@ By providing support for monitoring plug-ins (formerly named Nagios - plug-ins), the &hasi; now also allows you to monitor services on + plug-ins), &productname; now also allows you to monitor services on remote hosts. You can collect external statuses on the guests without modifying the guest image. For example, VM guests might run Web services or simple network resources that need to be accessible. With the Nagios resource agents, you can now monitor the Web service or the network - resource on the guest. In case these services are not reachable anymore, - the &hasi; will trigger a restart or migration of the respective - guest. + resource on the guest. If these services are not reachable anymore, + &productname; triggers a restart or migration of the respective guest. If your guests depend on a service (for example, an NFS server to be @@ -162,7 +161,7 @@ group g-vm1-and-services vm1 vm1-sshd \ not need to run the cluster stack to become members of the cluster. - The &hasi; can now launch virtual environments (KVM and LXC), plus + &productname; can now launch virtual environments (KVM and LXC), plus the resources that live within those virtual environments without requiring the virtual environments to run &pace; or &corosync;. @@ -174,8 +173,8 @@ - The normal (bare-metal) cluster nodes run the - &hasi;. + The normal (bare-metal) cluster nodes run + &productname;. diff --git a/xml/ha_resource_constraints.xml b/xml/ha_resource_constraints.xml index d4fea29c..c237888a 100644 --- a/xml/ha_resource_constraints.xml +++ b/xml/ha_resource_constraints.xml @@ -1083,7 +1083,7 @@ the resources diminish in performance (or even fail). - To take this into account, the &hasi; allows you to specify the + To take this into account, &productname; allows you to specify the following parameters: @@ -1121,7 +1121,7 @@ A node is considered eligible for a resource if it has sufficient free capacity to satisfy the resource's requirements. The nature of the - capacities is completely irrelevant for the &hasi;; it only makes + capacities is completely irrelevant for the &ha; software; it only makes sure that all capacity requirements of a resource are satisfied before moving a resource to a node. @@ -1134,8 +1134,8 @@ If multiple resources with utilization attributes are grouped or have - colocation constraints, the &hasi; takes that into account. If - possible, the resources will be placed on a node that can fulfill + colocation constraints, &productname; takes that into account. If + possible, the resources are placed on a node that can fulfill all capacity requirements.
@@ -1148,7 +1148,7 @@ - The &hasi; also provides means to detect and configure both node + &productname; also provides the means to detect and configure both node capacity and resource requirements automatically: @@ -1180,7 +1180,7 @@ - Apart from detecting the minimal requirements, the &hasi; also allows + Apart from detecting the minimal requirements, the &ha; software also allows you to monitor the current utilization via the VirtualDomain resource agent. It detects CPU and RAM use of the virtual machine. To use this feature, configure a diff --git a/xml/ha_storage_protection.xml b/xml/ha_storage_protection.xml index 8f45fc3f..f837abd8 100644 --- a/xml/ha_storage_protection.xml +++ b/xml/ha_storage_protection.xml @@ -389,7 +389,7 @@ stonith-timeout = Timeout (msgwait) + 20% watchdog module. - The &hasi; uses the SBD daemon as the software component that feeds + &productname; uses the SBD daemon as the software component that feeds the watchdog. diff --git a/xml/ha_troubleshooting.xml b/xml/ha_troubleshooting.xml index 2b0ba64e..34b41a77 100644 --- a/xml/ha_troubleshooting.xml +++ b/xml/ha_troubleshooting.xml @@ -45,11 +45,10 @@ The packages needed for configuring and managing a cluster are included in the High Availability installation - pattern, available with the &hasi;. + pattern, available with &productname;. - Check if &hasi; is installed as an extension to &sls; - &productnumber; on each of the cluster nodes and if the + Check if &productname; is installed on each of the cluster nodes and if the High Availability pattern is installed on each of the machines as described in the &haquick;. diff --git a/xml/ha_yast_cluster.xml b/xml/ha_yast_cluster.xml index f0e5501c..668e3f0b 100644 --- a/xml/ha_yast_cluster.xml +++ b/xml/ha_yast_cluster.xml @@ -599,7 +599,7 @@ alternatively over the available networks. - When RRP is used, the &hasi; monitors the status of the current + When RRP is used, &productname; monitors the status of the current rings and automatically re-enables redundant rings after faults. Alternatively, check the ring status manually with corosync-cfgtool. View the available options with diff --git a/xml/phrases-decl.ent b/xml/phrases-decl.ent index 08ce01cf..dc5f08e8 100644 --- a/xml/phrases-decl.ent +++ b/xml/phrases-decl.ent @@ -84,7 +84,7 @@ "Allow interaction with the in-kernel connection tracking system for enabling stateful packet - inspection for iptables. Used by the &hasi; to synchronize the connection + inspection for iptables. Used by &productname; to synchronize the connection status between cluster nodes."> - - + + - +