Geo Clustering Quick Start #
SUSE Linux Enterprise High Availability 12 SP5
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
Geo clustering allows you to have multiple, geographically dispersed sites with a local cluster each.
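The sites, typically together with an arbitrator outside of them, are coordinated by the booth ticket manager. As a minimal sketch only (the addresses and the ticket name are assumptions, not values from this document), a booth configuration describing two sites and one arbitrator could look like the following:

# /etc/booth/booth.conf (illustrative values)
transport="UDP"
port="9929"
arbitrator="192.168.203.100"
site="192.168.201.100"
site="192.168.202.100"
ticket="ticket-nfs"

Whichever site currently holds a ticket is allowed to run the resources that depend on it.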
Administration Guide #
SUSE Linux Enterprise High Availability 12 SP5
The product offers both a graphical user interface (GUI) and a command line interface (CLI). For performing key tasks, both approaches (GUI and CLI) are covered in detail in this guide. Thus, administrators can choose the appropriate tool that matches their needs.
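For example, from the CLI the cluster can be inspected with the crm shell (a sketch only; the output depends on your cluster):

# crm status
# crm configure show

The first command shows the current node and resource status, the second one displays the cluster configuration; Hawk2 presents the same information graphically.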
- About This Guide
- I Installation and Setup
- II Configuration and Administration
- 5 Configuration and Administration Basics
- 6 Configuring and Managing Cluster Resources with Hawk2
- 6.1 Hawk2 Requirements
- 6.2 Logging In
- 6.3 Hawk2 Overview: Main Elements
- 6.4 Configuring Global Cluster Options
- 6.5 Configuring Cluster Resources
- 6.6 Configuring Constraints
- 6.7 Managing Cluster Resources
- 6.8 Monitoring Clusters
- 6.9 Using the Batch Mode
- 6.10 Viewing the Cluster History
- 6.11 Verifying Cluster Health
- 7 Configuring and Managing Cluster Resources (Command Line)
- 8 Adding or Modifying Resource Agents
- 9 Fencing and STONITH
- 10 Storage Protection and SBD
- 10.1 Conceptual Overview
- 10.2 Overview of Manually Setting Up SBD
- 10.3 Requirements
- 10.4 Number of SBD Devices
- 10.5 Calculation of Timeouts
- 10.6 Setting Up the Watchdog
- 10.7 Setting Up SBD with Devices
- 10.8 Setting Up Diskless SBD
- 10.9 Testing SBD and Fencing
- 10.10 Additional Mechanisms for Storage Protection
- 10.11 For More Information
- 11 Access Control Lists
- 12 Network Device Bonding
- 13 Load Balancing
- 14 Geo Clusters (Multi-Site Clusters)
- III Storage and Data Replication
- IV Maintenance and Upgrade
- 23 Executing Maintenance Tasks
- 23.1 Implications of Taking Down a Cluster Node
- 23.2 Different Options for Maintenance Tasks
- 23.3 Preparing and Finishing Maintenance Work
- 23.4 Putting the Cluster into Maintenance Mode
- 23.5 Putting a Node into Maintenance Mode
- 23.6 Putting a Node into Standby Mode
- 23.7 Putting a Resource into Maintenance Mode
- 23.8 Putting a Resource into Unmanaged Mode
- 23.9 Rebooting a Cluster Node While In Maintenance Mode
- 24 Upgrading Your Cluster and Updating Software Packages
- V Appendix
- Glossary
- E GNU licenses
- 1.1 Three-Server Cluster
- 1.2 Three-Server Cluster after One Server Fails
- 1.3 Typical Fibre Channel Cluster Configuration
- 1.4 Typical iSCSI Cluster Configuration
- 1.5 Typical Cluster Configuration Without Shared Storage
- 1.6 Architecture
- 4.1 YaST Cluster—Multicast Configuration
- 4.2 YaST Cluster—Unicast Configuration
- 4.3 YaST Cluster—Security
- 4.4 YaST Cluster—conntrackd
- 4.5 YaST Cluster—Services
- 4.6 YaST—Csync2
- 5.1 Group Resource
- 6.1 Hawk2—Cluster Configuration
- 6.2 Hawk2—Wizard for Apache Web Server
- 6.3 Hawk2—Primitive Resource
- 6.4 Hawk2—Editing A Primitive Resource
- 6.5 Hawk2—STONITH Resource
- 6.6 Hawk2—Resource Group
- 6.7 Hawk2—Clone Resource
- 6.8 Hawk2—Multi-state Resource
- 6.9 Hawk2—Tag
- 6.10 Hawk2—Resource Details
- 6.11 Hawk2—Location Constraint
- 6.12 Hawk2—Colocation Constraint
- 6.13 Hawk2—Order Constraint
- 6.14 Hawk2—Two Resource Sets in a Colocation Constraint
- 6.15 Hawk2—Cluster Status
- 6.16 Hawk2 Dashboard with One Cluster Site (amsterdam)
- 6.17 Hawk2 Batch Mode Activated
- 6.18 Hawk2 Batch Mode—Injected Events and Configuration Changes
- 6.19 Hawk2—History Explorer Main View
- 13.1 YaST IP Load Balancing—Global Parameters
- 13.2 YaST IP Load Balancing—Virtual Services
- 18.1 Position of DRBD within Linux
- 18.2 Resource Configuration
- 18.3 Resource Stacking
- 19.1 Setup of iSCSI with cLVM
- 21.1 Structure of a CTDB Cluster
- 5.1 Excerpt of Corosync Configuration for a Two-Node Cluster
- 5.2 Excerpt of Corosync Configuration for an N-Node Cluster
- 5.3 Resource Group for a Web Server
- 5.4 A Resource Set for Location Constraints
- 5.5 A Chain of Colocated Resources
- 5.6 A Chain of Ordered Resources
- 5.7 A Chain of Ordered Resources Expressed as Resource Set
- 5.8 Migration Threshold—Process Flow
- 5.9 Example Configuration for Load-Balanced Placing
- 5.10 Configuring Resources for Monitoring Plug-ins
- 7.1 A Simple crmsh Shell Script
- 9.1 Configuration of an IBM RSA Lights-out Device
- 9.2 Configuration of a UPS Fencing Device
- 9.3 Configuration of a Kdump Device
- 10.1 Formula for Timeout Calculation
- 11.1 Excerpt of a Cluster Configuration in XML
- 13.1 Simple ldirectord Configuration
- 18.1 Configuration of a Three-Node Stacked DRBD Resource
- 22.1 Using an NFS Server to Store the File Backup
- 22.2 Backing up Btrfs subvolumes with tar
- 22.3 Using Third-Party Backup Tools Like EMC NetWorker
- 22.4 Backing up multipath devices
- 22.5 Booting your system with UEFI
- 22.6 Creating a recovery system with a basic tar backup
- 22.7 Creating a recovery system with a third-party backup
- A.1 Stopped Resources
9 Fencing and STONITH #
Kdump belongs to the Special Fencing Devices and is in fact the opposite of a fencing device. The plug-in checks if a kernel dump is in progress on a node. If so, it returns true and acts as if the node has been fenced, because the node will reboot after the Kdump is complete. If not, it returns a failure and the next fencing device is triggered.
The Kdump plug-in must be used together with another, real STONITH device, for example, external/ipmi. It does not work with SBD as the STONITH device. For the fencing mechanism to work properly, you must specify the order of the fencing devices so that Kdump is checked before a real STONITH device is triggered, as shown in the following procedure.
1. Use the stonith:fence_kdump fence agent. A configuration example is shown below. For more information, see crm ra info stonith:fence_kdump.
# crm configure
crm(live)configure# primitive st-kdump stonith:fence_kdump \
    params nodename="alice" \ (1)
    pcmk_host_list="alice" \
    pcmk_host_check="static-list" \
    pcmk_reboot_action="off" \
    pcmk_monitor_action="metadata" \
    pcmk_reboot_retries="1" \
    timeout="60" (2)
crm(live)configure# commit
(1) Name of the node to listen for a message from fence_kdump_send. Configure more STONITH resources for other nodes if needed.
(2) Defines how long to wait for a message from fence_kdump_send. If a message is received, then a Kdump is in progress and the fencing mechanism considers the node to be fenced. If no message is received, fence_kdump times out, which indicates that the fence operation failed. The next STONITH device in the fencing_topology eventually fences the node.
2. On each node, configure fence_kdump_send to send a message to all nodes when the Kdump process is finished. In /etc/sysconfig/kdump, edit the KDUMP_POSTSCRIPT line. For example:
KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 NODELIST"
Replace NODELIST with the host names of all the cluster nodes.
3. Run either systemctl restart kdump.service or mkdumprd. Either of these commands will detect that /etc/sysconfig/kdump has changed.
To have Kdump checked before triggering a real fencing mechanism (like external/ipmi), use a configuration similar to the following:
crm(live)configure# fencing_topology \
    alice: kdump-node1 ipmi-node1 \
    bob: kdump-node2 ipmi-node2
crm(live)configure# commit
For more details on fencing_topology:
crm(live)configure# help fencing_topology
9.4 Monitoring Fencing Devices #
Like any other resource, the STONITH class agents also support the monitoring operation for checking status.
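For example, a monitor operation can be declared when a STONITH primitive is created. The following is a sketch only: the resource name, the external/ipmi parameters, and the interval values are illustrative assumptions, not values from this guide:

crm(live)configure# primitive st-ipmi-alice stonith:external/ipmi \
    params hostname="alice" ipaddr="192.168.1.101" userid="admin" passwd="secret" \
    op monitor interval="1800s" timeout="60s"
crm(live)configure# commit

With such an operation in place, the cluster periodically verifies that the fencing device itself is still reachable and functional.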
Installation and Setup Quick Start #
SUSE Linux Enterprise High Availability 12 SP5
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided with the product.
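As a rough sketch of such a bootstrap run (the host name alice is an assumption; see the rest of this quick start for prerequisites and details):

Run the following on the first node to create the cluster:
# ha-cluster-init
Then run the following on the second node to join the cluster that was created on alice:
# ha-cluster-join -c alice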
Highly Available NFS Storage with DRBD and Pacemaker #
SUSE Linux Enterprise High Availability 12 SP5
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
This document describes how to set up highly available NFS storage in a two-node cluster, using components of SUSE Linux Enterprise High Availability such as DRBD and Pacemaker.
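At the core of such a setup is a DRBD resource that replicates the NFS data between the two nodes. As a minimal sketch only (device, backing disk, host names, addresses, and port are assumptions, not values from this document):

# /etc/drbd.d/nfs.res (illustrative values)
resource nfs {
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on alice {
    address 192.168.1.1:7790;
  }
  on bob {
    address 192.168.1.2:7790;
  }
}

Pacemaker then manages the DRBD resource, the file system mounted on it, and the NFS exports as cluster resources.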
Pacemaker Remote Quick Start #
SUSE Linux Enterprise High Availability 12 SP5
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in the pacemaker_remote term does not refer to physical distance, but to the fact that such nodes do not run the complete cluster stack and are therefore not regular members of the cluster.
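As an illustration of the concept (a sketch only; the host name charlie and the values shown are assumptions), a remote node that already runs the pacemaker_remote service can be integrated into the cluster with the ocf:pacemaker:remote resource agent:

crm(live)configure# primitive remote-charlie ocf:pacemaker:remote \
    params server="charlie" \
    op monitor interval="30s"
crm(live)configure# commit

After the resource starts, charlie behaves like an additional node on which the cluster can place resources, without running the complete cluster stack itself.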
Geo Clustering Guide #
SUSE Linux Enterprise High Availability 12 SP5
This guide describes how to configure cluster resources and how to transfer them to another cluster site in case of changes. It also describes how to manage Geo clusters from the command line and with Hawk, and how to upgrade them to the latest product version.
- 1 Challenges for Geo Clusters
- 2 Conceptual Overview
- 3 Requirements
- 4 Setting Up the Booth Services
- 5 Synchronizing Configuration Files Across All Sites and Arbitrators
- 6 Configuring Cluster Resources and Constraints
- 7 Setting Up IP Relocation via DNS Update
- 8 Managing Geo Clusters
- 9 Troubleshooting
- 10 Upgrading to the Latest Product Version
- 11 For More Information
- A GNU licenses
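Resources in a Geo cluster are typically tied to booth-managed tickets, so that they only run at the site that currently holds the ticket. As a minimal sketch (the ticket and resource names are assumptions, not values from this guide), such a dependency can be expressed with an rsc_ticket constraint:

crm(live)configure# rsc_ticket ticket-nfs-req ticket-nfs: rsc_nfs loss-policy="fence"
crm(live)configure# commit

The loss-policy defines what happens to the resource if the site loses the ticket, for example fencing the nodes that still run it.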