Geo Clustering Quick Start #
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
Geo clustering protects workloads across globally distributed data centers. This document guides you through the basic setup of a Geo cluster.

Administration Guide #
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
For quick and efficient configuration and administration, the product includes both a graphical user interface and a command line interface (CLI). For performing key tasks, both approaches are covered in this guide. Thus, you can choose the appropriate tool that matches your needs.
- About This Guide
- I Installation and Setup
- II Configuration and Administration
- 5 Configuration and Administration Basics
- 6 Configuring Cluster Resources
- 6.1 Types of Resources
- 6.2 Supported Resource Agent Classes
- 6.3 Timeout Values
- 6.4 Creating Primitive Resources
- 6.5 Creating Resource Groups
- 6.6 Creating Clone Resources
- 6.7 Creating Promotable Clones (Multi-state Resources)
- 6.8 Creating Resource Templates
- 6.9 Creating STONITH Resources
- 6.10 Configuring Resource Monitoring
- 6.11 Loading Resources from a File
- 6.12 Resource Options (Meta Attributes)
- 6.13 Instance Attributes (Parameters)
- 6.14 Resource Operations
- 7 Configuring Resource Constraints
- 7.1 Types of Constraints
- 7.2 Scores and Infinity
- 7.3 Resource Templates and Constraints
- 7.4 Adding Location Constraints
- 7.5 Adding Colocation Constraints
- 7.6 Adding Order Constraints
- 7.7 Using Resource Sets to Define Constraints
- 7.8 Specifying Resource Failover Nodes
- 7.9 Specifying Resource Failback Nodes (Resource Stickiness)
- 7.10 Placing Resources Based on Their Load Impact
- 7.11 For More Information
- 8 Managing Cluster Resources
- 9 Managing Services on Remote Hosts
- 10 Adding or Modifying Resource Agents
- 11 Monitoring Clusters
- 12 Fencing and STONITH
- 13 Storage Protection and SBD
- 13.1 Conceptual Overview
- 13.2 Overview of Manually Setting Up SBD
- 13.3 Requirements
- 13.4 Number of SBD Devices
- 13.5 Calculation of Timeouts
- 13.6 Setting Up the Watchdog
- 13.7 Setting Up SBD with Devices
- 13.8 Setting Up Diskless SBD
- 13.9 Testing SBD and Fencing
- 13.10 Additional Mechanisms for Storage Protection
- 13.11 For More Information
- 14 QDevice and QNetd
- 15 Access Control Lists
- 16 Network Device Bonding
- 17 Load Balancing
- 18 Geo Clusters (Multi-Site Clusters)
- III Storage and Data Replication
- IV Maintenance and Upgrade
- 27 Executing Maintenance Tasks
- 27.1 Preparing and Finishing Maintenance Work
- 27.2 Different Options for Maintenance Tasks
- 27.3 Putting the Cluster into Maintenance Mode
- 27.4 Putting a Node into Maintenance Mode
- 27.5 Putting a Node into Standby Mode
- 27.6 Stopping the Cluster Services on a Node
- 27.7 Putting a Resource into Maintenance Mode
- 27.8 Putting a Resource into Unmanaged Mode
- 27.9 Rebooting a Cluster Node While in Maintenance Mode
- 28 Upgrading Your Cluster and Updating Software Packages
- V Appendix
- Glossary
- E GNU licenses
- 1.1 Three-Server Cluster
- 1.2 Three-Server Cluster after One Server Fails
- 1.3 Typical Fibre Channel Cluster Configuration
- 1.4 Typical iSCSI Cluster Configuration
- 1.5 Typical Cluster Configuration Without Shared Storage
- 1.6 Architecture
- 4.1 YaST Cluster—Multicast Configuration
- 4.2 YaST Cluster—Unicast Configuration
- 4.3 YaST Cluster—Security
- 4.4 YaST Cluster—conntrackd
- 4.5 YaST Cluster—Services
- 4.6 YaST Cluster—Csync2
- 5.1 Hawk2—Cluster Configuration
- 5.2 Hawk2—Wizard for Apache Web Server
- 5.3 Hawk2 Batch Mode Activated
- 5.4 Hawk2 Batch Mode—Injected Events and Configuration Changes
- 6.1 Hawk2—Primitive Resource
- 6.2 Group Resource
- 6.3 Hawk2—Resource Group
- 6.4 Hawk2—Clone Resource
- 6.5 Hawk2—Multi-state Resource
- 6.6 Hawk2—STONITH Resource
- 6.7 Hawk2—Resource Details
- 7.1 Hawk2—Location Constraint
- 7.2 Hawk2—Colocation Constraint
- 7.3 Hawk2—Order Constraint
- 7.4 Hawk2—Two Resource Sets in a Colocation Constraint
- 8.1 Hawk2—Editing a Primitive Resource
- 8.2 Hawk2—Tag
- 11.1 Hawk2—Cluster Status
- 11.2 Hawk2 Dashboard with One Cluster Site (amsterdam)
- 11.3 Hawk2—History Explorer Main View
- 17.1 YaST IP Load Balancing—Global Parameters
- 17.2 YaST IP Load Balancing—Virtual Services
- 22.1 Position of DRBD within Linux
- 22.2 Resource Configuration
- 22.3 Resource Stacking
- 22.4 Showing a Good Connection by drbdmon
- 22.5 Showing a Bad Connection by drbdmon
- 23.1 Setup of a Shared Disk with Cluster LVM
- 25.1 Structure of a CTDB Cluster
- 2.1 System Roles and Installed Patterns
- 5.1 Common Parameters
- 6.1 Resource Operation Properties
- 10.1 Failure Recovery Types
- 10.2 OCF Return Codes
- 12.1 Classes of fencing
- 13.1 Commonly used watchdog drivers
- 15.1 Operator Role—Access Types and XPath Expressions
- 20.1 OCFS2 Utilities
- 20.2 Important OCFS2 Parameters
- 21.1 GFS2 Utilities
- 21.2 Important GFS2 Parameters
- 5.1 Excerpt of Corosync Configuration for a Two-Node Cluster
- 5.2 Excerpt of Corosync Configuration for an N-Node Cluster
- 5.3 A Simple crmsh Shell Script
- 6.1 Resource Group for a Web Server
- 7.1 A Resource Set for Location Constraints
- 7.2 A Chain of Colocated Resources
- 7.3 A Chain of Ordered Resources
- 7.4 A Chain of Ordered Resources Expressed as Resource Set
- 7.5 Migration Threshold—Process Flow
- 9.1 Configuring Resources for Monitoring Plug-ins
- 12.1 Configuration of an IBM RSA Lights-out Device
- 12.2 Configuration of a UPS Fencing Device
- 12.3 Configuration of a Kdump Device
- 13.1 Formula for Timeout Calculation
- 14.1 Status of QDevice
- 14.2 Status of QNetd Server
- 15.1 Excerpt of a Cluster Configuration in XML
- 17.1 Simple ldirectord Configuration
- 22.1 Configuration of a Three-Node Stacked DRBD Resource
- 26.1 Using an NFS Server to Store the File Backup
- 26.2 Backing up Btrfs subvolumes with tar
- 26.3 Using Third-Party Backup Tools Like EMC NetWorker
- 26.4 Backing up multipath devices
- 26.5 Booting your system with UEFI
- 26.6 Creating a recovery system with a basic tar backup
- 26.7 Creating a recovery system with a third-party backup
- A.1 Stopped Resources
Copyright © 2006–2024 SUSE LLC and contributors. All rights reserved.
Kdump belongs to the Special Fencing Devices and is in fact the opposite of a fencing device. The plug-in checks if a kernel dump is in progress on a node. If so, it returns true and acts as if the node has been fenced, because the node will reboot after the Kdump is complete. If not, it returns a failure and the next fencing device is triggered.
The Kdump plug-in must be used together with another, real STONITH device, for example, external/ipmi. It does not work with SBD as the STONITH device. For the fencing mechanism to work properly, you must specify the order of the fencing devices so that Kdump is checked before a real STONITH device is triggered, as shown in the following procedure.
1. Use the stonith:fence_kdump fence agent. A configuration example is shown below. For more information, see crm ra info stonith:fence_kdump.

# crm configure
crm(live)configure# primitive st-kdump stonith:fence_kdump \
    params nodename="alice" \ (1)
    pcmk_host_list="alice" \
    pcmk_host_check="static-list" \
    pcmk_reboot_action="off" \
    pcmk_monitor_action="metadata" \
    pcmk_reboot_retries="1" \
    timeout="60" (2)
crm(live)configure# commit
fence_kdump_send
. + Configure more STONITH resources for other nodes if needed. ++ Defines how long to wait for a message from
fence_kdump_send
. + If a message is received, then a Kdump is in progress and the fencing mechanism + considers the node to be fenced. If no message is received,fence_kdump
+ times out, which indicates that the fence operation failed. The next STONITH device + in thefencing_topology
eventually fences the node. ++ On each node, configure
fence_kdump_send
to send a message to + all nodes when the Kdump process is finished. In/etc/sysconfig/kdump
, + edit theKDUMP_POSTSCRIPT
line. For example: +KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 NODELIST"
+ Replace NODELIST with the host names of all the cluster nodes.
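For example, on a two-node cluster with the nodes alice and bob used elsewhere in this chapter, and keeping the default interval and port shown above, the line might look like this:

KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 alice bob"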
3. Run either systemctl restart kdump.service or mkdumprd. Either of these commands will detect that /etc/sysconfig/kdump was modified and will regenerate the initrd so that it includes fence_kdump_send with network support.

4. Open a port in the firewall for the fence_kdump resource. The default port is 7410.
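As a sketch, assuming firewalld is the active firewall and the default port is kept (fence_kdump_send communicates over UDP), the port could be opened like this:

# firewall-cmd --permanent --add-port=7410/udp
# firewall-cmd --reload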
5. To have Kdump checked before triggering a real fencing mechanism (like external/ipmi), use a configuration similar to the following:

crm(live)configure# fencing_topology \
    alice: kdump-node1 ipmi-node1 \
    bob: kdump-node2 ipmi-node2
crm(live)configure# commit

For more details on fencing_topology:

crm(live)configure# help fencing_topology

12.4 Monitoring Fencing Devices #
Like any other resource, the STONITH class agents also support the monitoring operation for checking status.
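For example, a monitor operation can be attached to a STONITH resource just like to any other resource. The following sketch uses a hypothetical IPMI-based device; the address and credentials are placeholders, not values from this guide:

# crm configure
crm(live)configure# primitive st-ipmi stonith:external/ipmi \
    params hostname="alice" ipaddr="192.168.1.100" userid="admin" passwd="secret" interface="lanplus" \
    op monitor interval="1800" timeout="60"
crm(live)configure# commit

Fencing devices are usually monitored at long intervals, because the operation only checks that the device can still be contacted.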
Installation and Setup Quick Start #
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
This document guides you through the setup of a very basic two-node cluster, using the provided bootstrap scripts.
Highly Available NFS Storage with DRBD and Pacemaker #
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
This document describes how to set up highly available NFS storage in a two-node cluster, using DRBD and Pacemaker.
Pacemaker Remote Quick Start #
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in pacemaker_remote does not refer to physical distance, but to the fact that such a node is not a regular member of the cluster.
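As an illustration (the resource name, IP address, and interval below are hypothetical, not taken from this document), a remote node can be integrated with the ocf:pacemaker:remote resource agent:

# crm configure
crm(live)configure# primitive remote-1 ocf:pacemaker:remote \
    params server="192.168.1.101" \
    op monitor interval="30s"
crm(live)configure# commit

Here, server is the address on which the pacemaker_remote daemon of the remote node listens (TCP port 3121 by default).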
Geo Clustering Guide #
This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.
This guide covers the configuration of the required cluster resources (and how to transfer them to other sites in case of changes). Learn how to monitor and manage Geo clusters from the command line or with the Hawk2 Web interface.
- 1 Challenges for Geo Clusters
- 2 Conceptual Overview
- 3 Requirements
- 4 Setting Up the Booth Services
- 5 Synchronizing Configuration Files Across All Sites and Arbitrators
- 6 Configuring Cluster Resources and Constraints
- 7 Setting Up IP Relocation via DNS Update
- 8 Managing Geo Clusters
- 9 Troubleshooting
- 10 Upgrading to the Latest Product Version
- 11 For More Information
- A GNU licenses
Copyright © 2006–2024 SUSE LLC and contributors. All rights reserved.