From ce3a3fbc4412f2f921f27e44a96e088ad1927904 Mon Sep 17 00:00:00 2001
From: SUSE Docs Bot
Date: Tue, 24 Sep 2024 03:25:27 +0000
Subject: [PATCH] Automatic rebuild after doc-sleha commit
 a289903790491b583add832476c57792c872562a
---
 .../html/SLE-HA-geo-guide/book-sleha-geo.html |  2 +-
 SLEHA12SP5/html/SLE-HA-geo-guide/index.html   |  2 +-
 .../SLE-HA-geo-quick/art-ha-geo-quick.html    |  2 +-
 SLEHA12SP5/html/SLE-HA-geo-quick/index.html   |  2 +-
 SLEHA12SP5/html/SLE-HA-guide/book-sleha.html  |  2 +-
 .../html/SLE-HA-guide/cha-ha-fencing.html     | 65 +++++++++---------
 SLEHA12SP5/html/SLE-HA-guide/index.html       |  2 +-
 .../art-ha-install-quick.html                 |  2 +-
 .../html/SLE-HA-install-quick/index.html      |  2 +-
 .../SLE-HA-nfs-quick/art-ha-quick-nfs.html    |  2 +-
 SLEHA12SP5/html/SLE-HA-nfs-quick/index.html   |  2 +-
 .../art-sle-ha-pmremote.html                  |  2 +-
 .../html/SLE-HA-pmremote-quick/index.html     |  2 +-
 .../book-sleha-geo_draft.html                 |  2 +-
 .../single-html/SLE-HA-geo-guide/index.html   |  2 +-
 .../art-ha-geo-quick_draft.html               |  2 +-
 .../single-html/SLE-HA-geo-quick/index.html   |  2 +-
 .../SLE-HA-guide/book-sleha_draft.html        | 67 ++++++++++---------
 .../single-html/SLE-HA-guide/index.html       | 67 ++++++++++---------
 .../art-ha-install-quick_draft.html           |  2 +-
 .../SLE-HA-install-quick/index.html           |  2 +-
 .../art-ha-quick-nfs_draft.html               |  2 +-
 .../single-html/SLE-HA-nfs-quick/index.html   |  2 +-
 .../art-sle-ha-pmremote_draft.html            |  2 +-
 .../SLE-HA-pmremote-quick/index.html          |  2 +-
 25 files changed, 129 insertions(+), 114 deletions(-)

diff --git a/SLEHA12SP5/html/SLE-HA-geo-guide/book-sleha-geo.html b/SLEHA12SP5/html/SLE-HA-geo-guide/book-sleha-geo.html
index babd6fe17..d48f18342 100644
--- a/SLEHA12SP5/html/SLE-HA-geo-guide/book-sleha-geo.html
+++ b/SLEHA12SP5/html/SLE-HA-geo-guide/book-sleha-geo.html
@@ -115,7 +115,7 @@
 cluster resources and how to transfer them to other cluster site in case of
 changes. It also describes how to manage Geo clusters from command line and
 with Hawk and how to upgrade them to the latest product version.
-Publication Date: September 13, 2024
+Publication Date: September 24, 2024
List of Figures
List of Tables
List of Examples

 Copyright © 2006–2024
diff --git a/SLEHA12SP5/html/SLE-HA-geo-guide/index.html b/SLEHA12SP5/html/SLE-HA-geo-guide/index.html
index babd6fe17..d48f18342 100644
--- a/SLEHA12SP5/html/SLE-HA-geo-guide/index.html
+++ b/SLEHA12SP5/html/SLE-HA-geo-guide/index.html
@@ -115,7 +115,7 @@
 cluster resources and how to transfer them to other cluster site in case of
 changes. It also describes how to manage Geo clusters from command line and
 with Hawk and how to upgrade them to the latest product version.
-Publication Date: September 13, 2024
+Publication Date: September 24, 2024
List of Figures
List of Tables
List of Examples

Copyright © 2006–2024 diff --git a/SLEHA12SP5/html/SLE-HA-geo-quick/art-ha-geo-quick.html b/SLEHA12SP5/html/SLE-HA-geo-quick/art-ha-geo-quick.html index 5323e94e3..7b19e4d1f 100644 --- a/SLEHA12SP5/html/SLE-HA-geo-quick/art-ha-geo-quick.html +++ b/SLEHA12SP5/html/SLE-HA-geo-quick/art-ha-geo-quick.html @@ -105,7 +105,7 @@ useBR: false }); -

This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

SUSE Linux Enterprise High Availability 12 SP5

Geo Clustering Quick Start

SUSE Linux Enterprise High Availability 12 SP5

Publication Date: September 13, 2024 +

This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

SUSE Linux Enterprise High Availability 12 SP5

Geo Clustering Quick Start

SUSE Linux Enterprise High Availability 12 SP5

Publication Date: September 24, 2024

Geo clustering allows you to have multiple, geographically dispersed sites with a local cluster each. diff --git a/SLEHA12SP5/html/SLE-HA-geo-quick/index.html b/SLEHA12SP5/html/SLE-HA-geo-quick/index.html index 5323e94e3..7b19e4d1f 100644 --- a/SLEHA12SP5/html/SLE-HA-geo-quick/index.html +++ b/SLEHA12SP5/html/SLE-HA-geo-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

SUSE Linux Enterprise High Availability 12 SP5

Geo Clustering Quick Start

SUSE Linux Enterprise High Availability 12 SP5

Publication Date: September 13, 2024 +

This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

SUSE Linux Enterprise High Availability 12 SP5

Geo Clustering Quick Start

SUSE Linux Enterprise High Availability 12 SP5

Publication Date: September 24, 2024

 Geo clustering allows you to have multiple, geographically dispersed sites with a local cluster each.
diff --git a/SLEHA12SP5/html/SLE-HA-guide/book-sleha.html b/SLEHA12SP5/html/SLE-HA-guide/book-sleha.html
index ad5dc8135..747642070 100644
--- a/SLEHA12SP5/html/SLE-HA-guide/book-sleha.html
+++ b/SLEHA12SP5/html/SLE-HA-guide/book-sleha.html
@@ -111,7 +111,7 @@
 interface (GUI) and a command line interface (CLI). For performing key tasks,
 both approaches (GUI and CLI) are covered in detail in this guide. Thus,
 administrators can choose the appropriate tool that matches their needs.
 Publication Date:
-September 13, 2024
+September 24, 2024
 Copyright © 2006–2024
diff --git a/SLEHA12SP5/html/SLE-HA-guide/cha-ha-fencing.html b/SLEHA12SP5/html/SLE-HA-guide/cha-ha-fencing.html
index 5fbcf512d..f5bbd9a60 100644
--- a/SLEHA12SP5/html/SLE-HA-guide/cha-ha-fencing.html
+++ b/SLEHA12SP5/html/SLE-HA-guide/cha-ha-fencing.html
@@ -298,40 +298,44 @@ lines.
 Example 9.3: Configuration of a Kdump Device
 Kdump belongs to the Special Fencing Devices and is in fact the opposite
 of a fencing device. The plug-in checks if a Kernel dump is in progress
 on a node. If so, it
-returns true, and acts as if the node has been fenced.
+returns true and acts as if the node has been fenced,
+because the node will reboot after the Kdump is complete.
+If not, it returns a failure and the next fencing device is triggered.
-The Kdump plug-in must be used in concert with another, real STONITH
-device, for example, external/ipmi. For the fencing
-mechanism to work properly, you must specify that Kdump is checked before
-a real STONITH device is triggered. Use crm configure
-fencing_topology to specify the order of the fencing devices as
+The Kdump plug-in must be used together with another, real STONITH
+device, for example, external/ipmi. It does
+not work with SBD as the STONITH device. For the fencing
+mechanism to work properly, you must specify the order of the fencing devices
+so that Kdump is checked before a real STONITH device is triggered, as
 shown in the following procedure.
   1.
-    Use the stonith:fence_kdump resource agent (provided
-    by the package fence-agents)
-    to monitor all nodes with the Kdump function enabled. Find a
-    configuration example for the resource below:
-configure
-  primitive st-kdump stonith:fence_kdump \
-    params nodename="alice "\ [1]
+    Use the stonith:fence_kdump fence agent.
+    A configuration example is shown below. For more information,
+    see crm ra info stonith:fence_kdump.
+# crm configure
+crm(live)configure# primitive st-kdump stonith:fence_kdump \
+    params nodename="alice" \ [1]
+    pcmk_host_list="alice" \
     pcmk_host_check="static-list" \
     pcmk_reboot_action="off" \
    pcmk_monitor_action="metadata" \
     pcmk_reboot_retries="1" \
-    timeout="60"
-commit
-[1]
-Name of the node to be monitored. If you need to monitor more than one
-node, configure more STONITH resources. To prevent a specific node
-from using a fencing device, add location constraints.
-The fencing action will be started after the timeout of the resource.
   2.
-In /etc/sysconfig/kdump on each node, configure
-KDUMP_POSTSCRIPT to send a notification to all nodes
-when the Kdump process is finished. For example:
-KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i INTERVAL -p PORT -c 1 alice bob charlie"
-The node that does a Kdump will restart automatically after Kdump has
-finished.
+    timeout="60" [2]
+crm(live)configure# commit
+[1]
+Name of the node to listen for a message from fence_kdump_send.
+Configure more STONITH resources for other nodes if needed.
+[2]
+Defines how long to wait for a message from fence_kdump_send.
+If a message is received, then a Kdump is in progress and the fencing mechanism
+considers the node to be fenced. If no message is received, fence_kdump
+times out, which indicates that the fence operation failed. The next STONITH device
+in the fencing_topology eventually fences the node.
+  • On each node, configure fence_kdump_send to send a message to
+    all nodes when the Kdump process is finished. In /etc/sysconfig/kdump,
+    edit the KDUMP_POSTSCRIPT line. For example:
+KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 NODELIST"
+Replace NODELIST with the host names of all the cluster nodes.
   • Run either systemctl restart kdump.service or mkdumprd.
     Either of these commands will detect that /etc/sysconfig/kdump
@@ -343,10 +347,11 @@
   • To have Kdump checked before triggering a real fencing mechanism
     (like external/ipmi),
-    use a configuration similar to the following:
-fencing_topology \
+    use a configuration similar to the following:
+crm(live)configure# fencing_topology \
   alice: kdump-node1 ipmi-node1 \
-  bob: kdump-node2 ipmi-node2
-For more details on fencing_topology:
-crm configure help fencing_topology
-9.4 Monitoring Fencing Devices
+  bob: kdump-node2 ipmi-node2
+crm(live)configure# commit
+For more details on fencing_topology:
+crm(live)configure# help fencing_topology
+9.4 Monitoring Fencing Devices
 Like any other resource, the STONITH class agents also support the
 monitoring operation for checking status.
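A quick way to sanity-check the configuration from the hunk above on a running cluster, as a minimal sketch that assumes the example nodes alice and bob and a current crmsh/pacemaker installation (exact output and option names can vary by version):

    # Confirm kdump picked up the new post script
    systemctl is-active kdump.service
    grep KDUMP_POSTSCRIPT /etc/sysconfig/kdump

    # Show the committed fence_kdump resource and the fencing topology
    crm configure show st-kdump
    crm configure show | grep -A 2 fencing_topology

    # Overall cluster and STONITH view
    crm status
    stonith_admin --list-registered

fence_kdump listens on UDP port 7410 by default (the port used in the example above), so if a firewall is active that port must be open between the cluster nodes.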

diff --git a/SLEHA12SP5/html/SLE-HA-guide/index.html b/SLEHA12SP5/html/SLE-HA-guide/index.html
index ad5dc8135..747642070 100644
--- a/SLEHA12SP5/html/SLE-HA-guide/index.html
+++ b/SLEHA12SP5/html/SLE-HA-guide/index.html
@@ -111,7 +111,7 @@
 interface (GUI) and a command line interface (CLI). For performing key tasks,
 both approaches (GUI and CLI) are covered in detail in this guide. Thus,
 administrators can choose the appropriate tool that matches their needs.
 Publication Date:
-September 13, 2024
+September 24, 2024
    Copyright © 2006–2024 diff --git a/SLEHA12SP5/html/SLE-HA-install-quick/art-ha-install-quick.html b/SLEHA12SP5/html/SLE-HA-install-quick/art-ha-install-quick.html index 56470ad25..37db1de28 100644 --- a/SLEHA12SP5/html/SLE-HA-install-quick/art-ha-install-quick.html +++ b/SLEHA12SP5/html/SLE-HA-install-quick/art-ha-install-quick.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Installation and Setup Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Installation and Setup Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided by the diff --git a/SLEHA12SP5/html/SLE-HA-install-quick/index.html b/SLEHA12SP5/html/SLE-HA-install-quick/index.html index 56470ad25..37db1de28 100644 --- a/SLEHA12SP5/html/SLE-HA-install-quick/index.html +++ b/SLEHA12SP5/html/SLE-HA-install-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Installation and Setup Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Installation and Setup Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided by the diff --git a/SLEHA12SP5/html/SLE-HA-nfs-quick/art-ha-quick-nfs.html b/SLEHA12SP5/html/SLE-HA-nfs-quick/art-ha-quick-nfs.html index db0db4b9a..24b06721e 100644 --- a/SLEHA12SP5/html/SLE-HA-nfs-quick/art-ha-quick-nfs.html +++ b/SLEHA12SP5/html/SLE-HA-nfs-quick/art-ha-quick-nfs.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Highly Available NFS Storage with DRBD and Pacemaker

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Highly Available NFS Storage with DRBD and Pacemaker

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document describes how to set up highly available NFS storage in a two-node cluster, using the following components of diff --git a/SLEHA12SP5/html/SLE-HA-nfs-quick/index.html b/SLEHA12SP5/html/SLE-HA-nfs-quick/index.html index db0db4b9a..24b06721e 100644 --- a/SLEHA12SP5/html/SLE-HA-nfs-quick/index.html +++ b/SLEHA12SP5/html/SLE-HA-nfs-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Highly Available NFS Storage with DRBD and Pacemaker

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Highly Available NFS Storage with DRBD and Pacemaker

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document describes how to set up highly available NFS storage in a two-node cluster, using the following components of diff --git a/SLEHA12SP5/html/SLE-HA-pmremote-quick/art-sle-ha-pmremote.html b/SLEHA12SP5/html/SLE-HA-pmremote-quick/art-sle-ha-pmremote.html index 83ad03064..14caa7462 100644 --- a/SLEHA12SP5/html/SLE-HA-pmremote-quick/art-sle-ha-pmremote.html +++ b/SLEHA12SP5/html/SLE-HA-pmremote-quick/art-sle-ha-pmremote.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Pacemaker Remote Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Pacemaker Remote Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in the pacemaker_remote term diff --git a/SLEHA12SP5/html/SLE-HA-pmremote-quick/index.html b/SLEHA12SP5/html/SLE-HA-pmremote-quick/index.html index 83ad03064..14caa7462 100644 --- a/SLEHA12SP5/html/SLE-HA-pmremote-quick/index.html +++ b/SLEHA12SP5/html/SLE-HA-pmremote-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Pacemaker Remote Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Pacemaker Remote Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in the pacemaker_remote term diff --git a/SLEHA12SP5/single-html/SLE-HA-geo-guide/book-sleha-geo_draft.html b/SLEHA12SP5/single-html/SLE-HA-geo-guide/book-sleha-geo_draft.html index ad2f3c016..4c3a78ccf 100644 --- a/SLEHA12SP5/single-html/SLE-HA-geo-guide/book-sleha-geo_draft.html +++ b/SLEHA12SP5/single-html/SLE-HA-geo-guide/book-sleha-geo_draft.html @@ -115,7 +115,7 @@ cluster resources and how to transfer them to other cluster site in case of changes. It also describes how to manage Geo clusters from command line and with Hawk and how to upgrade them to the latest product version. -

    Publication Date: September 13, 2024 +

    Publication Date: September 24, 2024

    Copyright © 2006–2024 diff --git a/SLEHA12SP5/single-html/SLE-HA-geo-guide/index.html b/SLEHA12SP5/single-html/SLE-HA-geo-guide/index.html index ad2f3c016..4c3a78ccf 100644 --- a/SLEHA12SP5/single-html/SLE-HA-geo-guide/index.html +++ b/SLEHA12SP5/single-html/SLE-HA-geo-guide/index.html @@ -115,7 +115,7 @@ cluster resources and how to transfer them to other cluster site in case of changes. It also describes how to manage Geo clusters from command line and with Hawk and how to upgrade them to the latest product version. -

    Publication Date: September 13, 2024 +

    Publication Date: September 24, 2024

    Copyright © 2006–2024 diff --git a/SLEHA12SP5/single-html/SLE-HA-geo-quick/art-ha-geo-quick_draft.html b/SLEHA12SP5/single-html/SLE-HA-geo-quick/art-ha-geo-quick_draft.html index 1bb767b4e..03f500351 100644 --- a/SLEHA12SP5/single-html/SLE-HA-geo-quick/art-ha-geo-quick_draft.html +++ b/SLEHA12SP5/single-html/SLE-HA-geo-quick/art-ha-geo-quick_draft.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Geo Clustering Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Geo Clustering Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    Geo clustering allows you to have multiple, geographically dispersed sites with a local cluster each. diff --git a/SLEHA12SP5/single-html/SLE-HA-geo-quick/index.html b/SLEHA12SP5/single-html/SLE-HA-geo-quick/index.html index 1bb767b4e..03f500351 100644 --- a/SLEHA12SP5/single-html/SLE-HA-geo-quick/index.html +++ b/SLEHA12SP5/single-html/SLE-HA-geo-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Geo Clustering Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Geo Clustering Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

 Geo clustering allows you to have multiple, geographically dispersed sites with a local cluster each.
diff --git a/SLEHA12SP5/single-html/SLE-HA-guide/book-sleha_draft.html b/SLEHA12SP5/single-html/SLE-HA-guide/book-sleha_draft.html
index 0a9cdf644..a3cc2e764 100644
--- a/SLEHA12SP5/single-html/SLE-HA-guide/book-sleha_draft.html
+++ b/SLEHA12SP5/single-html/SLE-HA-guide/book-sleha_draft.html
@@ -111,7 +111,7 @@
 interface (GUI) and a command line interface (CLI). For performing key tasks,
 both approaches (GUI and CLI) are covered in detail in this guide. Thus,
 administrators can choose the appropriate tool that matches their needs.
 Publication Date:
-September 13, 2024
+September 24, 2024
 Copyright © 2006–2024
@@ -6570,40 +6570,44 @@ lines.
 Example 9.3: Configuration of a Kdump Device
 Kdump belongs to the Special Fencing Devices and is in fact the opposite
 of a fencing device. The plug-in checks if a Kernel dump is in progress
 on a node. If so, it
-returns true, and acts as if the node has been fenced.
+returns true and acts as if the node has been fenced,
+because the node will reboot after the Kdump is complete.
+If not, it returns a failure and the next fencing device is triggered.
-The Kdump plug-in must be used in concert with another, real STONITH
-device, for example, external/ipmi. For the fencing
-mechanism to work properly, you must specify that Kdump is checked before
-a real STONITH device is triggered. Use crm configure
-fencing_topology to specify the order of the fencing devices as
+The Kdump plug-in must be used together with another, real STONITH
+device, for example, external/ipmi. It does
+not work with SBD as the STONITH device. For the fencing
+mechanism to work properly, you must specify the order of the fencing devices
+so that Kdump is checked before a real STONITH device is triggered, as
 shown in the following procedure.
   1.
-    Use the stonith:fence_kdump resource agent (provided
-    by the package fence-agents)
-    to monitor all nodes with the Kdump function enabled. Find a
-    configuration example for the resource below:
-configure
-  primitive st-kdump stonith:fence_kdump \
-    params nodename="alice "\ [1]
+    Use the stonith:fence_kdump fence agent.
+    A configuration example is shown below. For more information,
+    see crm ra info stonith:fence_kdump.
+# crm configure
+crm(live)configure# primitive st-kdump stonith:fence_kdump \
+    params nodename="alice" \ [1]
+    pcmk_host_list="alice" \
     pcmk_host_check="static-list" \
     pcmk_reboot_action="off" \
     pcmk_monitor_action="metadata" \
     pcmk_reboot_retries="1" \
-    timeout="60"
-commit
-[1]
-Name of the node to be monitored. If you need to monitor more than one
-node, configure more STONITH resources. To prevent a specific node
-from using a fencing device, add location constraints.
-The fencing action will be started after the timeout of the resource.
   2.
-In /etc/sysconfig/kdump on each node, configure
-KDUMP_POSTSCRIPT to send a notification to all nodes
-when the Kdump process is finished. For example:
-KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i INTERVAL -p PORT -c 1 alice bob charlie"
-The node that does a Kdump will restart automatically after Kdump has
-finished.
+    timeout="60" [2]
+crm(live)configure# commit
+[1]
+Name of the node to listen for a message from fence_kdump_send.
+Configure more STONITH resources for other nodes if needed.
+[2]
+Defines how long to wait for a message from fence_kdump_send.
+If a message is received, then a Kdump is in progress and the fencing mechanism
+considers the node to be fenced. If no message is received, fence_kdump
+times out, which indicates that the fence operation failed. The next STONITH device
+in the fencing_topology eventually fences the node.
+  • On each node, configure fence_kdump_send to send a message to
+    all nodes when the Kdump process is finished. In /etc/sysconfig/kdump,
+    edit the KDUMP_POSTSCRIPT line. For example:
+KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 NODELIST"
+Replace NODELIST with the host names of all the cluster nodes.
   • Run either systemctl restart kdump.service or mkdumprd.
     Either of these commands will detect that /etc/sysconfig/kdump
@@ -6615,10 +6619,11 @@
   • To have Kdump checked before triggering a real fencing mechanism
     (like external/ipmi),
-    use a configuration similar to the following:
-fencing_topology \
+    use a configuration similar to the following:
+crm(live)configure# fencing_topology \
   alice: kdump-node1 ipmi-node1 \
-  bob: kdump-node2 ipmi-node2
-For more details on fencing_topology:
-crm configure help fencing_topology
-9.4 Monitoring Fencing Devices
+  bob: kdump-node2 ipmi-node2
+crm(live)configure# commit
+For more details on fencing_topology:
+crm(live)configure# help fencing_topology
+9.4 Monitoring Fencing Devices
 Like any other resource, the STONITH class agents also support the
 monitoring operation for checking status.
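One end-to-end test of the chain described above, suitable only for a test window because it reboots the node, is to force a kernel crash on one node and watch the peer treat the running Kdump as successful fencing. A rough sketch with the example nodes alice and bob, assuming sysrq is enabled (for example via sysctl kernel.sysrq=1):

    # On alice: force a crash; kdump captures the dump, fence_kdump_send
    # notifies the other nodes, and alice reboots afterwards
    echo c > /proc/sysrq-trigger

    # On bob: watch the cluster reaction while alice is dumping
    crm_mon -1
    grep -i stonith /var/log/messages | tail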

diff --git a/SLEHA12SP5/single-html/SLE-HA-guide/index.html b/SLEHA12SP5/single-html/SLE-HA-guide/index.html
index 0a9cdf644..a3cc2e764 100644
--- a/SLEHA12SP5/single-html/SLE-HA-guide/index.html
+++ b/SLEHA12SP5/single-html/SLE-HA-guide/index.html
@@ -111,7 +111,7 @@
 interface (GUI) and a command line interface (CLI). For performing key tasks,
 both approaches (GUI and CLI) are covered in detail in this guide. Thus,
 administrators can choose the appropriate tool that matches their needs.
 Publication Date:
-September 13, 2024
+September 24, 2024
 Copyright © 2006–2024
@@ -6570,40 +6570,44 @@ lines.
 Example 9.3: Configuration of a Kdump Device
 Kdump belongs to the Special Fencing Devices and is in fact the opposite
 of a fencing device. The plug-in checks if a Kernel dump is in progress
 on a node. If so, it
-returns true, and acts as if the node has been fenced.
+returns true and acts as if the node has been fenced,
+because the node will reboot after the Kdump is complete.
+If not, it returns a failure and the next fencing device is triggered.
-The Kdump plug-in must be used in concert with another, real STONITH
-device, for example, external/ipmi. For the fencing
-mechanism to work properly, you must specify that Kdump is checked before
-a real STONITH device is triggered. Use crm configure
-fencing_topology to specify the order of the fencing devices as
+The Kdump plug-in must be used together with another, real STONITH
+device, for example, external/ipmi. It does
+not work with SBD as the STONITH device. For the fencing
+mechanism to work properly, you must specify the order of the fencing devices
+so that Kdump is checked before a real STONITH device is triggered, as
 shown in the following procedure.
   1.
-    Use the stonith:fence_kdump resource agent (provided
-    by the package fence-agents)
-    to monitor all nodes with the Kdump function enabled. Find a
-    configuration example for the resource below:
-configure
-  primitive st-kdump stonith:fence_kdump \
-    params nodename="alice "\ [1]
+    Use the stonith:fence_kdump fence agent.
+    A configuration example is shown below. For more information,
+    see crm ra info stonith:fence_kdump.
+# crm configure
+crm(live)configure# primitive st-kdump stonith:fence_kdump \
+    params nodename="alice" \ [1]
+    pcmk_host_list="alice" \
     pcmk_host_check="static-list" \
     pcmk_reboot_action="off" \
     pcmk_monitor_action="metadata" \
     pcmk_reboot_retries="1" \
-    timeout="60"
-commit
-[1]
-Name of the node to be monitored. If you need to monitor more than one
-node, configure more STONITH resources. To prevent a specific node
-from using a fencing device, add location constraints.
-The fencing action will be started after the timeout of the resource.
   2.
-In /etc/sysconfig/kdump on each node, configure
-KDUMP_POSTSCRIPT to send a notification to all nodes
-when the Kdump process is finished. For example:
-KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i INTERVAL -p PORT -c 1 alice bob charlie"
-The node that does a Kdump will restart automatically after Kdump has
-finished.
+    timeout="60" [2]
+crm(live)configure# commit
+[1]
+Name of the node to listen for a message from fence_kdump_send.
+Configure more STONITH resources for other nodes if needed.
+[2]
+Defines how long to wait for a message from fence_kdump_send.
+If a message is received, then a Kdump is in progress and the fencing mechanism
+considers the node to be fenced. If no message is received, fence_kdump
+times out, which indicates that the fence operation failed. The next STONITH device
+in the fencing_topology eventually fences the node.
+  • On each node, configure fence_kdump_send to send a message to
+    all nodes when the Kdump process is finished. In /etc/sysconfig/kdump,
+    edit the KDUMP_POSTSCRIPT line. For example:
+KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 NODELIST"
+Replace NODELIST with the host names of all the cluster nodes.
   • Run either systemctl restart kdump.service or mkdumprd.
     Either of these commands will detect that /etc/sysconfig/kdump
@@ -6615,10 +6619,11 @@
   • To have Kdump checked before triggering a real fencing mechanism
     (like external/ipmi),
-    use a configuration similar to the following:
-fencing_topology \
+    use a configuration similar to the following:
+crm(live)configure# fencing_topology \
   alice: kdump-node1 ipmi-node1 \
-  bob: kdump-node2 ipmi-node2
-For more details on fencing_topology:
-crm configure help fencing_topology
-9.4 Monitoring Fencing Devices
+  bob: kdump-node2 ipmi-node2
+crm(live)configure# commit
+For more details on fencing_topology:
+crm(live)configure# help fencing_topology
+9.4 Monitoring Fencing Devices
 Like any other resource, the STONITH class agents also support the
 monitoring operation for checking status.
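As a worked example of the KDUMP_POSTSCRIPT line above, for a hypothetical three-node cluster with the node names alice, bob and charlie used elsewhere in this chapter, the entry in /etc/sysconfig/kdump could look like this (the option meanings follow common fence_kdump_send usage: -i resend interval in seconds, -p UDP port, -c number of messages):

    # /etc/sysconfig/kdump
    KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 alice bob charlie"

Afterwards, run systemctl restart kdump.service or mkdumprd on each node so that the changed file is picked up, as described in the procedure above.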

    diff --git a/SLEHA12SP5/single-html/SLE-HA-install-quick/art-ha-install-quick_draft.html b/SLEHA12SP5/single-html/SLE-HA-install-quick/art-ha-install-quick_draft.html index a6a023fbe..f9c709d48 100644 --- a/SLEHA12SP5/single-html/SLE-HA-install-quick/art-ha-install-quick_draft.html +++ b/SLEHA12SP5/single-html/SLE-HA-install-quick/art-ha-install-quick_draft.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Installation and Setup Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Installation and Setup Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided by the diff --git a/SLEHA12SP5/single-html/SLE-HA-install-quick/index.html b/SLEHA12SP5/single-html/SLE-HA-install-quick/index.html index a6a023fbe..f9c709d48 100644 --- a/SLEHA12SP5/single-html/SLE-HA-install-quick/index.html +++ b/SLEHA12SP5/single-html/SLE-HA-install-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Installation and Setup Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Installation and Setup Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided by the diff --git a/SLEHA12SP5/single-html/SLE-HA-nfs-quick/art-ha-quick-nfs_draft.html b/SLEHA12SP5/single-html/SLE-HA-nfs-quick/art-ha-quick-nfs_draft.html index 1e3fc97e1..996315521 100644 --- a/SLEHA12SP5/single-html/SLE-HA-nfs-quick/art-ha-quick-nfs_draft.html +++ b/SLEHA12SP5/single-html/SLE-HA-nfs-quick/art-ha-quick-nfs_draft.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Highly Available NFS Storage with DRBD and Pacemaker

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Highly Available NFS Storage with DRBD and Pacemaker

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document describes how to set up highly available NFS storage in a two-node cluster, using the following components of diff --git a/SLEHA12SP5/single-html/SLE-HA-nfs-quick/index.html b/SLEHA12SP5/single-html/SLE-HA-nfs-quick/index.html index 1e3fc97e1..996315521 100644 --- a/SLEHA12SP5/single-html/SLE-HA-nfs-quick/index.html +++ b/SLEHA12SP5/single-html/SLE-HA-nfs-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Highly Available NFS Storage with DRBD and Pacemaker

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Highly Available NFS Storage with DRBD and Pacemaker

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document describes how to set up highly available NFS storage in a two-node cluster, using the following components of diff --git a/SLEHA12SP5/single-html/SLE-HA-pmremote-quick/art-sle-ha-pmremote_draft.html b/SLEHA12SP5/single-html/SLE-HA-pmremote-quick/art-sle-ha-pmremote_draft.html index aed6803b9..1e1a2adcb 100644 --- a/SLEHA12SP5/single-html/SLE-HA-pmremote-quick/art-sle-ha-pmremote_draft.html +++ b/SLEHA12SP5/single-html/SLE-HA-pmremote-quick/art-sle-ha-pmremote_draft.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Pacemaker Remote Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Pacemaker Remote Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in the pacemaker_remote term diff --git a/SLEHA12SP5/single-html/SLE-HA-pmremote-quick/index.html b/SLEHA12SP5/single-html/SLE-HA-pmremote-quick/index.html index aed6803b9..1e1a2adcb 100644 --- a/SLEHA12SP5/single-html/SLE-HA-pmremote-quick/index.html +++ b/SLEHA12SP5/single-html/SLE-HA-pmremote-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Pacemaker Remote Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 13, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 12 SP5

    Pacemaker Remote Quick Start

    SUSE Linux Enterprise High Availability 12 SP5

    Publication Date: September 24, 2024

    This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in the pacemaker_remote term