diff --git a/SLEHA15SP2/html/SLE-HA-geo-guide/book-sleha-geo.html b/SLEHA15SP2/html/SLE-HA-geo-guide/book-sleha-geo.html
index c84b61521..79d671d94 100644
--- a/SLEHA15SP2/html/SLE-HA-geo-guide/book-sleha-geo.html
+++ b/SLEHA15SP2/html/SLE-HA-geo-guide/book-sleha-geo.html
@@ -111,7 +111,7 @@
 configuration of the required cluster resources (and how to transfer them to
 other sites in case of changes). Learn how to monitor and manage Geo clusters
 from command line or with the Hawk2 Web interface.
-Revision History: Geo Clustering Guide
-Publication Date: September 19, 2024
+Revision History: Geo Clustering Guide
+Publication Date: September 24, 2024
List of Figures
List of Examples

Copyright © 2006–2024 diff --git a/SLEHA15SP2/html/SLE-HA-geo-guide/index.html b/SLEHA15SP2/html/SLE-HA-geo-guide/index.html index c84b61521..79d671d94 100644 --- a/SLEHA15SP2/html/SLE-HA-geo-guide/index.html +++ b/SLEHA15SP2/html/SLE-HA-geo-guide/index.html @@ -111,7 +111,7 @@ configuration of the required cluster resources (and how to transfer them to other sites in case of changes). Learn how to monitor and manage Geo clusters from command line or with the Hawk2 Web interface. -

Publication Date: September 19, 2024 +

Publication Date: September 24, 2024
List of Figures
List of Examples

Copyright © 2006–2024 diff --git a/SLEHA15SP2/html/SLE-HA-geo-quick/art-sleha-geo-quick.html b/SLEHA15SP2/html/SLE-HA-geo-quick/art-sleha-geo-quick.html index 6da5e57c8..2ec019b12 100644 --- a/SLEHA15SP2/html/SLE-HA-geo-quick/art-sleha-geo-quick.html +++ b/SLEHA15SP2/html/SLE-HA-geo-quick/art-sleha-geo-quick.html @@ -105,7 +105,7 @@ useBR: false }); -

This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

SUSE Linux Enterprise High Availability 15 SP2

Geo Clustering Quick Start

Publication Date: September 19, 2024 +

This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

SUSE Linux Enterprise High Availability 15 SP2

Geo Clustering Quick Start

Publication Date: September 24, 2024

Geo clustering protects workloads across globally distributed data centers. This document guides you through the basic setup of a diff --git a/SLEHA15SP2/html/SLE-HA-geo-quick/index.html b/SLEHA15SP2/html/SLE-HA-geo-quick/index.html index 6da5e57c8..2ec019b12 100644 --- a/SLEHA15SP2/html/SLE-HA-geo-quick/index.html +++ b/SLEHA15SP2/html/SLE-HA-geo-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

SUSE Linux Enterprise High Availability 15 SP2

Geo Clustering Quick Start

Publication Date: September 19, 2024 +

This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

SUSE Linux Enterprise High Availability 15 SP2

Geo Clustering Quick Start

Publication Date: September 24, 2024

Geo clustering protects workloads across globally distributed data centers. This document guides you through the basic setup of a diff --git a/SLEHA15SP2/html/SLE-HA-guide/book-sleha-guide.html b/SLEHA15SP2/html/SLE-HA-guide/book-sleha-guide.html index a4e971ec8..21bd0fe88 100644 --- a/SLEHA15SP2/html/SLE-HA-guide/book-sleha-guide.html +++ b/SLEHA15SP2/html/SLE-HA-guide/book-sleha-guide.html @@ -111,7 +111,7 @@ interface and a command line interface (CLI). For performing key tasks, both approaches are covered in this guide. Thus, you can choose the appropriate tool that matches your needs.

Publication Date: - September 19, 2024 + September 24, 2024

Copyright © 2006–2024 diff --git a/SLEHA15SP2/html/SLE-HA-guide/cha-ha-fencing.html b/SLEHA15SP2/html/SLE-HA-guide/cha-ha-fencing.html index 78492b48d..b15525b22 100644 --- a/SLEHA15SP2/html/SLE-HA-guide/cha-ha-fencing.html +++ b/SLEHA15SP2/html/SLE-HA-guide/cha-ha-fencing.html @@ -305,40 +305,44 @@ lines.

 Example 12.3: Configuration of a Kdump Device

 Kdump belongs to the Special Fencing Devices and is in fact
 the opposite of a fencing device. The plug-in checks if a Kernel
 dump is in progress on a node. If so, it
-returns true, and acts as if the node has been fenced.
+returns true and acts as if the node has been fenced,
+because the node will reboot after the Kdump is complete.
+If not, it returns a failure and the next fencing device is triggered.

-The Kdump plug-in must be used in concert with another, real STONITH
-device, for example, external/ipmi. For the fencing
-mechanism to work properly, you must specify that Kdump is checked before
-a real STONITH device is triggered. Use crm configure
-fencing_topology to specify the order of the fencing devices as
+The Kdump plug-in must be used together with another, real STONITH
+device, for example, external/ipmi. It does
+not work with SBD as the STONITH device. For the fencing
+mechanism to work properly, you must specify the order of the fencing devices
+so that Kdump is checked before a real STONITH device is triggered, as
 shown in the following procedure.

 1.
-Use the stonith:fence_kdump resource agent (provided
-by the package fence-agents)
-to monitor all nodes with the Kdump function enabled. Find a
-configuration example for the resource below:
-configure
-  primitive st-kdump stonith:fence_kdump \
-    params nodename="alice "\ 1
+Use the stonith:fence_kdump fence agent.
+A configuration example is shown below. For more information,
+see crm ra info stonith:fence_kdump.
+# crm configure
+crm(live)configure# primitive st-kdump stonith:fence_kdump \
+    params nodename="alice" \ 1
+    pcmk_host_list="alice" \
     pcmk_host_check="static-list" \
     pcmk_reboot_action="off" \
     pcmk_monitor_action="metadata" \
     pcmk_reboot_retries="1" \
-    timeout="60"
-commit
-1
-Name of the node to be monitored. If you need to monitor more than one
-node, configure more STONITH resources. To prevent a specific node
-from using a fencing device, add location constraints.
-The fencing action will be started after the timeout of the resource.
-2.
-In /etc/sysconfig/kdump on each node, configure
-KDUMP_POSTSCRIPT to send a notification to all nodes
-when the Kdump process is finished. For example:
-KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i INTERVAL -p PORT -c 1 alice bob charlie"
-The node that does a Kdump will restart automatically after Kdump has
-finished.
+    timeout="60" 2
+crm(live)configure# commit
+1
+Name of the node to listen for a message from fence_kdump_send.
+Configure more STONITH resources for other nodes if needed.
+2
+Defines how long to wait for a message from fence_kdump_send.
+If a message is received, then a Kdump is in progress and the fencing mechanism
+considers the node to be fenced. If no message is received, fence_kdump
+times out, which indicates that the fence operation failed. The next STONITH device
+in the fencing_topology eventually fences the node.
+•
+On each node, configure fence_kdump_send to send a message to
+all nodes when the Kdump process is finished. In /etc/sysconfig/kdump,
+edit the KDUMP_POSTSCRIPT line. For example:
+KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 NODELIST"
+Replace NODELIST with the host names of all the cluster nodes.
 •
 Run either systemctl restart kdump.service or
 mkdumprd. Either of these commands will detect that
 /etc/sysconfig/kdump
@@ -348,12 +352,13 @@
 Open a port in the firewall for the fence_kdump resource. The
 default port is 7410.
 •
-To achieve that Kdump is checked before triggering a real fencing
+To have Kdump checked before triggering a real fencing
 mechanism (like external/ipmi),
-use a configuration similar to the following:
-fencing_topology \
+use a configuration similar to the following:
+crm(live)configure# fencing_topology \
   alice: kdump-node1 ipmi-node1 \
-  bob: kdump-node2 ipmi-node2
-For more details on fencing_topology:
-crm configure help fencing_topology
-• 12.4 Monitoring Fencing Devices
+  bob: kdump-node2 ipmi-node2
+crm(live)configure# commit
+For more details on fencing_topology:
+crm(live)configure# help fencing_topology
 12.4 Monitoring Fencing Devices
 Like any other resource, the STONITH class agents also support the
 monitoring operation for checking status.

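The updated callout 2 above describes a simple timeout protocol: if a message from fence_kdump_send arrives before the timeout, the node is treated as fenced; otherwise the agent reports failure so the next STONITH device in the fencing_topology is tried. As a toy illustration only (a hypothetical Python simulation, not the real fence_kdump agent; the loopback address, port, and payload are made up for the sketch), the decision logic looks like this:

```python
import socket

def wait_for_kdump_message(port: int, timeout: float) -> bool:
    """Toy model of fence_kdump's decision logic: listen on a UDP port
    for a notification from fence_kdump_send. True means "a Kdump is in
    progress, treat the node as fenced"; False means "no message arrived
    before the timeout, report failure so the next STONITH device fires".
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind(("127.0.0.1", port))
        sock.settimeout(timeout)
        try:
            sock.recvfrom(1024)  # any datagram counts as a Kdump notification
            return True
        except socket.timeout:
            return False
    finally:
        sock.close()
```

In the real setup the listeners are the fence agents on the surviving cluster nodes, and the sender is fence_kdump_send running inside the Kdump kernel of the crashed node.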
    diff --git a/SLEHA15SP2/html/SLE-HA-guide/index.html b/SLEHA15SP2/html/SLE-HA-guide/index.html index a4e971ec8..21bd0fe88 100644 --- a/SLEHA15SP2/html/SLE-HA-guide/index.html +++ b/SLEHA15SP2/html/SLE-HA-guide/index.html @@ -111,7 +111,7 @@ interface and a command line interface (CLI). For performing key tasks, both approaches are covered in this guide. Thus, you can choose the appropriate tool that matches your needs.

    Publication Date: - September 19, 2024 + September 24, 2024
    List of Figures
    List of Tables
    List of Examples

    Copyright © 2006–2024 diff --git a/SLEHA15SP2/html/SLE-HA-install-quick/art-sleha-install-quick.html b/SLEHA15SP2/html/SLE-HA-install-quick/art-sleha-install-quick.html index 02728cab0..b5fa7dbc6 100644 --- a/SLEHA15SP2/html/SLE-HA-install-quick/art-sleha-install-quick.html +++ b/SLEHA15SP2/html/SLE-HA-install-quick/art-sleha-install-quick.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Installation and Setup Quick Start

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Installation and Setup Quick Start

    Publication Date: September 24, 2024

    This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided by the diff --git a/SLEHA15SP2/html/SLE-HA-install-quick/index.html b/SLEHA15SP2/html/SLE-HA-install-quick/index.html index 02728cab0..b5fa7dbc6 100644 --- a/SLEHA15SP2/html/SLE-HA-install-quick/index.html +++ b/SLEHA15SP2/html/SLE-HA-install-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Installation and Setup Quick Start

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Installation and Setup Quick Start

    Publication Date: September 24, 2024

    This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided by the diff --git a/SLEHA15SP2/html/SLE-HA-nfs-quick/art-sleha-nfs-quick.html b/SLEHA15SP2/html/SLE-HA-nfs-quick/art-sleha-nfs-quick.html index 417336487..54769cce7 100644 --- a/SLEHA15SP2/html/SLE-HA-nfs-quick/art-sleha-nfs-quick.html +++ b/SLEHA15SP2/html/SLE-HA-nfs-quick/art-sleha-nfs-quick.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Highly Available NFS Storage with DRBD and Pacemaker

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Highly Available NFS Storage with DRBD and Pacemaker

    Publication Date: September 24, 2024

    This document describes how to set up highly available NFS storage in a two-node cluster, using the following components: diff --git a/SLEHA15SP2/html/SLE-HA-nfs-quick/index.html b/SLEHA15SP2/html/SLE-HA-nfs-quick/index.html index 417336487..54769cce7 100644 --- a/SLEHA15SP2/html/SLE-HA-nfs-quick/index.html +++ b/SLEHA15SP2/html/SLE-HA-nfs-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Highly Available NFS Storage with DRBD and Pacemaker

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Highly Available NFS Storage with DRBD and Pacemaker

    Publication Date: September 24, 2024

    This document describes how to set up highly available NFS storage in a two-node cluster, using the following components: diff --git a/SLEHA15SP2/html/SLE-HA-pmremote-quick/art-sleha-pmremote-quick.html b/SLEHA15SP2/html/SLE-HA-pmremote-quick/art-sleha-pmremote-quick.html index 669c1ad88..22e106b77 100644 --- a/SLEHA15SP2/html/SLE-HA-pmremote-quick/art-sleha-pmremote-quick.html +++ b/SLEHA15SP2/html/SLE-HA-pmremote-quick/art-sleha-pmremote-quick.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Pacemaker Remote Quick Start

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Pacemaker Remote Quick Start

    Publication Date: September 24, 2024

    This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in pacemaker_remote diff --git a/SLEHA15SP2/html/SLE-HA-pmremote-quick/index.html b/SLEHA15SP2/html/SLE-HA-pmremote-quick/index.html index 669c1ad88..22e106b77 100644 --- a/SLEHA15SP2/html/SLE-HA-pmremote-quick/index.html +++ b/SLEHA15SP2/html/SLE-HA-pmremote-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Pacemaker Remote Quick Start

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Pacemaker Remote Quick Start

    Publication Date: September 24, 2024

    This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in pacemaker_remote diff --git a/SLEHA15SP2/single-html/SLE-HA-geo-guide/book-sleha-geo_draft.html b/SLEHA15SP2/single-html/SLE-HA-geo-guide/book-sleha-geo_draft.html index 74a0ab852..4f4b41081 100644 --- a/SLEHA15SP2/single-html/SLE-HA-geo-guide/book-sleha-geo_draft.html +++ b/SLEHA15SP2/single-html/SLE-HA-geo-guide/book-sleha-geo_draft.html @@ -111,7 +111,7 @@ configuration of the required cluster resources (and how to transfer them to other sites in case of changes). Learn how to monitor and manage Geo clusters from command line or with the Hawk2 Web interface. -

    Publication Date: September 19, 2024 +

    Publication Date: September 24, 2024

    Copyright © 2006–2024 diff --git a/SLEHA15SP2/single-html/SLE-HA-geo-guide/index.html b/SLEHA15SP2/single-html/SLE-HA-geo-guide/index.html index 74a0ab852..4f4b41081 100644 --- a/SLEHA15SP2/single-html/SLE-HA-geo-guide/index.html +++ b/SLEHA15SP2/single-html/SLE-HA-geo-guide/index.html @@ -111,7 +111,7 @@ configuration of the required cluster resources (and how to transfer them to other sites in case of changes). Learn how to monitor and manage Geo clusters from command line or with the Hawk2 Web interface. -

    Publication Date: September 19, 2024 +

    Publication Date: September 24, 2024

    Copyright © 2006–2024 diff --git a/SLEHA15SP2/single-html/SLE-HA-geo-quick/art-sleha-geo-quick_draft.html b/SLEHA15SP2/single-html/SLE-HA-geo-quick/art-sleha-geo-quick_draft.html index e5835fd0a..8c7ee889d 100644 --- a/SLEHA15SP2/single-html/SLE-HA-geo-quick/art-sleha-geo-quick_draft.html +++ b/SLEHA15SP2/single-html/SLE-HA-geo-quick/art-sleha-geo-quick_draft.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Geo Clustering Quick Start

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Geo Clustering Quick Start

    Publication Date: September 24, 2024

    Geo clustering protects workloads across globally distributed data centers. This document guides you through the basic setup of a diff --git a/SLEHA15SP2/single-html/SLE-HA-geo-quick/index.html b/SLEHA15SP2/single-html/SLE-HA-geo-quick/index.html index e5835fd0a..8c7ee889d 100644 --- a/SLEHA15SP2/single-html/SLE-HA-geo-quick/index.html +++ b/SLEHA15SP2/single-html/SLE-HA-geo-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Geo Clustering Quick Start

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Geo Clustering Quick Start

    Publication Date: September 24, 2024

    Geo clustering protects workloads across globally distributed data centers. This document guides you through the basic setup of a diff --git a/SLEHA15SP2/single-html/SLE-HA-guide/book-sleha-guide_draft.html b/SLEHA15SP2/single-html/SLE-HA-guide/book-sleha-guide_draft.html index e70a377e7..c61c50c4f 100644 --- a/SLEHA15SP2/single-html/SLE-HA-guide/book-sleha-guide_draft.html +++ b/SLEHA15SP2/single-html/SLE-HA-guide/book-sleha-guide_draft.html @@ -111,7 +111,7 @@ interface and a command line interface (CLI). For performing key tasks, both approaches are covered in this guide. Thus, you can choose the appropriate tool that matches your needs.

    Publication Date: - September 19, 2024 + September 24, 2024

    Copyright © 2006–2024 @@ -6120,40 +6120,44 @@ lines.

 Example 12.3: Configuration of a Kdump Device

 Kdump belongs to the Special Fencing Devices and is in fact
 the opposite of a fencing device. The plug-in checks if a Kernel
 dump is in progress on a node. If so, it
-returns true, and acts as if the node has been fenced.
+returns true and acts as if the node has been fenced,
+because the node will reboot after the Kdump is complete.
+If not, it returns a failure and the next fencing device is triggered.

-The Kdump plug-in must be used in concert with another, real STONITH
-device, for example, external/ipmi. For the fencing
-mechanism to work properly, you must specify that Kdump is checked before
-a real STONITH device is triggered. Use crm configure
-fencing_topology to specify the order of the fencing devices as
+The Kdump plug-in must be used together with another, real STONITH
+device, for example, external/ipmi. It does
+not work with SBD as the STONITH device. For the fencing
+mechanism to work properly, you must specify the order of the fencing devices
+so that Kdump is checked before a real STONITH device is triggered, as
 shown in the following procedure.

 1.
-Use the stonith:fence_kdump resource agent (provided
-by the package fence-agents)
-to monitor all nodes with the Kdump function enabled. Find a
-configuration example for the resource below:
-configure
-  primitive st-kdump stonith:fence_kdump \
-    params nodename="alice "\ 1
+Use the stonith:fence_kdump fence agent.
+A configuration example is shown below. For more information,
+see crm ra info stonith:fence_kdump.
+# crm configure
+crm(live)configure# primitive st-kdump stonith:fence_kdump \
+    params nodename="alice" \ 1
+    pcmk_host_list="alice" \
     pcmk_host_check="static-list" \
     pcmk_reboot_action="off" \
     pcmk_monitor_action="metadata" \
     pcmk_reboot_retries="1" \
-    timeout="60"
-commit
-1
-Name of the node to be monitored. If you need to monitor more than one
-node, configure more STONITH resources. To prevent a specific node
-from using a fencing device, add location constraints.
-The fencing action will be started after the timeout of the resource.
-2.
-In /etc/sysconfig/kdump on each node, configure
-KDUMP_POSTSCRIPT to send a notification to all nodes
-when the Kdump process is finished. For example:
-KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i INTERVAL -p PORT -c 1 alice bob charlie"
-The node that does a Kdump will restart automatically after Kdump has
-finished.
+    timeout="60" 2
+crm(live)configure# commit
+1
+Name of the node to listen for a message from fence_kdump_send.
+Configure more STONITH resources for other nodes if needed.
+2
+Defines how long to wait for a message from fence_kdump_send.
+If a message is received, then a Kdump is in progress and the fencing mechanism
+considers the node to be fenced. If no message is received, fence_kdump
+times out, which indicates that the fence operation failed. The next STONITH device
+in the fencing_topology eventually fences the node.
+•
+On each node, configure fence_kdump_send to send a message to
+all nodes when the Kdump process is finished. In /etc/sysconfig/kdump,
+edit the KDUMP_POSTSCRIPT line. For example:
+KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 NODELIST"
+Replace NODELIST with the host names of all the cluster nodes.
 •
 Run either systemctl restart kdump.service or
 mkdumprd. Either of these commands will detect that
 /etc/sysconfig/kdump
@@ -6163,12 +6167,13 @@
 Open a port in the firewall for the fence_kdump resource. The
 default port is 7410.
 •
-To achieve that Kdump is checked before triggering a real fencing
+To have Kdump checked before triggering a real fencing
 mechanism (like external/ipmi),
-use a configuration similar to the following:
-fencing_topology \
+use a configuration similar to the following:
+crm(live)configure# fencing_topology \
   alice: kdump-node1 ipmi-node1 \
-  bob: kdump-node2 ipmi-node2
-For more details on fencing_topology:
-crm configure help fencing_topology
-• 12.4 Monitoring Fencing Devices
+  bob: kdump-node2 ipmi-node2
+crm(live)configure# commit
+For more details on fencing_topology:
+crm(live)configure# help fencing_topology
 12.4 Monitoring Fencing Devices
 Like any other resource, the STONITH class agents also support the
 monitoring operation for checking status.

    diff --git a/SLEHA15SP2/single-html/SLE-HA-guide/index.html b/SLEHA15SP2/single-html/SLE-HA-guide/index.html index e70a377e7..c61c50c4f 100644 --- a/SLEHA15SP2/single-html/SLE-HA-guide/index.html +++ b/SLEHA15SP2/single-html/SLE-HA-guide/index.html @@ -111,7 +111,7 @@ interface and a command line interface (CLI). For performing key tasks, both approaches are covered in this guide. Thus, you can choose the appropriate tool that matches your needs.

    Publication Date: - September 19, 2024 + September 24, 2024

    Copyright © 2006–2024 @@ -6120,40 +6120,44 @@ lines.

 Example 12.3: Configuration of a Kdump Device

 Kdump belongs to the Special Fencing Devices and is in fact
 the opposite of a fencing device. The plug-in checks if a Kernel
 dump is in progress on a node. If so, it
-returns true, and acts as if the node has been fenced.
+returns true and acts as if the node has been fenced,
+because the node will reboot after the Kdump is complete.
+If not, it returns a failure and the next fencing device is triggered.

-The Kdump plug-in must be used in concert with another, real STONITH
-device, for example, external/ipmi. For the fencing
-mechanism to work properly, you must specify that Kdump is checked before
-a real STONITH device is triggered. Use crm configure
-fencing_topology to specify the order of the fencing devices as
+The Kdump plug-in must be used together with another, real STONITH
+device, for example, external/ipmi. It does
+not work with SBD as the STONITH device. For the fencing
+mechanism to work properly, you must specify the order of the fencing devices
+so that Kdump is checked before a real STONITH device is triggered, as
 shown in the following procedure.

 1.
-Use the stonith:fence_kdump resource agent (provided
-by the package fence-agents)
-to monitor all nodes with the Kdump function enabled. Find a
-configuration example for the resource below:
-configure
-  primitive st-kdump stonith:fence_kdump \
-    params nodename="alice "\ 1
+Use the stonith:fence_kdump fence agent.
+A configuration example is shown below. For more information,
+see crm ra info stonith:fence_kdump.
+# crm configure
+crm(live)configure# primitive st-kdump stonith:fence_kdump \
+    params nodename="alice" \ 1
+    pcmk_host_list="alice" \
     pcmk_host_check="static-list" \
     pcmk_reboot_action="off" \
     pcmk_monitor_action="metadata" \
     pcmk_reboot_retries="1" \
-    timeout="60"
-commit
-1
-Name of the node to be monitored. If you need to monitor more than one
-node, configure more STONITH resources. To prevent a specific node
-from using a fencing device, add location constraints.
-The fencing action will be started after the timeout of the resource.
-2.
-In /etc/sysconfig/kdump on each node, configure
-KDUMP_POSTSCRIPT to send a notification to all nodes
-when the Kdump process is finished. For example:
-KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i INTERVAL -p PORT -c 1 alice bob charlie"
-The node that does a Kdump will restart automatically after Kdump has
-finished.
+    timeout="60" 2
+crm(live)configure# commit
+1
+Name of the node to listen for a message from fence_kdump_send.
+Configure more STONITH resources for other nodes if needed.
+2
+Defines how long to wait for a message from fence_kdump_send.
+If a message is received, then a Kdump is in progress and the fencing mechanism
+considers the node to be fenced. If no message is received, fence_kdump
+times out, which indicates that the fence operation failed. The next STONITH device
+in the fencing_topology eventually fences the node.
+•
+On each node, configure fence_kdump_send to send a message to
+all nodes when the Kdump process is finished. In /etc/sysconfig/kdump,
+edit the KDUMP_POSTSCRIPT line. For example:
+KDUMP_POSTSCRIPT="/usr/lib/fence_kdump_send -i 10 -p 7410 -c 1 NODELIST"
+Replace NODELIST with the host names of all the cluster nodes.
 •
 Run either systemctl restart kdump.service or
 mkdumprd. Either of these commands will detect that
 /etc/sysconfig/kdump
@@ -6163,12 +6167,13 @@
 Open a port in the firewall for the fence_kdump resource. The
 default port is 7410.
 •
-To achieve that Kdump is checked before triggering a real fencing
+To have Kdump checked before triggering a real fencing
 mechanism (like external/ipmi),
-use a configuration similar to the following:
-fencing_topology \
+use a configuration similar to the following:
+crm(live)configure# fencing_topology \
   alice: kdump-node1 ipmi-node1 \
-  bob: kdump-node2 ipmi-node2
-For more details on fencing_topology:
-crm configure help fencing_topology
-• 12.4 Monitoring Fencing Devices
+  bob: kdump-node2 ipmi-node2
+crm(live)configure# commit
+For more details on fencing_topology:
+crm(live)configure# help fencing_topology
 12.4 Monitoring Fencing Devices
 Like any other resource, the STONITH class agents also support the
 monitoring operation for checking status.

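The fencing_topology shown above tells the cluster to try the Kdump check first and to escalate to IPMI only if that check fails. As a rough sketch of the escalation semantics (a hypothetical Python model for illustration, not Pacemaker's actual implementation), fencing levels are tried in order and a node counts as fenced by the first level whose devices all succeed:

```python
def run_fencing_topology(levels, fence_device):
    """Toy model of fencing_topology escalation: levels are tried in
    order; a level succeeds only if every device in it succeeds. The
    first successful level fences the node. If a level fails (e.g. the
    Kdump check times out), fall through to the next one."""
    for level in levels:
        if all(fence_device(dev) for dev in level):
            return level  # node fenced by this level
    return None  # all levels failed; the node was not fenced

# For node alice as configured above: kdump is tried before IPMI.
alice_levels = [["kdump-node1"], ["ipmi-node1"]]
# Here the Kdump check fails, so the IPMI device fences the node.
result = run_fencing_topology(alice_levels, lambda dev: dev.startswith("ipmi"))
```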
    diff --git a/SLEHA15SP2/single-html/SLE-HA-install-quick/art-sleha-install-quick_draft.html b/SLEHA15SP2/single-html/SLE-HA-install-quick/art-sleha-install-quick_draft.html index f0a186caa..451801df0 100644 --- a/SLEHA15SP2/single-html/SLE-HA-install-quick/art-sleha-install-quick_draft.html +++ b/SLEHA15SP2/single-html/SLE-HA-install-quick/art-sleha-install-quick_draft.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Installation and Setup Quick Start

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Installation and Setup Quick Start

    Publication Date: September 24, 2024

    This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided by the diff --git a/SLEHA15SP2/single-html/SLE-HA-install-quick/index.html b/SLEHA15SP2/single-html/SLE-HA-install-quick/index.html index f0a186caa..451801df0 100644 --- a/SLEHA15SP2/single-html/SLE-HA-install-quick/index.html +++ b/SLEHA15SP2/single-html/SLE-HA-install-quick/index.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Installation and Setup Quick Start

    Publication Date: September 19, 2024 +

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Installation and Setup Quick Start

    Publication Date: September 24, 2024

    This document guides you through the setup of a very basic two-node cluster, using the bootstrap scripts provided by the diff --git a/SLEHA15SP2/single-html/SLE-HA-nfs-quick/art-sleha-nfs-quick_draft.html b/SLEHA15SP2/single-html/SLE-HA-nfs-quick/art-sleha-nfs-quick_draft.html index 30b24cf39..e84ba51cc 100644 --- a/SLEHA15SP2/single-html/SLE-HA-nfs-quick/art-sleha-nfs-quick_draft.html +++ b/SLEHA15SP2/single-html/SLE-HA-nfs-quick/art-sleha-nfs-quick_draft.html @@ -105,7 +105,7 @@ useBR: false }); -

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Highly Available NFS Storage with DRBD and Pacemaker

    -Publication Date: September 19, 2024
    +Publication Date: September 24, 2024

    This document describes how to set up highly available NFS storage in a two-node cluster, using the following components:

diff --git a/SLEHA15SP2/single-html/SLE-HA-nfs-quick/index.html b/SLEHA15SP2/single-html/SLE-HA-nfs-quick/index.html
index 30b24cf39..e84ba51cc 100644
--- a/SLEHA15SP2/single-html/SLE-HA-nfs-quick/index.html
+++ b/SLEHA15SP2/single-html/SLE-HA-nfs-quick/index.html
@@ -105,7 +105,7 @@
    useBR: false
    });

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Highly Available NFS Storage with DRBD and Pacemaker

    -Publication Date: September 19, 2024
    +Publication Date: September 24, 2024

    This document describes how to set up highly available NFS storage in a two-node cluster, using the following components:

diff --git a/SLEHA15SP2/single-html/SLE-HA-pmremote-quick/art-sleha-pmremote-quick_draft.html b/SLEHA15SP2/single-html/SLE-HA-pmremote-quick/art-sleha-pmremote-quick_draft.html
index 033356b73..265287022 100644
--- a/SLEHA15SP2/single-html/SLE-HA-pmremote-quick/art-sleha-pmremote-quick_draft.html
+++ b/SLEHA15SP2/single-html/SLE-HA-pmremote-quick/art-sleha-pmremote-quick_draft.html
@@ -105,7 +105,7 @@
    useBR: false
    });

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Pacemaker Remote Quick Start

    -Publication Date: September 19, 2024
    +Publication Date: September 24, 2024

    This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in pacemaker_remote

diff --git a/SLEHA15SP2/single-html/SLE-HA-pmremote-quick/index.html b/SLEHA15SP2/single-html/SLE-HA-pmremote-quick/index.html
index 033356b73..265287022 100644
--- a/SLEHA15SP2/single-html/SLE-HA-pmremote-quick/index.html
+++ b/SLEHA15SP2/single-html/SLE-HA-pmremote-quick/index.html
@@ -105,7 +105,7 @@
    useBR: false
    });

    This is a draft document that was built and uploaded automatically. It may document beta software and be incomplete or even incorrect. Use this document at your own risk.

    SUSE Linux Enterprise High Availability 15 SP2

    Pacemaker Remote Quick Start

    -Publication Date: September 19, 2024
    +Publication Date: September 24, 2024

    This document guides you through the setup of a High Availability cluster with a remote node or a guest node, managed by Pacemaker and pacemaker_remote. Remote in pacemaker_remote