Red Hat Cluster Suite Deployment Tool (Version 1) User's Guide

Rapid installation and configuration of clusters

Updated Nov. 21, 2005

  1. Motivation

    This project grew out of an attempt to view cluster deployment from a service-oriented point of view. For example, what if a user were able to look at a list of services running on a blade server, select a "Cluster This Application" button next to the service, and have it just work? What is the minimal amount of user interaction needed to get a two-, three-, or four-node cluster off the ground, from package install to having the cluster up and running?

    The result is a cluster deployment tool that offers rapid, automated installation and configuration of cluster packages in exchange for adhering to a set of mild restrictions that match the typical configuration used by the majority of our customers.

  2. What it does

    The Deployment Tool (DT) uses five UI screens to set up a complete cluster; the screens are walked through in the "How to use this tool" section below.
  3. Requirements

    1. The machine on which the DT is running cannot be used in the cluster. This restriction may be relaxed later, but for now, the DT package must be downloaded to and run on a machine that drives the installation without joining the cluster.
    2. For this version of the DT, only RHEL4 is supported. The DT was built to work on RHEL4 and FC4, but there is currently an issue between the DT and the version of LVM2 shipping with FC4, so support for Fedora Core installs is suspended until FC5 is released in early 2006. In addition, because the DT uses RHN to download the proper packages for each cluster node, all intended cluster nodes must have Cluster Suite/GFS entitlements in place.

      Note: If the intended cluster nodes already have Cluster Suite packages on them, DT will ensure that the packages are the latest. In addition, if an intended node is already part of a running cluster, the user is notified and the node cannot be added. If an intended node is part of a cluster that is not currently running, then the configuration file (cluster.conf) will be overwritten without notification.

    3. All intended cluster nodes must have access to the same shared storage. Storage is checked on each node, and SCSI IDs are compared to verify that the external storage seen by each node is the same storage visible from the other nodes. After all nodes for a cluster have been specified, if more than one LUN is visible to the nodes, the user is offered a selection menu for choosing which storage to use for services running in the cluster. If no shared storage is found, the cluster may still be deployed, but services such as httpd or NFS would have to be configured outside of the DT.

      Note: For this release, if an intended node has access to more than one shared storage device, all of its shared storage must be visible to all intended nodes in the cluster. For example, if node 1 can see two SAN devices, then all intended nodes must be able to see them as well. Using the DT, only one device may be selected. If additional storage is required for services on the cluster, it can be configured outside of the DT afterwards.

    4. Multipath devices cannot yet be used for shared storage. Also, iSCSI and GNBD devices must be configured on each node ahead of time.
    5. The intended nodes must have sshd enabled, and the user must know the root password of each intended node. In addition, the intended nodes must be reachable over the network: they need working connectivity, static addresses or resolvable hostnames, and firewall settings that do not block cluster traffic. (A rough pre-flight check along these lines is sketched after this list.)
    6. All intended cluster nodes must use the same method for fencing. Currently, the following fencing types are supported:
      1. iLO
      2. WTI
      3. APC
      4. BladeCenter
      5. IPMI
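
    As a rough pre-flight check, the requirements above can be verified by hand before running the DT. The sketch below is an illustration only and makes several assumptions: the node names in NODES and the device name in DEV are placeholders, root ssh access is assumed, and /sbin/scsi_id (shipped with udev on RHEL4) is assumed to be present on the nodes; its exact options can vary by release.

      # Rough pre-flight check, run from the machine that will run the DT.
      # NODES and DEV are placeholders -- substitute your own hostnames and device.
      NODES="node1 node2 node3"
      DEV=sdb

      for n in $NODES; do
          echo "== $n =="
          # Confirm the node answers over ssh and sshd is running (prompts for the root password).
          ssh root@$n "service sshd status" || { echo "$n: ssh/sshd check failed"; continue; }
          # Print the SCSI ID of the candidate shared device; every node should
          # report the same ID if they all see the same external storage.
          ssh root@$n "/sbin/scsi_id -g -s /block/$DEV"
      done
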
  4. How to use this tool

      1. The package cs-deploy-tool.rpm needs to be pulled down from http://people.redhat.com/rkenna/gfs-deploy-tool/. After installing the rpm, run /usr/bin/cs-deploy-tool (example shell commands follow this step). You will be greeted with the following welcome screen:

        Click OK to continue
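
        The download, install, and launch described in this step can be done from a shell along the following lines. The rpm file name is shown without a version for brevity; use the exact file name listed on the download page.

          # Download and install the deployment tool, then launch it.
          # The rpm file name is a placeholder; use the name listed on the download page.
          wget http://people.redhat.com/rkenna/gfs-deploy-tool/cs-deploy-tool.rpm
          rpm -Uvh cs-deploy-tool.rpm
          /usr/bin/cs-deploy-tool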

      2. Choose a fence method from the drop-down box, and then configure the fence method in the properties dialog that follows. This dialog will vary depending on which fence method is chosen.
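
        If you want to confirm the fence device settings before entering them in the DT, the fence agents in the Cluster Suite "fence" package (which must be installed on whatever machine you run the test from) can also be driven by hand. The line below is a hypothetical example for an APC switch only; the address, login, password, and port are placeholders, and the available options and actions (such as status) vary by agent and release, so consult the agent's man page (for example, man fence_apc) first.

          # Hypothetical manual check of an APC power switch. The address, login,
          # password, and port are placeholders, and the supported actions/options
          # differ between fence agents and releases -- check the man page first.
          fence_apc -a 10.0.0.50 -l apclogin -p apcpassword -n 1 -o status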

      3. After the fence type is chosen and configured, the cluster installation screen is launched. In the Cluster Name text box, pick a name for the cluster (or keep the default if the name does not matter) and then begin adding nodes. Nodes are added by entering a hostname or IP address in the empty text field and then clicking Add. Depending on the fence method chosen, a small properties field may need to be completed when the node is added. For example, if an APC or WTI power switch has been chosen for fencing, the switch port needs to be designated for each node.

        Below is what the Cluster Installation window looks like initially:

        After each node is added, you are prompted for the password for the node, and then the node is probed for status, shared storage visibility, and other information. This could take up to 30 seconds per node.

        After the second node is added, a screen is displayed for selecting shared storage. All storage visible to the first two nodes is displayed. Check just one storage device. The following example shows a typical Shared Storage Configuration screen with a selection made:

      4. Optional: Set up services to run on the cluster. Each service should have its own Logical Volume, GFS file system, and mount point established for it. In the case of httpd, the mount point field is below the virtual IP address in the configuration section. Checking the Enabled box tells the DT to install Apache with these settings.

        In the example screen below, a generic service is also configured. A generic service is one other than httpd or NFS. For assistance in deploying a generic service, please contact cluster-list@redhat.com.

        In the case of an NFS configuration, each NFS export gets its own Logical Volume and GFS. A typical NFS configuration screen is shown below:

        Note: It is not imperative that services be set up with the DT. Services and fencing can both be set up with system-config-cluster after running the DT, and the configuration may be modified later with system-config-cluster as well. The DT is a valuable tool for quickly getting a cluster and optional services up and running.
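
        For reference, additional clustered storage of the kind the DT creates automatically can also be added by hand later. The sketch below is illustrative only: shared_vg, extra_lv, extra_gfs, mycluster, the size, and the mount point are placeholders (substitute the Volume Group the DT created and your own cluster name), clvmd must be running on the nodes, and the journal count given to -j should be at least the number of nodes.

          # Illustrative commands for adding a Logical Volume and GFS file system by hand.
          # All names, the size, and the mount point below are placeholders.
          lvcreate -L 5G -n extra_lv shared_vg
          gfs_mkfs -p lock_dlm -t mycluster:extra_gfs -j 3 /dev/shared_vg/extra_lv
          mkdir -p /mnt/extra
          mount -t gfs /dev/shared_vg/extra_lv /mnt/extra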

      5. To finalize the deployment, click the Install button. Here is what the Install button initiates, in detail:

        1. Cluster rpms (rgmanager, ccs, magma, magma-plugins, fence, cman, dlm, GFS, lvm2-cluster, system-config-cluster, and the cman-kernel, dlm-kernel, and GFS-kernel modules) are installed or updated. SMP and hugemem variants are installed if needed.
        2. If httpd or NFS services were configured, the corresponding packages are installed and configured.
        3. A cluster.conf file without a services section is generated and copied to all nodes.
        4. On each node, "chkconfig <service> on" is run for ccsd, cman, fenced, rgmanager, and clvmd.
        5. Each node is rebooted and cluster services start.
        6. A Volume Group is created on the selected shared storage.
        7. Logical Volumes are created as follows:
          • If httpd is configured, a Logical Volume is created for it of the size specified.
          • If NFS is configured, a Logical Volume is created for each NFS Export.
          • If storage for a generic service is specified, a Logical Volume is created.
        8. GFS is created on all new Logical Volumes.
        9. If httpd was configured:
          1. A welcome page is saved into /<httpd mount point>/www/html/index.html.
          2. A sample cgi-bin script is saved into /<httpd mount point>/www/cgi-bin/sample.
          3. The line "Include /<httpd mount point>/conf.d/*.conf" is saved into /etc/httpd/conf.d/zz_cluster_includes.conf on all nodes.
          4. Ownership of saved files is set to apache:apache.
          5. "service apache stop" is executed on all nodes, followed by "chkconfig --del httpd" (so that only rgmanager starts them, as directed in cluster manual).
          6. SELinux is disabled on all nodes (Apache runs in an SELinux sandbox, and GFS does not support SELinux contexts). At the end of the installation, a popup spells out all post-install messages; the DT notifies the user there that SELinux has been disabled on all nodes.
          7. A shared httpd configuration file (which sets up the virtual server, document root, and cgi-bin root) is saved in /<httpd mount point>/conf.d/clustered_www.conf. Any httpd configuration changes (for all nodes) can be made there.
        10. If NFS was configured, "service nfs start", "service nfslock start", "chkconfig nfs on", and "chkconfig nfslock on" are executed on all nodes.
        11. If httpd, NFS, or a generic service was configured, a cluster.conf file with a <service> section is propagated to all nodes.
        12. No special action is taken for generic services. Users of this mechanism must ensure that their application resides on all necessary nodes, and that access to shared storage is available to them.
        13. The installation is now complete. If you wish to modify or fine tune any cluster or service settings, you can use system-config-cluster.
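
        Once the nodes have come back up, a quick sanity check can be run on any cluster node. The commands below come from the Cluster Suite packages installed earlier; their output formats vary by release, and the grep pattern is just one convenient way to filter the chkconfig listing.

          # Quick post-install sanity check, run as root on any cluster node.
          cman_tool status                                             # membership and quorum
          clustat                                                      # cluster and service status (rgmanager)
          chkconfig --list | egrep 'ccsd|cman|fenced|rgmanager|clvmd'  # start-on-boot settings
          mount -t gfs                                                 # GFS file systems currently mounted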

      Please direct all questions and comments to cluster-list@redhat.com