- The package cs-deploy-tool.rpm must be downloaded from
http://people.redhat.com/rkenna/gfs-deploy-tool/.
After installing the rpm, run /usr/bin/cs-deploy-tool. You will be greeted
with the following welcome screen:
Click OK to continue.
-
Choose a fence method
from the drop-down box, and then configure it in the properties dialog
that follows. This dialog varies depending on which fence method
is chosen.
-
After the fence type is chosen and configured, the cluster installation screen is launched.
In the Cluster Name text box, enter a name for the cluster (or keep the default if the name does not matter),
and then begin adding nodes. Add a node by entering its hostname or IP address
in the empty text field and clicking Add. Depending on the fence method
chosen, a small properties field may need to be completed when the node is added. For example, if an APC or WTI power switch was chosen for fencing, the switch port must be designated for each node.
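For reference, the switch port entered for each node ends up in that node's fence section of the generated cluster.conf. A sketch of the relevant fragment, assuming an APC fence device named "apc1" (the node name, device name, and port value are all illustrative):

```xml
<clusternode name="node1">
        <fence>
                <method name="1">
                        <!-- port = the switch outlet that powers this node -->
                        <device name="apc1" port="1"/>
                </method>
        </fence>
</clusternode>
```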
Below is what the Cluster Installation window looks like initially:
After each node is added, you are prompted for that node's password,
and the node is then probed for status, shared-storage visibility, and other information. This
can take up to 30 seconds per node.
After the second node is added, a screen is displayed for
selecting shared storage. All storage visible to both of the first two nodes is displayed. Check exactly one storage device.
The following example shows a typical Shared Storage Configuration screen with a selection made:
-
Optional: Set
up services to run on the cluster. Each service should have its
own Logical Volume, GFS file system, and mount point. In
the case of httpd, the mount point field is below the virtual IP address in the
configuration section. A checked Enabled box tells the
DT to install Apache with these settings.
In the example screen below, a generic service is also configured. A generic service is any service other than httpd or NFS. For assistance in deploying a generic service, please contact cluster-list@redhat.com.
In the case of an NFS configuration, each NFS export gets its own Logical Volume and GFS file system. A typical NFS configuration screen is shown below:
Note: It is not imperative that services be set up with the DT. Services and
fencing can both be set up after running the DT with system-config-cluster, and the configuration can likewise be modified later with system-config-cluster. The DT is simply a valuable tool for quickly getting a cluster and optional services up and running.
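When a service is enabled here, the DT ultimately records it as a <service> section in cluster.conf. A rough sketch of what an httpd entry can look like, assuming rgmanager's <rm> resource syntax; the IP address, device path, and names are illustrative, not what the DT emits verbatim:

```xml
<rm>
        <service name="httpd_svc" autostart="1">
                <!-- virtual IP the clustered web server answers on -->
                <ip address="10.0.0.100" monitor_link="1"/>
                <!-- the service's own GFS file system -->
                <clusterfs name="httpd_gfs" mountpoint="/mnt/httpd"
                           device="/dev/cluster_vg/httpd_lv" fstype="gfs"/>
                <script name="httpd_init" file="/etc/init.d/httpd"/>
        </service>
</rm>
```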
-
To finalize the deployment, click the Install button. In detail, clicking Install initiates the following:
- Cluster rpms (rgmanager, ccs, magma, magma-plugins, fence, cman, dlm, GFS, lvm2-cluster, s-c-cluster, cman/dlm/GFS-kernel) are installed or updated. SMP and hugemem kernel variants are installed if needed.
- If services were configured, httpd or NFS are installed and configured.
- The cluster.conf file without a services section is generated and copied to all nodes.
- On each node: "chkconfig ccsd/cman/fenced/rgmanager/clvmd on" is run.
- Each node is rebooted and cluster services start.
- A Volume Group is created on selected shared storage.
- Logical Volumes are created as follows:
- If httpd is configured, a Logical Volume is created for it of the size specified.
- If NFS is configured, a Logical Volume is created for each NFS Export.
- If storage for a generic service is specified, a Logical Volume is created.
- GFS is created on all new Logical Volumes.
- If httpd was configured:
- A welcome page is saved into /<httpd mount point>/www/html/index.html.
- A sample cgi-bin script is saved into /<httpd mount point>/www/cgi-bin/sample.
- The line "Include /<httpd mount point>/conf.d/*.conf" is saved into /etc/httpd/conf.d/zz_cluster_includes.conf on all nodes.
- Ownership of saved files is set to apache:apache.
- "service apache stop" is executed on all nodes, followed by "chkconfig --del httpd" (so that only rgmanager starts them, as directed in cluster manual).
- SELinux is disabled on all nodes (Apache runs in an SELinux sandbox, and GFS does not support SELinux contexts). At the end of installation, a popup spells out all post-install messages; the DT notifies the user that SELinux has been disabled on all nodes.
- Shared httpd configuration file (that sets up virtual server, docroot, and cgi-bin root) is saved in /<httpd mount point>/conf.d/clustered_www.conf. Any config changes for httpd (for all nodes) can be made here.
- If NFS was configured, "service nfs start", "service nfslock start", "chkconfig nfs on", and "chkconfig nfslock on" are executed on all nodes.
- If httpd, NFS, or a generic service was configured, a cluster.conf file with a <service> section is propagated to all nodes.
- No special action is taken for generic services. Users of this mechanism must ensure that their application resides on all necessary nodes, and that access to shared storage is available to them.
- The installation is now complete. To modify or fine-tune any cluster or service settings, use system-config-cluster.
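For those who want to see what happens under the hood, the per-node and storage steps above can be sketched as a short script. This sketch only prints the equivalent commands rather than running them; the node names, device path, volume names, size, and the cluster name "mycluster" are all hypothetical placeholders:

```shell
#!/bin/sh
# Sketch only: print, rather than execute, commands equivalent to the DT's
# install steps. All names, paths, and sizes below are placeholders.
NODES="node1 node2"

# Enable the cluster daemons on every node.
for node in $NODES; do
    for svc in ccsd cman fenced rgmanager clvmd; do
        echo "ssh $node chkconfig $svc on"
    done
done

# One-time storage setup from a single node, once the cluster is up:
# a Volume Group on the selected shared device, one Logical Volume per
# service, and GFS on each new Logical Volume.
echo "pvcreate /dev/sdb"
echo "vgcreate cluster_vg /dev/sdb"
echo "lvcreate -L 10G -n httpd_lv cluster_vg"
echo "gfs_mkfs -p lock_dlm -t mycluster:httpd_gfs -j 2 /dev/cluster_vg/httpd_lv"
```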