Oracle® Application Server 10g Installation Guide
10g (9.0.4) for hp HP-UX PA-RISC (64-bit) and Linux x86
Part No. B10842-03

J Installing Oracle Cluster Management Software on Linux

This appendix provides information about installing Oracle Cluster Management Software on Linux, which is a requirement for installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster environment.

This appendix includes the following topics:

Section J.1, "Overview"
Section J.2, "Requirements"
Section J.3, "Pre-installation Steps"
Section J.4, "Installation Steps"
Section J.5, "Post-installation Steps"
Section J.6, "Using Oracle Cluster Management Software"
Section J.7, "Using a Private Network"
Section J.8, "Deinstallation Steps"

J.1 Overview

The Oracle Cluster Management Software allows you to create a cluster of Linux systems for an OracleAS Active Failover Cluster environment, which is described in Section 9.3, "OracleAS Active Failover Cluster".


Note:

The cluster created by Oracle Cluster Management Software is not a general-purpose cluster. Oracle supports this cluster only within an OracleAS Active Failover Cluster environment.

The Oracle Cluster Management Software is required both during the installation and at runtime of OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster environment. During the installation, the option to perform the installation in an OracleAS Active Failover Cluster environment is only available if:

Oracle Cluster Management Software is installed on all nodes of the cluster.

An Oracle Cluster Management Software instance is running on all nodes of the cluster.

At runtime, certain OracleAS Infrastructure 10g components (such as the Metadata Repository) will start only if an Oracle Cluster Management Software instance exists.

Once Oracle Cluster Management Software has been installed, all cluster nodes are marked as Oracle Cluster Management Software nodes. Being marked as a cluster node is a requirement for installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster environment, but it can affect the ability to install other Oracle products. If Oracle Cluster Management Software is installed and you plan to install an Oracle product other than OracleAS Infrastructure 10g, Oracle recommends that you unmark the nodes as Oracle Cluster Management Software cluster nodes before you attempt the planned installation, and that you use a different /etc/oraInst.loc file from the one used for the Oracle Cluster Management Software installation. Unmarking is described in Section J.8, "Deinstallation Steps". Unmarking affects only the installation, not the runtime, of Oracle products and does not affect the OracleAS Active Failover Cluster environment after it has been successfully installed.


Note:

Oracle recommends that you store installation-related files for Oracle Cluster Management Software in an empty directory. If the /etc/oraInst.loc file exists on the nodes where Oracle Cluster Management Software is being installed, rename it to /etc/oraInst.loc.orig and specify an empty directory for installation-related files during the Oracle Cluster Management Software installation. The installer creates a new /etc/oraInst.loc file pointing to this empty directory.
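
For example, as the root user on each node where the file exists, the rename described in this note can be performed as follows:

# mv /etc/oraInst.loc /etc/oraInst.loc.orig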

The version of Oracle Cluster Management Software included with this release is the same version as that included with Oracle9i Release 1 (9.0.1).

J.2 Requirements

The requirements for installing the Oracle Cluster Management Software are:

A raw disk partition of at least 128 MB that is accessible from all nodes of the cluster, to be used as the quorum disk (see Table J-2, step 9).

A kernel that provides the hangcheck-timer module, as described in Section J.3.4, "Check the hangcheck-timer Module".

Identical users, groups, and directories on all nodes of the cluster, with user equivalency and working remote shell and remote copy between the nodes, as described in Section J.3, "Pre-installation Steps".

J.3 Pre-installation Steps

Complete the following sections before installing Oracle Cluster Management Software:

Section J.3.1, "Set Up Node Equivalency for User, Group and Directory"
Section J.3.2, "Set Up User Equivalency"
Section J.3.3, "Check Remote Copy and Remote Shell Capability"
Section J.3.4, "Check the hangcheck-timer Module"

J.3.1 Set Up Node Equivalency for User, Group and Directory


Note:

For more information on setting up node equivalency, see Section 9.3.1.5, "Create Identical Users and Groups on All Nodes in the Cluster".

Set up the same user, group, and directories for the installation and for temporary files on each node of the cluster. For example, if you want to install Oracle Cluster Management Software on a two-node cluster as the oracle user in the oinstall group, install into the /mnt/app/oracle/OraInfra_904 directory, and store temporary files in the directories set in the TMP and TMPDIR environment variables, perform the following checks (a command sketch follows the list):


Note:

The examples in this appendix assume you are installing the Oracle Cluster Management Software as the oracle user. If you install the software as a different user, substitute that username for all instances of the oracle user in this appendix.

  1. Check that the oracle user exists on each node.

  2. Check that the oracle user belongs to the oinstall group on each node.

  3. Check that the oracle user has permissions to write in the /mnt/app/oracle/OraInfra_904 directory on each node.

  4. Check that the /mnt/app/oracle/OraInfra_904 directory is within the file system set up for the Oracle Cluster Management Software installation on each node.

  5. Check that the directories specified in the TMP or TMPDIR environment variables exist and contain sufficient space for temporary files on each node.


    Note:

    The TMP and TMPDIR environment variable requirement refers to the shell of the oracle user who performs the installation. These directories must exist on the installation node as well as on all the other nodes of the cluster.
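
The following Bourne shell sketch illustrates checks 1 through 5 on one node, assuming the example oracle user, oinstall group, and /mnt/app/oracle/OraInfra_904 directory above; the temporary file name tstwrt is arbitrary, and the exact commands may vary with your distribution. Run the commands as the oracle user on each node:

$ id oracle                                 # check 1: the user exists
$ id -Gn oracle | grep -w oinstall          # check 2: the user is in the oinstall group
$ touch /mnt/app/oracle/OraInfra_904/tstwrt # check 3: the user can write to the directory
$ rm /mnt/app/oracle/OraInfra_904/tstwrt
$ df -k /mnt/app/oracle/OraInfra_904        # check 4: file system containing the directory
$ df -k ${TMPDIR:-${TMP:-/tmp}}             # check 5: temporary directory and its free space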

J.3.2 Set Up User Equivalency


Note:

For more information on setting up user equivalency, see Section 9.3.1.6, "Set Up User Equivalence".

To set up user equivalency:

  1. Add the node names of all the nodes in the cluster to the .rhosts file in the home directory of the user who will perform the Oracle Cluster Management Software installation. Remember to include the node name of the local node, that is, the node on which you are modifying the .rhosts file.

  2. Repeat step 1 for each node of the planned cluster.

For example, if you want to install Oracle Cluster Management Software on a two node cluster, node1 and node2, using the oracle user, make sure the following entries exist in the .rhosts file in the home directory of the oracle user on both node1 and node2:

node1
node2
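
One way to create these entries, run as the oracle user on each node (a sketch; rsh typically requires that the .rhosts file be owned by the user and not writable by group or others):

$ echo node1 >> $HOME/.rhosts
$ echo node2 >> $HOME/.rhosts
$ chmod 600 $HOME/.rhosts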

J.3.3 Check Remote Copy and Remote Shell Capability

Make sure that remote shell and remote copy work on each node of the cluster. For example, to check remote copy and remote shell capability from node2 to node1 of a two node cluster for the oracle user:


Note:

Oracle Cluster Management Software is also supported if the nodes are configured to use Secure Shell (scp and ssh). Substitute scp for rcp, and ssh for rsh in the procedure if the nodes are configured to use scp and ssh.

  1. Make sure the following files exist:

    /usr/bin/rcp
    /usr/bin/rsh
    
    
  2. Log in as the oracle user on node1.

  3. Enter the following command on node1:

    $ echo hello > /tmp/testfile
    
    
  4. Log in as the oracle user on node2.

  5. Enter the following commands on node2:

    $ cd /tmp
    $ /usr/bin/rsh node1 ls /tmp
    
    

    The output should list the contents of the /tmp directory on node1 without pausing for a response.

  6. Enter the following commands on node2:

    $ cd /tmp
    $ /usr/bin/rcp node1:/tmp/testfile .
    $ cat /tmp/testfile
    
    

    The output of the cat command should list the string hello and the rcp command should not prompt you for a username or password.

  7. Repeat steps 1 to 6, swapping node1 and node2, to ensure that node1 has remote access to node2.


Note:

If your system is configured to use scp and ssh, ensure that the equivalent test commands for scp and ssh in steps 5 and 6 execute without pausing for a response. If a command pauses for a Yes or No response, answer Yes; the prompt usually does not appear when you run the command again. Also, ensure that no error or warning messages are sent to stderr.
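
For reference, the equivalent checks for steps 5 and 6, run as the oracle user on node2 of a system configured for Secure Shell, might look like the following (this assumes ssh access from node2 to node1 already works without prompting):

$ cd /tmp
$ ssh node1 ls /tmp
$ scp node1:/tmp/testfile .
$ cat /tmp/testfile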

J.3.4 Check the hangcheck-timer Module

Check whether the hangcheck-timer module is already loaded into the kernel by running the following command on each node of the cluster:

# /sbin/lsmod | grep hangcheck-timer

If a line containing the string hangcheck-timer is displayed, the hangcheck-timer module is already loaded. Ask the system administrator of the node for the parameter values used to load the module. On Red Hat Linux distributions, these are usually recorded in the /etc/rc.d/rc.local file. If the parameters differ from the values in Table J-1, ask the system administrator to unload the hangcheck-timer module and reload it using the instructions in this section.

Table J-1 Required Parameter Values for hangcheck-timer Module

Parameter         Value
hangcheck_tick    30
hangcheck_margin  180
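
For example, on a Red Hat system where the module is loaded from the /etc/rc.d/rc.local file, you can display the recorded load command and its parameter values as follows (the output shown is illustrative):

# grep hangcheck-timer /etc/rc.d/rc.local
/sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180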

The hangcheck-timer module is included with the kernel rpm on the Red Hat Enterprise Linux AS/ES 2.1 distribution. To load the hangcheck-timer module into the kernel, perform the following steps on each node:

  1. Run the following command to determine the kernel version-type:

    # uname -a
    
    

    An example of a kernel version-type is 2.4.9-e.25smp.

  2. Confirm that the hangcheck-timer module is available on the node by entering the following command, where kernel_version-type is the kernel version-type from step 1:

    # ls /lib/modules/kernel_version-type/kernel/drivers/char/hangcheck-timer.o
    
    
  3. Load the module into the kernel by entering the following command:

    # /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
    
    
  4. Check that the module loaded correctly by entering the following command:

    # /sbin/lsmod | grep hangcheck-timer
    
    
  5. Add the following line to a system initialization script to ensure the module is loaded on system startup:

    /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
    
    

    For example, add the line to the /etc/rc.d/rc.local file on Red Hat systems.


Note:

You only need to load the hangcheck-timer module once. Reloading the module is only required if you unload the module or if the module was loaded with incorrect parameters.

J.4 Installation Steps

Start the installer:

  1. If the Linux system does not mount CD-ROMs or DVDs automatically, you need to set the mount point manually. See Section 5.14, "Setting the Mount Point for the Discs" for details.

  2. Log in. Typically you install the Oracle Cluster Management Software as the oracle user.

  3. Insert Oracle Application Server Disk 1 or the DVD into the CD-ROM or DVD drive.

Follow the steps in Table J-2 to install Oracle Cluster Management Software.

Table J-2 Steps for Installing Oracle Cluster Management Software


Screen Action
1. -- Run the Oracle Universal Installer from the CD-ROM or DVD by entering the following commands:

CD-ROM users:

prompt> cd
prompt> mount_point/ocms/runInstaller

DVD users:

prompt> cd
prompt> mount_point/application_server/ocms/runInstaller

2. Welcome Click Next.
3. Specify Inventory Directory This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the full path for the inventory directory: Enter a full path to a directory where you want the installer to store its files. The installer uses these files to keep track of all Oracle products that are installed on this computer. Enter a directory that is different from the Oracle home directory.

Example: /mnt/app/oracle/oraInventory

Click OK.

4. UNIX Group Name This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the name of the operating system group that should have permission to update Oracle software installations on this system.

Example: oinstall

Click Next.

5. Run orainstRoot.sh This screen appears only if this is the first installation of any Oracle product on this computer.

Run the orainstRoot.sh script in a different shell as the root user. The script is located in the installer inventory directory specified in the Specify Inventory Directory screen.

Run the script on the node where you are running the installer.

Click Continue after you have run the script.

6. Specify File Locations Destination Name: Enter a name to identify this Oracle home.

Example: oracleas

Destination Path: Enter the full path to the destination directory. This is the Oracle home. The installer will use this path as the Oracle home for all nodes.

Example: /mnt/app/oracle/OraInfra_904

Click Next.

7. Language Selection This release of the Oracle Cluster Management Software supports only the English language. English is automatically selected and cannot be deselected.

Click Next.

8. Cluster Node Selection Enter the hostnames of all the remote nodes of the cluster. You do not need to enter the hostname of the local node that you are using to perform the Oracle Cluster Management Software installation.

Click Next.

9. Quorum Disk Information Enter the full path to the 128 MB raw disk partition created for the Oracle Cluster Management Software installation.

Click Next.

10. Summary Verify your selections. Items listed in red indicate an issue that will cause the installation to fail. In particular, expand all items under Space Requirements to confirm that sufficient disk space is available for the installation.

Click Install.

11. Install Progress This screen shows the progress of the installation.
12. Run root.sh Note: Do not run the root.sh script until prompted.

When prompted, run the root.sh script in a different shell as the root user. The script is located in this instance's Oracle home directory.

Note: You have to run this script on each node where you are installing Oracle Cluster Management Software.

Click OK after you have run the script on all nodes.

13. End of Installation Click Exit to quit the installer.

J.5 Post-installation Steps

After running the installer, complete the following steps on each node of the cluster to confirm that the installation was successful and to configure the installation:

  1. Confirm that the Destination_Path_of_Oracle_Home/oracm/admin/nmcfg.ora file exists and contains the hostnames and the quorum disk path that you entered during the installation.

  2. Modify the Destination_Path_of_Oracle_Home/oracm/admin/ocmargs.ora file and replace all occurrences of dba with the group name of the Destination_Path_of_Oracle_Home/oracm/bin/oracm file (a scripted example follows this list).

    For example, the output of the following command shows that the group name is oinstall:

    $ ls -l /mnt/app/oracle/OraInfra_904/oracm/bin/oracm
     -rwxr-xr-x  1 oracle  oinstall  251385 Oct 31 15:50  oracm
    
    

    In this case, replace dba with oinstall.

  3. Confirm that the /var/opt/ORCLcluster/oracm/lib directory contains the following two files on each node of the cluster:

    libcmdll.so 
    libwddapi.so 
    

    Note:

    The /var/opt/ORCLcluster/oracm/lib directory and the files in that directory are created when you run the root.sh script during the Oracle Cluster Management Software installation. This is described in step 12 of Table J-2.
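
As an illustration of step 2, assuming the group name is oinstall and the Oracle home is /mnt/app/oracle/OraInfra_904, the replacement can be scripted as follows. This is only a sketch: the backup file name ocmargs.ora.orig is arbitrary, and because a plain text substitution changes every occurrence of the string dba, review the edited file before using it:

$ cd /mnt/app/oracle/OraInfra_904/oracm/admin
$ cp ocmargs.ora ocmargs.ora.orig
$ sed 's/dba/oinstall/g' ocmargs.ora.orig > ocmargs.ora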

J.6 Using Oracle Cluster Management Software

To start an Oracle Cluster Management Software instance, run commands similar to the following on all nodes of the cluster as the root user. Example in Bourne shell:

# ORACLE_HOME=Destination_Path_of_Oracle_Home
# export ORACLE_HOME
# $ORACLE_HOME/oracm/bin/ocmstart.sh

The ocmstart.sh script starts one watchdogd process and multiple oracm and oranm processes. Use the commands listed in Section 9.1.2.3, "Checking Oracle Cluster Management Software on Linux" to confirm that an Oracle Cluster Management Software instance has been started. If an Oracle Cluster Management Software instance is not detected, check the .log files in the $ORACLE_HOME/oracm/log directory for more information.
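
For a quick look at the expected processes on a node (a sketch only; Section 9.1.2.3 describes the supported verification commands):

# ps -ef | egrep 'watchdogd|oracm|oranm' | grep -v grep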

To stop an Oracle Cluster Management Software instance, run commands similar to the following on all nodes of the cluster as the root user. Example in Bourne shell:

# ORACLE_HOME=Destination_Path_of_Oracle_Home
# export ORACLE_HOME
# $ORACLE_HOME/oracm/bin/ocmstop.sh

J.7 Using a Private Network

Typically, a cluster is configured using a private interconnect to separate the cluster traffic from all other network traffic. This helps to maximize performance. Normally, the private interconnect is created by adding an additional Network Interface Card (NIC) on all nodes of the cluster. If such a configuration is available, complete the following steps to use the private interconnect:

  1. Modify Destination_Path_of_Oracle_Home/oracm/admin/nmcfg.ora and replace all public hostnames with the hostnames configured and recognized by the private interconnect.

  2. Repeat step 1 on all nodes of the cluster.

  3. Stop and restart the Oracle Cluster Management Software instances on all nodes of the cluster as described in Section J.6, "Using Oracle Cluster Management Software".


    Note:

    Remote shell and remote copy must work on each node of the cluster using the private network hostnames. Complete the steps described in Section J.3.3, "Check Remote Copy and Remote Shell Capability" using the hostnames configured for the private interconnect to make sure remote shell and remote copy capabilities are configured correctly.

The following example shows the configuration of node1 of a two-node Oracle Cluster Management Software cluster with public hostnames node1 and node2. It also shows a private interconnect configured between the same two nodes, identified by node1-pri and node2-pri, respectively:

$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost
138.1.137.46            node1.mydomain.com  node1
10.0.0.1                node1-pri.mydomain.com node1-pri
138.1.137.47            node2.mydomain.com  node2
10.0.0.2                node2-pri.mydomain.com node2-pri
$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:B0:D0:68:B4:3D
          inet addr:138.1.137.46  Bcast:138.1.139.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:23500323 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18955501 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:823841864 (785.6 Mb)  TX bytes:40738070 (38.8 Mb)
          Interrupt:26 Base address:0xe0c0 Memory:f89b7000-f89b7c40

eth1      Link encap:Ethernet  HWaddr 00:02:B3:28:80:8C
          inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:829 errors:0 dropped:0 overruns:0 frame:0
          TX packets:92 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:112411 (109.7 Kb)  TX bytes:6699 (6.5 Kb)
          Interrupt:23 Base address:0xccc0 Memory:f89b9000-f89b9c40

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:16121286 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16121286 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1457050223 (1389.5 Mb)  TX bytes:1457050223 (1389.5 Mb)

To change this Oracle Cluster Management Software cluster to use the private interconnect:

  1. On node1, replace all instances of node1 and node2 in the Destination_Path_of_Oracle_Home/oracm/admin/nmcfg.ora file with node1-pri and node2-pri, respectively (see the example after these steps).

  2. On node2, replace all instances of node1 and node2 in the Destination_Path_of_Oracle_Home/oracm/admin/nmcfg.ora file with node1-pri and node2-pri, respectively.

  3. Stop and restart the Oracle Cluster Management Software on both node1 and node2.
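
As an illustration of steps 1 and 2, assuming the public names node1 and node2 appear literally in the nmcfg.ora file, the edit can be made as follows on each node as the root user. This is only a sketch: the backup file name nmcfg.ora.public is arbitrary, and you should verify the edited file before restarting the Oracle Cluster Management Software:

# cd Destination_Path_of_Oracle_Home/oracm/admin
# cp nmcfg.ora nmcfg.ora.public
# sed -e 's/node1/node1-pri/g' -e 's/node2/node2-pri/g' nmcfg.ora.public > nmcfg.ora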

J.8 Deinstallation Steps

To deinstall the Oracle Cluster Management Software, complete the following steps:

  1. Stop the Oracle Cluster Management Software instance on each node of the cluster. See Section J.6, "Using Oracle Cluster Management Software".

  2. Log in to the installation node as the user that installed the Oracle Cluster Management Software.

  3. Insert Oracle Application Server Disk 1 or the DVD into the CD-ROM or DVD drive.

  4. Run the Oracle Universal Installer from the CD-ROM or DVD by entering the following commands.

    CD-ROM users:

    prompt> cd
    prompt> mount_point/ocms/runInstaller
    
    

    DVD users:

    prompt> cd
    prompt> mount_point/application_server/ocms/runInstaller
    
    
  5. At the Welcome screen, click on Deinstall Products...

  6. At the Inventory screen, expand all items and select Oracle Cluster Management Software 9.0.1.4.0.

  7. Click on Remove...

  8. At the Confirmation screen, confirm that you wish to deinstall by clicking on Yes.

  9. At the Inventory Screen, expand all items and select each node of the cluster under Cluster Nodes.

  10. Click on Remove...

  11. At the Confirmation screen, confirm that you wish to deinstall by clicking on Yes.

  12. At the Inventory Screen, expand all items and confirm that Oracle Cluster Management Software 9.0.1.4.0 and Cluster Nodes are not listed.

  13. Click on Close.

  14. At the Welcome screen, click on Cancel to exit the Oracle Universal Installer.

  15. Log in to the installation node as the root user.

  16. Set the ORACLE_HOME environment variable to the directory of the Oracle Home of the Oracle Cluster Management Software installation.

  17. Enter the following commands:

    # rm -rf $ORACLE_HOME/oracm
    # rm -rf /var/opt/ORCLcluster/oracm 
    
    
  18. Log in to each node of the cluster as the root user and perform steps 16 and 17.

If you do not want to deinstall Oracle Cluster Management Software, but you need to unmark a node as an Oracle Cluster Management Software cluster node to perform other Oracle product installations:

  1. Log in as the root user.

  2. Rename all the files in the /var/opt/ORCLcluster/oracm/lib directory. The new names themselves are not important. For example, enter the following commands:

    # cd /var/opt/ORCLcluster/oracm/lib
    # mv libcmdll.so libcmdll.so.orig
    # mv libwddapi.so libwddapi.so.orig