Oracle® Database Oracle Clusterware Installation Guide
11g Release 1 (11.1) for AIX

Part Number B28258-01

1 Summary List: Installing Oracle Clusterware

The following is a summary list of installation configuration requirements and commands. This summary is intended to provide an overview of the installation process.

In addition to providing a summary of the Oracle Clusterware installation process, this list also contains configuration information for preparing a system for Automatic Storage Management (ASM) and Oracle Real Application Clusters (Oracle RAC) installation.

1.1 Verify System Requirements

For more information, review the following section in Chapter 2:

"Checking the Hardware Requirements"

Enter the following commands to check available memory:

# /usr/sbin/lsattr -E -l sys0 -a realmem
# /usr/sbin/lsps -a

The minimum required RAM is 1 GB, and the minimum required swap space is 1 GB. Oracle recommends that you set swap space to twice the amount of RAM for systems with 2 GB of RAM or less, to an amount equal to RAM for systems with between 2 GB and 8 GB of RAM, and to 0.75 times the size of RAM for systems with more than 8 GB of RAM.
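
The following is a minimal ksh sketch that applies these sizing rules; it assumes that lsattr reports realmem in kilobytes, and it is offered only as a convenience:

# Sketch: print installed RAM in MB and the swap size suggested by the rules above
ram_kb=$(/usr/sbin/lsattr -E -l sys0 -a realmem | awk '{ print $2 }')
ram_mb=$((ram_kb / 1024))
if [ "$ram_mb" -le 2048 ]; then
    swap_mb=$((ram_mb * 2))
elif [ "$ram_mb" -le 8192 ]; then
    swap_mb=$ram_mb
else
    swap_mb=$((ram_mb * 3 / 4))
fi
echo "RAM: ${ram_mb} MB   Suggested swap: ${swap_mb} MB"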

Verify the space available for Oracle Clusterware files using one of the following commands, depending on where you intend to place Oracle Clusterware files:

GPFS:

df -k

Raw Logical Volumes in Concurrent VG (HACMP); in the following example, the variable lv_name is the name of the raw logical volume whose space you want to verify:

lslv lv_name

Raw hard disks; in the following example, the variable rhdisk# is the raw hard disk number that you want to verify, and the variable size_mb is the size in megabytes of the partition that you want to verify:

lsattr -El rhdisk# -a size_mb

If you use standard redundancy for Oracle Clusterware files, which is 2 Oracle Cluster Registry (OCR) partitions and 3 voting disk partitions, then you should have at least 1 GB of disk space available on separate physical disks reserved for Oracle Clusterware files. Each partition for the Oracle Clusterware files should be 256 MB in size.

The Oracle Clusterware home requires 650 MB of disk space.

To check the space available in the /tmp directory, enter the following command:

df -k /tmp

Ensure that you have at least 400 MB of disk space in /tmp. If this space is not available, then increase the partition size, or delete unnecessary files in /tmp.
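
To compare the reported free space against the 400 MB (409600 KB) requirement in one step, you can use a one-line check such as the following; it assumes the standard AIX df -k layout, in which the third column is free space in 1 KB blocks:

df -k /tmp | awk 'NR == 2 { if ($3 >= 409600) print "/tmp OK (" $3 " KB free)"; else print "/tmp too small (" $3 " KB free)" }'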

1.2 Check Network Requirements

For more information, review the following section in Chapter 2:

"Checking the Network Requirements"

You must configure the following addresses, either on a domain name server (DNS) or in the /etc/hosts file on each cluster node:

  • A public IP address for each node

  • A virtual IP (VIP) address for each node

  • A private IP address for each node, used for the private interconnect

After you obtain the IP addresses from a network administrator, you can assign the public and private IP addresses to network interfaces manually using ifconfig, or use an administration tool such as smit. Do not assign the VIP address to an interface; Oracle Clusterware configures the VIP addresses during installation.
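
For example, entries in /etc/hosts for a two-node cluster might look like the following; the node names and addresses shown here are placeholders, not values from this guide:

# Public addresses
192.0.2.11     node1       node1.example.com
192.0.2.12     node2       node2.example.com
# Virtual IP (VIP) addresses (registered, but not assigned to an interface)
192.0.2.111    node1-vip
192.0.2.112    node2-vip
# Private interconnect addresses
10.0.0.1       node1-priv
10.0.0.2       node2-priv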

Ping all IP addresses. The public and private IP addresses should respond to ping commands. The VIP addresses should not respond.
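
A quick way to run these checks is a loop such as the following, using the same placeholder host names as in the /etc/hosts example above:

for host in node1 node2 node1-priv node2-priv; do
    ping -c 1 $host          # each of these should respond
done
for host in node1-vip node2-vip; do
    ping -c 1 $host          # these should not respond before installation
done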

1.3 Check Operating System Packages

Refer to the tables listed in Chapter 2, "Checking the Software Requirements," for details.

1.4 Set Kernel Parameters

For more information, review the following section in Chapter 2:

"Tuning AIX System Environment"

1.5 Configure Groups and Users

For more information, review the following sections in Chapter 2:

"Overview of Groups and Users for Oracle Clusterware Installations"

For information about creating Oracle Database homes, review the following sections in Chapter 3:

"Creating Standard Configuration Operating System Groups and Users"

"Creating Custom Configuration Groups and Users for Job Roles"

"Environment Requirements for Oracle Database and Oracle ASM Owners"

For the purposes of this summary, assume that you have a single Oracle installation owner, and that the name of this software owner is oracle. You must create an Oracle installation owner group (oinstall) for Oracle Clusterware. If you intend to install Oracle Database, then you must also create an OSDBA group (dba). Use the id oracle command to confirm whether the user and groups are already configured correctly. If they are not, use smit to create the oracle user, or enter commands similar to the following:

# mkgroup oinstall
# mkgroup dba
# mkuser pgrp=oinstall groups=dba,oinstall oracle
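
If the user and groups are configured correctly, the output of id oracle is similar to the following; the numeric IDs shown here are only examples, but they should be identical on every cluster node:

# id oracle
uid=203(oracle) gid=203(oinstall) groups=204(dba)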

Ensure that the Oracle Clusterware software owner has the capabilities CAP_NUMA_ATTACH, CAP_BYPASS_RAC_VMM, and CAP_PROPAGATE.

To check existing capabilities, enter the following command as root; in this example, the Oracle Clusterware software owner is crs:

# lsuser -a capabilities crs
 

To add capabilities, enter a command similar to the following:

# chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE crs

Set the password on the oracle account:

# passwd oracle

Repeat this process for each cluster member node.

1.6 Create Directories

For more information, review the following section in Chapter 2:

"Requirements for Creating an Oracle Clusterware Home Directory"

For information about creating Oracle Database homes, review the following sections in Chapter 3:

"Understanding the Oracle Base Directory Path"

"Creating the Oracle Base Directory Path"

For installations with Oracle Clusterware only, Oracle recommends that you let Oracle Universal Installer (OUI) create the Oracle Clusterware and Oracle Central Inventory (oraInventory) directories for you. However, as root, you must create a path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines, so that OUI can select that directory during installation. For OUI to recognize the path as an Oracle software path, it must be in the form /u0[1-9]/app.

For example:

# mkdir -p /u01/app
# chown -R oracle:oinstall /u01/app
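
You may also want to set the directory mode and confirm the result; the 775 mode shown here is a common OFA-style choice rather than a requirement stated in this summary:

# chmod -R 775 /u01/app
# ls -ld /u01/app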

1.7 Configure Oracle Installation Owner Shell Limits

For information, review the following section in Chapter 2:

"Configuring Software Owner User Environments"

1.8 Configure SSH

For information, review the following section in Chapter 2:

"Configuring SSH on All Cluster Nodes"

OUI uses the ssh and scp commands during installation to run remote commands on and copy files to the other cluster nodes. If SSH is not available, then OUI falls back to rsh and rcp commands. If you want to use SSH for increased security during installation, then you must configure SSH so that the ssh and scp commands used during installation do not prompt for a password.

To configure SSH, complete the following tasks:

1.8.1 Check Existing SSH Configuration on the System

To determine if SSH is running, enter the following command:

$ ps -ef | grep sshd

If SSH is running, then the response to this command is one or more process ID numbers. In the home directory of the software owner that you want to use for the installation (crs, oracle), use the command ls -al to ensure that the .ssh directory is owned and writable only by the user.
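
For example, as the installation owner:

$ ls -ld $HOME/.ssh
$ chmod 700 $HOME/.ssh

The first command should show a directory owned by the installation owner with permissions similar to drwx------; the chmod is needed only if the permissions are more open than that.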

1.8.2 Configure SSH on Cluster Member Nodes

Complete the following tasks on each node. You must configure SSH separately for each Oracle software installation owner that you intend to use for installation.

  • Create .ssh, and create either RSA or DSA keys on each node

  • Add all keys to a common authorized_keys file (see the sketch following this list)
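
The following is a minimal sketch of these two tasks for a two-node cluster, assuming OpenSSH, RSA keys, and the placeholder node names node1 and node2; generate keys on each node, then build one authorized_keys file and copy it to every node:

# On each node, as the software owner:
$ mkdir -p $HOME/.ssh
$ chmod 700 $HOME/.ssh
$ /usr/bin/ssh-keygen -t rsa          # accept the default file name; a passphrase is optional

# On node1, gather the public keys from all nodes into one file:
$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
$ ssh node2 cat .ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

# Copy the completed file to the other node(s):
$ scp $HOME/.ssh/authorized_keys node2:.ssh/authorized_keys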

1.8.3 Enable SSH User Equivalency on Cluster Member Nodes

After you have copied the authorized_keys file that contains all keys to each node in the cluster, start SSH on the node, and load SSH keys into memory. Note that you must either use this terminal session for installation, or reload SSH keys into memory for the terminal session from which you run the installation.
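
One common way to do this, assuming OpenSSH is installed in /usr/bin, is to start an agent in the terminal session that you will install from, load your keys, and confirm that a remote command runs without a password prompt (node2 is a placeholder name):

$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
$ ssh node2 date          # should print the date with no password or passphrase prompt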

1.9 Create Storage

The following outlines the procedure for creating OCR and voting disk partitions, and creating ASM disks.

For information, review the following sections in Chapter 4:

"Configuring Storage for Oracle Clusterware Files on a Supported Shared File System"

"Configuring Storage for Oracle Clusterware Files on Raw Devices"

1.9.1 Create Disk Partitions for ASM Disks, OCR Disks, and Voting Disks

Create partitions as needed. For OCR and voting disks, create 280 MB partitions for new installations, or use existing partition sizes for upgrades. To create partitions:

  1. Identify or configure the required disk devices.

    The disk devices must be shared on all of the cluster nodes.

  2. As the root user, enter the following command on any node to identify the device names for the disk devices that you want to use:

    # lspv | grep -i none 
    

    This command displays information similar to the following for each disk device that is not configured in a volume group:

    hdisk17         0009005fb9c23648                    None  
    

    In this example, hdisk17 is the device name of the disk and 0009005fb9c23648 is the physical volume ID (PVID).

  3. If a disk device that you want to use does not have a PVID, then enter a command similar to the following to assign one to it:

    # chdev -l hdiskn -a pv=yes
    
  4. On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:

    # lspv | grep -i "0009005fb9c23648"
    

    The output from this command should be similar to the following:

    hdisk18         0009005fb9c23648                    None  
    

    In this example, the device name associated with the disk device (hdisk18) is different on this node.

  5. If the device names are the same on all nodes, then enter commands similar to the following on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices:

    • OCR device:

      # chown root:oinstall /dev/rhdiskn
      # chmod 640 /dev/rhdiskn
      
    • Other devices:

      # chown oracle:dba /dev/rhdiskn
      # chmod 660 /dev/rhdiskn
      
  6. If the device name associated with the PVID for a disk that you want to use is different on any node, then you must create a new device file for the disk on each of the nodes using a common unused name.

    For the new device files, choose an alternative device file name that identifies the purpose of the disk device. Chapter 4 suggests alternative device file names for each file. For database files, replace dbname in the alternative device file name with the name that you choose for the database.

    Note:

    Alternatively, you could choose a name that contains a number that will never be used on any of the nodes, for example hdisk99.

    To create a new common device file for a disk device on all nodes, perform these steps on each node:

    1. Enter the following command to determine the device major and minor numbers that identify the disk device, where n is the disk number for the disk device on this node:

      # ls -alF /dev/*hdiskn
      

      The output from this command is similar to the following:

      brw------- 1 root system    24,8192 Dec 05 2001  /dev/hdiskn
      crw------- 1 root system    24,8192 Dec 05 2001  /dev/rhdiskn
      

      In this example, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.

    2. Enter a command similar to the following to create the new device file, specifying the new device file name and the device major and minor numbers that you identified in the previous step:

      Note:

      In the following example, you must specify the character c to create a character raw device file.
      # mknod /dev/ora_ocr_raw_256m c 24 8192
      
    3. Enter commands similar to the following to change the owner, group, and permissions on the character raw device file for the disk:

      • OCR:

        # chown root:oinstall /dev/ora_ocr_raw_256m
        # chmod 640 /dev/ora_ocr_raw_256m
        
      • Oracle Clusterware voting disk:

        # chown oracle:dba /dev/ora_vote_raw_256m
        # chmod 660 /dev/ora_vote_raw_256m
        
    4. Enter a command similar to the following to verify that you have created the new device file successfully:

      # ls -alF /dev | grep "24,8192"
      

      The output should be similar to the following:

      brw------- 1 root   system   24,8192 Dec 05 2001  /dev/hdiskn
      crw-r----- 1 root   oinstall 24,8192 Dec 05 2001  /dev/ora_ocr_raw_256m
      crw------- 1 root   system   24,8192 Dec 05 2001  /dev/rhdiskn
      
  7. To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute listed in the following table to the value shown, depending on the disk type:

    Disk Type                                        Attribute         Value
    SSA, FAStT, or non-MPIO-capable disks            reserve_lock      no
    ESS, EMC, HDS, CLARiiON, or MPIO-capable disks   reserve_policy    no_reserve

    To determine whether the attribute has the correct value, enter a command similar to the following on all cluster nodes for each disk device that you want to use:

    # lsattr -E -l hdiskn
    

    If the required attribute is not set to the correct value on any node, then enter a command similar to one of the following on that node:

    • SSA and FAStT devices

      # chdev -l hdiskn  -a reserve_lock=no
      
    • ESS, EMC, HDS, CLARiiON, and MPIO-capable devices

      # chdev -l hdiskn  -a reserve_policy=no_reserve
      
  8. Enter commands similar to the following on any node to clear the PVID from each disk device that you want to use:

    # chdev -l hdiskn -a pv=clear
    

    When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:

    /dev/rhdisk10
    

1.10 Verify Oracle Clusterware Requirements with CVU

For information, review the following section in Chapter 6:

"Verifying Oracle Clusterware Requirements with CVU"

Using the following command syntax, log in as the installation owner user (oracle or crs), and start Cluster Verification Utility (CVU) to check system requirements for installing Oracle Clusterware. In the following syntax example, replace the variable mountpoint with the installation media mountpoint, and replace the variable node_list with the names of the nodes in your cluster, separated by commas:

/mountpoint/runcluvfy.sh stage -pre crsinst -n node_list
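
For example, if the installation media is mounted at /mnt/clusterware and the cluster nodes are named node1 and node2 (all placeholder values), the command would be similar to the following:

$ /mnt/clusterware/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose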

1.11 Install Oracle Clusterware Software

For information, review the following sections in Chapter 6:

"Preparing to Install Oracle Clusterware with OUI"

"Installing Oracle Clusterware with OUI"

  1. Ensure SSH keys are loaded into memory for the terminal session from which you run the Oracle Universal Installer (OUI).

  2. Navigate to the installation media, and start OUI. For example:

    $ cd /Disk1
    $ ./runInstaller
    
  3. Select Install Oracle Clusterware, and enter the configuration information as prompted.

1.12 Prepare the System for Oracle RAC and ASM

For information, review the following section in Chapter 5:

"Configuring Disks for Automatic Storage Management"

If you intend to install Oracle RAC, as well as Oracle Clusterware, then Oracle recommends that you use ASM for database file management.

1.12.1 Mark ASM Disk Partitions

For OUI to recognize a disk partition as a candidate ASM disk, you must mark it. Log in as root and mark each disk partition that you created for ASM by using the following command syntax, where ASM_DISK_NAME is the name that you want to assign to the ASM disk, and device_name is the name of the disk device that you want to mark:

/etc/init.d/oracleasm createdisk ASM_DISK_NAME device_name
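
For example, to mark the hypothetical candidate device /dev/rhdisk12 as an ASM disk named DISK1:

# /etc/init.d/oracleasm createdisk DISK1 /dev/rhdisk12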