Oracle® Database Oracle Clusterware Installation Guide
11g Release 1 (11.1) for AIX
Part Number B28258-01
4 Configuring Oracle Clusterware Storage

This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer. It includes information about the following tasks:

4.1 Reviewing Storage Options for Oracle Clusterware Files

This section describes supported options for storing Oracle Clusterware files. It includes the following sections:

4.1.1 Overview of Storage Options

Use the information in this overview to help you select your storage option.

See Also:

The Oracle Certify site for a list of supported vendors for Network Attached Storage options:
http://www.oracle.com/technology/support/metalink/

Refer also to the Certify site on OracleMetalink for the most current information about certified storage options:

https://metalink.oracle.com/

4.1.1.1 Overview of Oracle Clusterware Storage Options

There are two ways of storing Oracle Clusterware files:

  • A supported shared file system: Supported file systems include:

    • General Parallel File System (GPFS): A cluster file system for AIX that provides concurrent file access

    • Network File System (NFS): A file-level protocol that enables access and sharing of files

      See Also:

      The Certify page on OracleMetalink for supported Network Attached Storage (NAS) devices. Note that Direct NFS is not supported for Oracle Clusterware files.
  • Raw partitions: Raw partitions are raw disks that are accessed either through a logical volume manager (LVM), or through non-LVM file systems.

4.1.1.2 General Storage Considerations

For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the flash recovery area). You do not have to use the same storage option for each file type.

Oracle Clusterware files include voting disks, used to monitor cluster node status, and the Oracle Cluster Registry (OCR), which contains configuration information about the cluster. The voting disks and the OCR are shared files in a cluster file system environment. If you do not use a cluster file system, then you must place these files on shared raw devices. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation.

For voting disk file placement, ensure that each voting disk is configured so that it does not share any hardware device or disk, or other single point of failure, with another voting disk. An absolute majority of the configured voting disks (more than half) must be available and responsive at all times for Oracle Clusterware to operate. For example, with three voting disks configured, the cluster tolerates the failure of one voting disk, but Oracle Clusterware cannot operate if two of the three become unavailable.

For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use GPFS or shared raw disks if you do not want the failover processing to include dismounting and remounting disks.

The following table shows the storage options supported for storing Oracle Clusterware files. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).

Note:

For the most up-to-date information about supported storage options for Oracle Clusterware installations, refer to the Certify pages on the OracleMetaLink Web site:
https://metalink.oracle.com

Table 4-1 Supported Storage Options for Oracle Clusterware Files

Storage Option                            OCR and Voting Disks    Oracle Software

Automatic Storage Management              No                      No

General Parallel File System (GPFS)       Yes                     Yes
  Note: Oracle does not recommend the use of GPFS for voting disks if HACMP is used.

Local storage                             No                      Yes

NFS file system                           Yes                     Yes
  Note: Requires a certified NAS device. Oracle does not recommend the use of NFS for voting disks if HACMP is used.

Raw Logical Volumes Managed by HACMP      Yes                     No
  Note: If HACMP is used, then voting disks can only be placed on raw devices or logical volumes.


Use the following guidelines when choosing the storage options that you want to use for each file type:

  • You can choose any combination of the supported storage options for each file type, provided that you satisfy all requirements listed for the chosen storage options.

  • You cannot use Automatic Storage Management to store Oracle Clusterware files, because these files must be accessible before any Automatic Storage Management instance starts.

  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.

4.1.1.3 Quorum Disk Location Restriction with Existing 9.2 Clusterware Installations

When upgrading your Oracle9i release 9.2 Oracle RAC environment to Oracle Database 11g Release 1 (11.1), you are prompted to specify one or more voting disks during the Oracle Clusterware installation. You must specify a new location for the voting disk in Oracle Database 11g Release 1 (11.1). You cannot reuse the old Oracle9i release 9.2 quorum disk for this purpose.

4.1.1.4 After You Have Selected Disk Storage Options

When you have determined your disk storage options, you must perform the following tasks in the order listed:

1. Check for available shared storage with CVU.

   Refer to Checking for Available Shared Storage with CVU.

2. Configure shared storage for Oracle Clusterware files.

3. Configure storage for Oracle Database files and recovery files.

4.1.2 Checking for Available Shared Storage with CVU

To check for all shared file systems available across all nodes on the cluster with GPFS, use the following command:

/mountpoint/runcluvfy.sh comp ssa -n node_list

If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:

/mountpoint/runcluvfy.sh comp ssa -n node_list -s storageID_list

In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the list of nodes you want to check, separated by commas, and the variable storageID_list is the list of storage device IDs for the storage devices managed by the file system type that you want to check.

For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/rhdisk8 and /dev/rhdisk9, and your mountpoint is /dev/dvdrom/, then enter the following command:

/dev/dvdrom/runcluvfy.sh comp ssa -n node1,node2 -s /dev/rhdisk8,/dev/rhdisk9

If you do not specify specific storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
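
For example, to check all shared storage devices available to node1 and node2 without restricting the check to particular devices, you might enter a command similar to the following (the node names are illustrative):

/dev/dvdrom/runcluvfy.sh comp ssa -n node1,node2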

4.2 Configuring Storage for Oracle Clusterware Files on a Supported Shared File System

Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:

Note:

Database Configuration Assistant uses the OCR for storing the configurations for the cluster databases that it creates. The OCR is a shared file in a cluster file system environment. If you do not use a cluster file system, then you must make this file a shared raw device. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation.

4.2.1 Requirements for Using a File System for Oracle Clusterware Files

To use a file system for Oracle Clusterware files, the file system must comply with the following requirements.

  • To use a cluster file system on AIX, you must use GPFS.

  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then one of the following must be true:

    • The disks used for the file system are on a highly available storage device (for example, a RAID device that implements file redundancy).

    • At least two file systems are mounted, and you use the features of Oracle Database 11g Release 1 (11.1) to provide redundancy for the OCR.

  • If you intend to use a shared file system to store database files, then use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

  • The Oracle Clusterware owner (typically, oracle) must have write permissions to create the files in the path that you specify.

Note:

If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.

If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting disk partitions, then you can continue to use those partition sizes.

Use Table 4-2 to determine the partition size for shared file systems.

Table 4-2 Shared File System Volume Size Requirements

File Types Stored: Oracle Clusterware files (OCR and voting disks) with external redundancy
Number of Volumes: 1
Volume Size: At least 280 MB for each volume

File Types Stored: Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software
Number of Volumes: 1
Volume Size: At least 280 MB for each volume

File Types Stored: Redundant Oracle Clusterware files with redundancy provided by Oracle software (mirrored OCR and two additional voting disks)
Number of Volumes: 1
Volume Size: At least 280 MB of free space for each OCR location if the OCR is configured on a file system, or at least 280 MB available for each OCR location if the OCR is configured on raw devices, and at least 280 MB for each voting disk location, with a minimum of three disks.


In Table 4-2, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on the shared file system with normal redundancy, you should have at least 1.3 GB of storage available over a minimum of three volumes (two separate volume locations for the OCR and OCR mirror, and one voting disk on each volume).

4.2.2 Checking NFS Mount Buffer Size Parameters for Oracle Clusterware

If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768. Update the /etc/fstab file on each node with an entry similar to the following:

nfs_server:/vol/DATA/oradata  /home/oracle/netapp     nfs\   
cio,rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600

Note:

Refer to your storage vendor documentation for additional information about mount options.
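
After the file system is mounted, you can confirm that the intended options are in effect on each node. The following check is a sketch only; the grep pattern assumes the mount point used in the preceding example:

# mount | grep netapp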

4.2.3 Creating Required Directories for Oracle Clusterware Files on Shared File Systems

Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.

Note:

For GPFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.

To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:

  1. If necessary, configure the shared file systems that you want to use and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Make sure that the file systems are configured to mount automatically when a node restarts.
  2. Use the df -k command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems that you want to use:

    File Type: Oracle Clusterware files
    File System Requirements: Choose a file system with at least 560 MB of free disk space (one OCR and one voting disk, with external redundancy).

    If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation has permissions to create directories on the disks where you plan to install Oracle Clusterware, then OUI creates the Oracle Clusterware file directory.

    If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:

    • Oracle Clusterware file directory:

      # mkdir /mount_point/oracrs
      # chown oracle:oinstall /mount_point/oracrs
      # chmod 775 /mount_point/oracrs
      

Note:

After installation, directories in the installation path for the Oracle Cluster Registry (OCR) files should be owned by root, and not writable by any account other than root.

When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed GPFS configuration.

4.3 Configuring Storage for Oracle Clusterware Files on Raw Devices

The following subsections describe how to configure Oracle Clusterware files on raw partitions.

4.3.1 Identifying Required Raw Partitions for Clusterware Files

Table 4-3 lists the number and size of the raw partitions that you must configure for Oracle Clusterware files.

Note:

Because each file requires exclusive use of a complete disk device, Oracle recommends that, if possible, you use disk devices with sizes that closely match the size requirements of the files that they will store. You cannot use the disks that you choose for these files for any other purpose.

Table 4-3 Raw Partitions Required for Oracle Clusterware Files on AIX

Oracle Cluster Registry (OCR)
  Number of partitions: 2 (or 1, if you have external redundancy support for this file)
  Size for each partition: 256 MB
  Note: You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Cluster Registry (OCR). You should create two partitions: one for the OCR, and one for a mirrored OCR. If you are upgrading from Oracle9i release 2, then you can continue to use the raw device that you used for the SRVM configuration repository instead of creating this new raw device.

Oracle Clusterware voting disks
  Number of partitions: 3 (or 1, if you have external redundancy support for this file)
  Size for each partition: 256 MB
  Note: You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Clusterware voting disk. You should create three partitions: one for the voting disk, and two for additional voting disks.


4.3.2 Configuring Raw Disk Devices for Oracle Clusterware Without HACMP or GPFS

If you are installing Oracle RAC on an AIX cluster without HACMP or GPFS, then you must use shared raw disk devices for the Oracle Clusterware files. You can also use shared raw disk devices for database file storage. However, Oracle recommends that you use Automatic Storage Management to store database files in this situation.

This section describes how to configure the shared raw disk devices for Oracle Clusterware files (Oracle Cluster Registry and Oracle Clusterware voting disk). It also describes how to configure shared raw devices for Oracle ASM and for Database files, if you intend to install Oracle Database, and you need to create new disk devices.

Note:

In the following procedure, you are directed to set physical volume IDs (PVIDs) to confirm that all devices appear under the same name on all nodes. Oracle recommends that you complete the entire procedure, even if you are certain that you do not have PVIDs configured on your system, to prevent the possibility of configuration issues.

To configure shared raw disk devices for Oracle Clusterware files:

  1. Identify or configure the required disk devices.

    The disk devices must be shared on all of the cluster nodes.

  2. As the root user, enter the following command on any node to identify the device names for the disk devices that you want to use:

    # /usr/sbin/lspv | grep -i none 
    

    This command displays information similar to the following for each disk device that is not configured in a volume group:

    hdisk17         0009005fb9c23648                    None  
    

    In this example, hdisk17 is the device name of the disk and 0009005fb9c23648 is the physical volume ID (PVID).

  3. If a disk device that you want to use does not have a PVID, then enter a command similar to the following to assign one to it:

    # chdev -l hdiskn -a pv=yes
    
  4. On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:

    # /usr/sbin/lspv | grep -i "0009005fb9c23648"
    

    The output from this command should be similar to the following:

    hdisk18         0009005fb9c23648                    None  
    

    In this example, the device name associated with the disk device (hdisk18) is different on this node.

  5. If the device names are the same on all nodes, then enter commands similar to the following on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices:

    • OCR device:

      # chown root:oinstall /dev/rhdiskn
      # chmod 640 /dev/rhdiskn
      
    • Other devices:

      # chown oracle:dba /dev/rhdiskn
      # chmod 660 /dev/rhdiskn
      
  6. If the device name associated with the PVID for a disk that you want to use is different on any node, then you must create a new device file for the disk on each of the nodes using a common unused name.

    For the new device files, choose an alternative device file name that identifies the purpose of the disk device. The previous table suggests alternative device file names for each file. For database files, replace dbname in the alternative device file name with the name that you chose for the database in step 1.

    Note:

    Alternatively, you could choose a name that contains a number that will never be used on any of the nodes, for example hdisk99.

    To create a new common device file for a disk device on all nodes, perform these steps on each node:

    1. Enter the following command to determine the device major and minor numbers that identify the disk device, where n is the disk number for the disk device on this node:

      # ls -alF /dev/*hdiskn
      

      The output from this command is similar to the following:

      brw------- 1 root system    24,8192 Dec 05 2001  /dev/hdiskn
      crw------- 1 root system    24,8192 Dec 05 2001  /dev/rhdiskn
      

      In this example, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.

    2. Enter a command similar to the following to create the new device file, specifying the new device file name and the device major and minor numbers that you identified in the previous step:

      Note:

      In the following example, you must specify the c flag to create a character raw device file.
      # mknod /dev/ora_ocr_raw_256m c 24 8192
      
    3. Enter commands similar to the following to change the owner, group, and permissions on the character raw device file for the disk:

      • OCR:

        # chown root:oinstall /dev/ora_ocr_raw_256m
        # chmod 640 /dev/ora_ocr_raw_256m
        
      • Oracle Clusterware voting disk:

        # chown oracle:dba /dev/ora_vote_raw_256m
        # chmod 660 /dev/ora_vote_raw_256m
        
    4. Enter a command similar to the following to verify that you have created the new device file successfully:

      # ls -alF /dev | grep "24,8192"
      

      The output should be similar to the following:

      brw------- 1 root   system   24,8192 Dec 05 2001  /dev/hdiskn
      crw-r----- 1 root   oinstall 24,8192 Dec 05 2001  /dev/ora_ocr_raw_256m
      crw------- 1 root   system   24,8192 Dec 05 2001  /dev/rhdiskn
      
  7. To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute listed in the following table to the value shown, depending on the disk type:

    Disk Type                                         Attribute         Value
    SSA, FAStT, or non-MPIO-capable disks             reserve_lock      no
    ESS, EMC, HDS, CLARiiON, or MPIO-capable disks    reserve_policy    no_reserve

    To determine whether the attribute has the correct value, enter a command similar to the following on all cluster nodes for each disk device that you want to use:

    # /usr/sbin/lsattr -E -l hdiskn
    

    If the required attribute is not set to the correct value on any node, then enter a command similar to one of the following on that node:

    • SSA and FAStT devices

      # /usr/sbin/chdev -l hdiskn  -a reserve_lock=no
      
    • ESS, EMC, HDS, CLARiiON, and MPIO-capable devices

      # /usr/sbin/chdev -l hdiskn  -a reserve_policy=no_reserve
      
  8. Enter commands similar to the following on any node to clear the PVID from each disk device that you want to use:

    # /usr/sbin/chdev -l hdiskn -a pv=clear
    

    When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:

    /dev/rhdisk10
    

4.3.3 Configuring HACMP Multinode Disk Heartbeat (MNDHB) for Oracle Clusterware

This section contains the following topics:

4.3.3.1 Overview of Requirements for Using HACMP with Oracle Clusterware

You must define one Multi-node Disk Heartbeat (MNDHB) network for each Oracle Clusterware voting disk. Each MNDHB and voting disk pair must be located on a single hard disk, separate from the other pairs. You must also configure MNDHB so that the node is halted if access is lost to a quorum of the MNDHB networks in the enhanced concurrent volume group.

To reduce the likelihood of a cluster partition, IBM recommends that HACMP be deployed with multiple IP networks and at least one non-IP network. The non-IP networks can be implemented using RS232 or disk heartbeating. For systems using Oracle RAC and HACMP enhanced concurrent resources (enhanced concurrent logical volumes) for database storage, you must configure MNDHB networks.

Install and configure HACMP, and ensure that it is running, before you install Oracle Clusterware. For an Oracle RAC configuration, do not use HACMP for IP failover on the Oracle RAC network interfaces (public, VIP, or private). These network interfaces should not be configured to use HACMP IP failover, because Oracle Clusterware manages VIP failover for Oracle RAC. The Oracle RAC network interfaces are bound to individual nodes and Oracle RAC instances. Problems can occur with Oracle Clusterware if HACMP reconfigures IP addresses over different interfaces, or fails over addresses across nodes. You can use HACMP for failover of IP addresses on Oracle RAC nodes only if Oracle RAC does not use those addresses.

4.3.3.2 Deploying HACMP and MNDHB for Oracle Clusterware

Complete the following tasks, replacing each term in italics with the appropriate value for your system, or carrying out the action described and entering the appropriate response:

  1. Start HACMP.

  2. Enter the following command to ensure that the HACMP clcomdES daemon is running:

    # lssrc -s clcomdES
    

    If the daemon is not running, then start it using the following command:

    # startsrc -s clcomdES
    
  3. Ensure that your versions of HACMP and AIX meet the system requirements listed in Chapter 2, "Checking the Software Requirements".

  4. Create HACMP cluster and add the Oracle Clusterware nodes. For example:

    # smitty cm_add_change_show_an_hacmp_cluster.dialog
    * Cluster Name [mycluster] 
    
  5. Create an HACMP cluster node for each Oracle Clusterware node. For example:

    # smitty cm_add_a_node_to_the_hacmp_cluster_dialog 
    * Node Name [mycluster_node1]
    Communication Path to Node [] 
    
  6. Create HACMP ethernet heartbeat networks. The HACMP configuration requires network definitions. Select NO for the IP address takeover for these networks, since they are used by Oracle Clusterware.

    Create at least two network definitions: one for the Oracle public interface and a second one for the Oracle private (cluster interconnect) network. Additional ethernet heartbeat networks can be added if desired.

    For example:

    # smitty cm_add_a_network_to_the_hacmp_cluster_select 
    - select ether network 
    * Network Name [my_network_name] 
    * Network Type ether 
    * Netmask [my.network.netmask.here] 
    * Enable IP Address Takeover via IP Aliases [No] 
    IP Address Offset for Heart beating over IP Aliases [] 
    
  7. For each of the networks added in the previous step, define all of the IP names for each Oracle Clusterware node associated with that network, including the public, private and VIP names for each Oracle Clusterware node. For example:

    # smitty cm_add_communication_interfaces_devices.select 
    - select: Add Pre-defined Communication Interfaces and Devices / Communication Interfaces / desired network 
    * IP Label/Address [node_ip_address] 
    * Network Type ether 
    * Network Name some_network_name 
    * Node Name [my_node_name] 
    Network Interface [] 
    
  8. Create an HACMP resource group for the enhanced concurrent volume group resource with the following options:

    # smitty config_resource_group.dialog.custom 
    * Resource Group Name [my_resource_group_name] 
    * Participating Nodes (Default Node Priority) [mynode1,mynode2,mynode3] 
    Startup Policy Online On All Available Nodes 
    Fallover Policy Bring Offline (On Error Node Only) 
    Fallback Policy Never Fallback 
    
  9. Create an AIX enhanced concurrent volume group (Big VG or Scalable VG) using either the smitty mkvg command or the command line. The VG must contain at least one hard disk for each voting disk. You must configure at least three voting disks.

    In the following example, where you see default, accept the default response:

    # smitty _mksvg 
    VOLUME GROUP name [my_vg_name] PP SIZE in MB 
    * PHYSICAL VOLUME names [mydisk1,mydisk2,mydisk3] 
    Force the creation of a volume group? no 
    Activate volume group AUTOMATICALLY at system restart? no 
    Volume Group MAJOR NUMBER [] 
    Create VG Concurrent Capable? enhanced concurrent 
    Max PPs per VG in kilobytes default
    Max Logical Volumes default
    
  10. Under "Change/Show Resources for a Resource Group (standard)", add the concurrent volume group to the resource group added in the preceding steps.

    For example:

    # smitty cm_change_show_resources_std_resource_group_menu_dmn.select 
    - select_resource_group_from_step_8
    Resource Group Name shared_storage 
    Participating Nodes (Default Node Priority) mynode1,mynode2,mynode3
    Startup Policy Online On All Available Nodes 
    Fallover Policy Bring Offline (On Error Node Only) 
    Fallback Policy Never Fallback 
    Concurrent Volume Groups [enter_VG_from_step_9]
    Use forced varyon of volume groups, if necessary false 
    Application Servers [] 
    
  11. Using the following command, ensure that one MNDHB network is defined for each Oracle Clusterware voting disk. Each MNDHB and voting disk pair must be collocated on a single hard disk, separate from the other pairs. The MNDHB networks and voting disks exist on shared logical volumes in an enhanced concurrent volume group managed by HACMP as an enhanced concurrent resource. For each of the hard disks in the VG created in step 9 on which you want to place a voting disk logical volume (LV), create an MNDHB LV.

    # smitty cl_add_mndhb_lv 
    - select_resource_group_defined_in_step_8
    * Physical Volume name enter F4, then select a hard disk
    Logical Volume Name [] 
    Logical Volume Label [] 
    Volume Group name ccvg 
    Resource Group Name shared_storage 
    Network Name [n]
    

    Note:

    When you define the LVs for the Oracle Clusterware voting disks, define them on the same disks that you use in this step for the MNDHB LVs: one voting disk LV for each of those disks.
  12. Configure MNDHB so that the node is halted if access is lost to a quorum of the MNDHB networks in the enhanced concurrent volume group. For example:

    # smitty cl_set_mndhb_response 
    - select_the_VG_created_in_step_9 
    On loss of access Halt the node 
    Optional notification method [] 
    Volume Group ccvg 
    
  13. Verify and Synchronize HACMP configuration. For example:

    # smitty cm_initialization_and_standard_config_menu_dmn 
    - select "Verify and Synchronize HACMP Configuration" 
    

    Enter Yes if prompted: "Would you like to import shared VG: ccvg, in resource group my_resource_group onto node: mynode to node: racha702 [Yes / No]:"

  14. Add the HACMP cluster node IP names to the file /usr/es/sbin/cluster/etc/rhosts.

4.3.3.3 Upgrading an Existing Oracle Clusterware and HACMP Installation

Complete the following procedure:

  1. Back up all databases, and back up the Oracle Cluster Registry (OCR).

  2. On all nodes, shut down all Oracle RAC databases, all node applications, and Oracle Clusterware.

  3. Enter the following command to disable Oracle Clusterware from starting when nodes are restarted:

    # crsctl disable crs
    
  4. Shut down HACMP on all nodes.

  5. Install HACMP APAR IZ01809, following the directions in the README included with that APAR.

  6. Determine whether the existing voting disk LVs are already on separate hard disks, and whether each of these disks has sufficient space (at least 256 MB) for the MNDHB LVs. If so, then create an MNDHB LV on each of those hard disks. If not, then create new MNDHB LVs and new voting disk LVs, located on separate hard disks, using the following command and responding to the prompts shown in italics with the appropriate information for your system:

    # smitty cl_add_mndhb_lv 
    - Select_resource_group
    * Physical Volume name Enter F4, then select disk for the MNDHB and Voting Disk pair
    Logical Volume Name [] 
    Logical Volume Label [] 
    Volume Group name ccvg 
    Resource Group Name shared_storage 
    Network Name [net_diskhbmulti_01] 
    
  7. Verify and Synchronize HACMP configuration.

  8. Start HACMP on all nodes.

  9. If you added new LVs for voting disks in step 6, then replace each of the existing voting disks with the new ones.

  10. Enter the following command to re-enable Oracle Clusterware:

    # crsctl enable crs
    
  11. Start Oracle Clusterware on all nodes, and verify that all resources start correctly.
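
    For example, a quick way to confirm that the stack and its resources are running after the restart is to run commands similar to the following as root (CRS_home is a placeholder for your Oracle Clusterware home directory):

    # CRS_home/bin/crsctl check crs
    # CRS_home/bin/crs_stat -t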

4.3.4 Configuring Raw Logical Volumes for Oracle Clusterware

Note:

To use raw logical volumes for Oracle Clusterware, HACMP must be installed and configured on all cluster nodes.

This section describes how to configure raw logical volumes for Oracle Clusterware and database file storage. The procedures in this section describe how to create a new volume group that contains the logical volumes required for both types of files.

Before you continue, review the following guidelines which contain important information about using volume groups with this release of Oracle RAC:

  • You must use concurrent-capable volume groups for Oracle Clusterware.

  • The Oracle Clusterware files require less than 560 MB of disk space, with external redundancy. To make efficient use of the disk space in a volume group, Oracle recommends that you use the same volume group for the logical volumes for both the Oracle Clusterware files and the database files.

  • If you are upgrading an existing Oracle9i release 2 Oracle RAC installation that uses raw logical volumes, then you can use the existing SRVM configuration repository logical volume for the OCR and create a new logical volume in the same volume group for the Oracle Clusterware voting disk. However, you must remove this volume group from the HACMP concurrent resource group that activates it before you install Oracle Clusterware.

    See Also:

    The HACMP documentation for information about removing a volume group from a concurrent resource group.

    Note:

    If you are upgrading a database, then you must also create a new logical volume for the SYSAUX tablespace. Refer to the "Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group" section for more information about the requirements for the Oracle Clusterware voting disk and SYSAUX logical volumes.
  • You must use a HACMP concurrent resource group to activate new or existing volume groups that contain only database files (not Oracle Clusterware files).

    See Also:

    The HACMP documentation for information about adding a volume group to a new or existing concurrent resource group.
  • All volume groups that you intend to use for Oracle Clusterware must be activated in concurrent mode before you start the installation.

  • The procedures in this section describe how to create basic volume groups and volumes. If you want to configure more complex volumes (using mirroring, for example), then use this section in conjunction with the HACMP documentation.

4.3.5 Creating a Volume Group for Oracle Clusterware

To create a volume group for the Oracle Clusterware files:

  1. If necessary, install the shared disks that you intend to use.

  2. To ensure that the disks are available, enter the following command on every node:

    # /usr/sbin/lsdev -Cc disk
    

    The output from this command is similar to the following:

    hdisk0 Available 1A-09-00-8,0  16 Bit LVD SCSI Disk Drive
    hdisk1 Available 1A-09-00-9,0  16 Bit LVD SCSI Disk Drive
    hdisk2 Available 17-08-L       SSA Logical Disk Drive
    
  3. If a disk is not listed as available on any node, then enter the following command to configure the new disks:

    # /usr/sbin/cfgmgr
    
  4. Enter the following command on any node to identify the device names and any associated volume group for each disk:

    # /usr/sbin/lspv
    

    The output from this command is similar to the following:

    hdisk0     0000078752249812   rootvg
    hdisk1     none               none
    hdisk4     00034b6fd4ac1d71   ccvg1
    

    For each disk, this command shows:

    • The disk device name

    • Either the 16-character physical volume identifier (PVID), if the disk has one, or none

    • Either the volume group to which the disk belongs, or none

    The disks that you want to use may have a PVID, but they must not belong to existing volume groups.

  5. If a disk that you want to use for the volume group does not have a PVID, then enter a command similar to the following to assign one to it:

    # /usr/sbin/chdev -l hdiskn -a pv=yes
    
  6. To identify used device major numbers, enter the following command on each node of the cluster:

    # ls -la /dev | more
    

    This command displays information about all configured devices, similar to the following:

    crw-rw----   1 root     system    45,  0 Jul 19 11:56 vg1
    

    In this example, 45 is the major number of the vg1 volume group device.

  7. Identify an appropriate major number that is unused on all nodes in the cluster.

  8. To create a volume group, enter a command similar to the following, or use SMIT (smit mkvg):

    # /usr/sbin/mkvg -y VGname -B -s PPsize -V majornum -n \
    -C PhysicalVolumes
    
  9. The following list describes the options and variables used in this example. Refer to the mkvg man page for more information about these options.

    • -y VGname (SMIT field: VOLUME GROUP name)

      Sample value: oracle_vg1. Specify the name for the volume group. The name that you specify could be a generic name, as shown, or for a database volume group, it could specify the name of the database that you intend to create.

    • -B (SMIT field: Create a big VG format Volume Group)

      Specify this option to create a big VG format volume group.

      Note: If you are using SMIT, then choose yes for this field.

    • -s PPsize (SMIT field: Physical partition SIZE in megabytes)

      Sample value: 32. Specify the size of the physical partitions for the database. The sample value shown enables you to include a disk up to 32 GB in size (32 MB * 1016).

    • -V Majornum (SMIT field: Volume Group MAJOR NUMBER)

      Sample value: 46. Specify the device major number for the volume group that you identified in Step 7.

    • -n (SMIT field: Activate volume group AUTOMATICALLY at system restart)

      Specify this option to prevent the volume group from being activated at system restart.

      Note: If you are using SMIT, then choose no for this field.

    • -C (SMIT field: Create VG Concurrent Capable)

      Specify this option to create a concurrent capable volume group.

      Note: If you are using SMIT, then choose yes for this field.

    • PhysicalVolumes (SMIT field: PHYSICAL VOLUME names)

      Sample value: hdisk3 hdisk4. Specify the device names of the disks that you want to add to the volume group.

  10. Enter a command similar to the following to vary on the volume group that you created:

    # /usr/sbin/varyonvg VGname
    
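For example, the sample values shown in step 9 combine into commands similar to the following; the volume group name, physical partition size, major number, and disk names are illustrative, so substitute the values appropriate for your system:

# /usr/sbin/mkvg -y oracle_vg1 -B -s 32 -V 46 -n -C hdisk3 hdisk4
# /usr/sbin/varyonvg oracle_vg1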

4.3.6 Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group

To create the required raw logical volumes in the new Oracle Clusterware volume group:

  1. Identify the logical volumes that you must create.

  2. Enter a command similar to the following to create each required raw logical volume. If you prefer, you can also use the command smit mklv to create raw logical volumes.

    The following example shows the command used to create a logical volume for the SYSAUX tablespace in the ocr volume group, with a physical partition size of 256 MB (7 * 256 MB = 1792 MB):

    # /usr/sbin/mklv -y test_sysaux_raw_1792m -T O -w n -s n -r n ocr 7
    
  3. Change the owner, group, and permissions on the character device files associated with the logical volumes that you created, as follows:

    Note:

    The device file associated with the Oracle Cluster Registry must be owned by root. All other device files must be owned by the Oracle software owner user (oracle).
    # chown oracle:dba /dev/rora_vote_raw_256m
    # chmod 660 /dev/rora_vote_raw_256m
    # chown root:oinstall /dev/rora_ocr_raw_256m
    # chmod 640 /dev/rora_ocr_raw_256m
    
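For example, the voting disk and OCR logical volumes whose device files appear in step 3 could be created in step 2 with commands similar to the following. This is a sketch only: it assumes a volume group named ccvg with a 256 MB physical partition size, so each 256 MB logical volume uses a single partition.

# /usr/sbin/mklv -y ora_vote_raw_256m -T O -w n -s n -r n ccvg 1
# /usr/sbin/mklv -y ora_ocr_raw_256m -T O -w n -s n -r n ccvg 1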

4.3.7 Importing the Volume Group on the Other Cluster Nodes

To make the volume group available to all nodes in the cluster, you must import it on each node, as follows:

  1. Because the physical volume names may be different on the other nodes, enter the following command to determine the PVID of the physical volumes used by the volume group:

    # /usr/sbin/lspv
    
  2. Note the PVIDs of the physical devices used by the volume group.

  3. To vary off the volume group that you want to use, enter a command similar to the following on the node where you created it:

    # /usr/sbin/varyoffvg VGname
    
  4. On each cluster node, complete the following steps:

    1. Enter the following command to determine the physical volume names associated with the PVIDs you noted previously:

      # /usr/sbin/lspv
      
    2. On each node of the cluster, enter commands similar to the following to import the volume group definitions:

      # /usr/sbin/importvg -y VGname -V MajorNumber PhysicalVolume
      

      In this example, MajorNumber is the device major number for the volume group and PhysicalVolume is the name of one of the physical volumes in the volume group.

      For example, to import the definition of the oracle_vg1 volume group with device major number 45 on the hdisk3 and hdisk4 physical volumes, enter the following command:

      # /usr/sbin/importvg -y oracle_vg1 -V 45 hdisk3
      
    3. Change the owner, group, and permissions on the character device files associated with the logical volumes you created, as follows:

      # chown oracle:dba /dev/rora_vote_raw_256m
      # chmod 660 /dev/rora_vote_raw_256m
      # chown root:oinstall /dev/rora_ocr_raw_256m
      # chmod 640 /dev/rora_ocr_raw_256m
      
    4. Enter the following command to ensure that the volume group will not be activated by the operating system when the node starts:

      # /usr/sbin/chvg -a n VGname
      

4.3.8 Activating the Volume Group in Concurrent Mode on All Cluster Nodes

To activate the volume group in concurrent mode on all cluster nodes, enter the following command on each node:

# /usr/sbin/varyonvg -c VGname
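
To confirm that the volume group is active on a node, you can list the varied-on volume groups, for example:

# /usr/sbin/lsvg -o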