Oracle® Clusterware Installation Guide
11g Release 1 (11.1) for HP-UX

Part Number B28259-05
4 Configuring Oracle Clusterware Storage

This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer. It includes information about the following tasks:

4.1 Reviewing Storage Options for Oracle Clusterware Files

This section describes supported options for storing Oracle Clusterware files. It includes the following sections:

4.1.1 Overview of Storage Options

There are two ways of storing Oracle Clusterware files:

  • A supported shared file system: Supported file systems include the following:

    • A supported cluster file system

      See Also:

      The Certify page on OracleMetaLink for the status of supported cluster file systems, as none are certified at the time of this release
    • Network File System (NFS): A file-level protocol that enables access and sharing of files

    See Also:

    The Certify page on OracleMetaLink for supported Network Attached Storage (NAS) devices
  • Raw Devices: Oracle Clusterware files can be placed on raw devices based on shared disk partitions.

4.1.2 General Storage Considerations for Oracle Clusterware

For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files, or for Oracle Clusterware with Oracle Real Application Clusters databases (Oracle RAC). You do not have to use the same storage option for each file type.

Oracle Clusterware files include the voting disks, which are used to monitor cluster node status, and the Oracle Cluster Registry (OCR), which contains configuration information about the cluster. The voting disks and OCR are shared files on a cluster or network file system environment. If you do not use a cluster file system, then you must place these files on shared block devices or shared raw devices. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation.

For voting disk file placement, Oracle recommends that each voting disk be configured so that it does not share a hardware device, disk, or other single point of failure with any other voting disk. Any node that cannot access an absolute majority of the configured voting disks (more than half) is restarted. For example, with three voting disks configured, a node must be able to access at least two of them to remain in the cluster.

The following table shows the storage options supported for storing Oracle Clusterware files. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).

Note:

For the most up-to-date information about supported storage options for RAC installations, refer to the Certify pages on the OracleMetaLink Web site:
https://metalink.oracle.com

Table 4-1 Supported Storage Options for Oracle Clusterware

Storage Option                              OCR and Voting Disks    Oracle Software

Automatic Storage Management                No                      No

Local storage                               No                      Yes

NFS file system                             Yes                     Yes
(Note: Requires a certified NAS device)

Shared disk partitions (raw devices)        Yes                     No

Shared Logical Volume Manager (SLVM)        Yes                     No


Use the following guidelines when choosing the storage options that you want to use for Oracle Clusterware:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any ASM instance starts.

  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.

4.1.3 Quorum Disk Location Restriction with Existing 9.2 Clusterware Installations

When upgrading your Oracle9i release 9.2 Oracle RAC environment to Oracle Database 11g release 1 (11.1), you are prompted to specify one or more voting disks during the Oracle Clusterware installation. You must specify a new location for the voting disk in Oracle Database 11g release 1 (11.1). You cannot reuse the old Oracle9i release 9.2 quorum disk for this purpose.

4.1.4 After You Have Selected Disk Storage Options

When you have determined your disk storage options, you must perform the following tasks in the order listed:

  1. Check for available shared storage with CVU

     Refer to Checking for Available Shared Storage with CVU

  2. Configure shared storage for Oracle Clusterware files

     Refer to Configuring Storage for Oracle Clusterware Files on a Supported Shared File System or to Configuring Storage for Oracle Clusterware Files on Raw Devices

4.2 Checking for Available Shared Storage with CVU

To check for all shared file systems available across all nodes on the cluster, log in as the installation owner user (oracle or crs) and use the following syntax:

/mountpoint/runcluvfy.sh comp ssa -n node_list

If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:

/mountpoint/runcluvfy.sh comp ssa -n node_list -s storageID_list

In the preceding syntax examples, the variable mountpoint is the mount point path of the installation media, the variable node_list is a comma-delimited list of the nodes that you want to check, and the variable storageID_list is a comma-delimited list of paths to the storage devices that you want to check.

For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dw/dsk/c1t2d3 and /dw/dsk/c2t4d5, and your mountpoint is /dev/dvdrom/, then enter the following command:

/dev/dvdrom/clusterware/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dw/dsk/c1t2d3,/dw/dsk/c2t4d5

If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
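
For example, to check all storage devices that are shared between node1 and node2, without limiting the check to particular devices (a sketch that assumes the same /dev/dvdrom/ installation media mount point as the previous example), enter:

/dev/dvdrom/clusterware/cluvfy/runcluvfy.sh comp ssa -n node1,node2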

4.3 Configuring Storage for Oracle Clusterware Files on a Supported Shared File System

Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:

Note:

The OCR is a file that contains the configuration information and status of the cluster. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation. Database Configuration Assistant uses the OCR for storing the configurations for the cluster databases that it creates.

4.3.1 Requirements for Using a File System for Oracle Clusterware Files

To use a file system for Oracle Clusterware files, the file system must comply with the following requirements:

  • To use an NFS file system, it must be on a certified NAS device. Log in to OracleMetaLink at the following URL, and click the Certify tab to find a list of certified NAS devices.

    https://metalink.oracle.com/

  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then one of the following must be true:

    • The disks used for the file system are on a highly available storage device (for example, a RAID device that implements file redundancy).

    • At least two file systems are mounted, and you use the features of Oracle Database 11g Release 1 (11.1) to provide redundancy for the OCR.

  • If you intend to use a shared file system to store database files, then use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

  • The user account with which you perform the installation (oracle or crs) must have write permissions to create the files in the path that you specify.
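
    For example, a quick way to confirm write access (a sketch that assumes a hypothetical /mount_point path and the oracle installation owner) is to create and remove a test file as that user:

    # su - oracle -c "touch /mount_point/test_file && rm /mount_point/test_file"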

Note:

If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.

If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting disk partitions, then you can continue to use those partition sizes.

Use Table 4-2 to determine the partition size for shared file systems.

Table 4-2 Shared File System Volume Size Requirements

File Types Stored                                    Number of Volumes   Volume Size

Oracle Clusterware files (OCR and voting disks)      1                   At least 280 MB for each volume
with external redundancy

Oracle Clusterware files (OCR and voting disks)      1                   At least 280 MB for each volume
with redundancy provided by Oracle software


In Table 4-2, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on the shared file system with normal redundancy, you should have at least 1.3 GB of storage available over a minimum of three volumes (two separate volume locations for the OCR and OCR mirror, and one voting disk on each volume).

4.3.2 Checking NFS Mount Buffer Size Parameters for Clusterware

If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768. Update the /etc/fstab file on each node with an entry similar to the following:

nfs_server:/vol/DATA/oradata  /home/oracle/netapp     nfs\   
rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid

Note:

Refer to your storage vendor documentation for additional information about mount options.
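
After you update /etc/fstab, one way to confirm that the options take effect (a sketch that assumes the /home/oracle/netapp mount point shown above) is to mount the file system on each node and review the mounted options:

# mount /home/oracle/netapp
# mount -v | grep netapp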

4.3.3 Creating Required Directories for Oracle Clusterware Files on Shared File Systems

Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.

Note:

For NFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.

To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:

  1. If necessary, configure the shared file systems that you want to use and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
  2. Use the bdf command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems that you want to use. Choose a file system with a minimum of 560 MB of free disk space (one OCR and one voting disk, with external redundancy).

    If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, crs or oracle) has permissions to create directories on the disks where you plan to install Oracle Clusterware and Oracle Database, then OUI creates the Oracle Clusterware file directory, and DBCA creates the Oracle Database file directory and the recovery file directory.

    If the user performing installation does not have write access, then you must create these directories manually. Use commands similar to the following to create the recommended subdirectories in each of the mount point directories and to set the appropriate owner, group, and permissions on the Oracle Clusterware home (or CRS home). For example, where the user is oracle and the CRS home is oracrs:

    # mkdir /mount_point/oracrs
    # chown oracle:oinstall /mount_point/oracrs
    # chmod 640 /mount_point/oracrs
    

    Note:

    After installation, directories in the installation path for the Oracle Cluster Registry (OCR) files should be owned by root, and not writable by any account other than root.

4.4 Configuring Storage for Oracle Clusterware Files on Raw Devices

The following subsections describe how to configure Oracle Clusterware files on raw partitions.

4.4.1 Identifying Required Raw Partitions for Clusterware Files

Table 4-3 lists the number and size of the raw partitions that you must configure for Oracle Clusterware files.

Table 4-3 Raw Partitions Required for Oracle Clusterware Files

Number                                               Size for Each Partition (MB)   Purpose

2 (or 1, if you have external redundancy             256                            Oracle Cluster Registry
support for this file)

3 (or 1, if you have external redundancy             256                            Oracle Clusterware voting disks
support for this file)

Note (Oracle Cluster Registry): You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Cluster Registry (OCR). You should create two partitions: one for the OCR, and one for a mirrored OCR. If you are upgrading from Oracle9i release 2, then you can continue to use the raw device that you used for the SRVM configuration repository instead of creating this new raw device.

Note (voting disks): You need to create these raw partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Clusterware voting disk. You should create three partitions: one for the voting disk, and two for additional voting disks.

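One way to confirm that a candidate device is large enough (a sketch that assumes the hypothetical device /dev/rdsk/c2t1d1) is to check its size with the HP-UX diskinfo command, which reports the size in KB; 256 MB corresponds to 262144 KB:

# /usr/sbin/diskinfo /dev/rdsk/c2t1d1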

4.4.2 Disabling Operating System Activation of Shared Volume Groups

To prevent the operating system from activating shared volume groups when it starts, you must edit the /etc/lvmrc file on every node, as follows:

  1. Create a backup copy of the /etc/lvmrc file:

    # cp /etc/lvmrc /etc/lvmrc_orig
    
  2. Open the /etc/lvmrc file in any text editor and search for the AUTO_VG_ACTIVATE flag.

  3. If necessary, change the value of the AUTO_VG_ACTIVATE flag to 0, to disable automatic volume group activation, as follows:

    AUTO_VG_ACTIVATE=0
    
  4. Search for the custom_vg_activation function in the /etc/lvmrc file.

  5. Add vgchange commands to the function, as shown in the following example, to automatically activate existing local volume groups:

    custom_vg_activation()
    {
            # e.g. /sbin/vgchange -a y -s
            #      parallel_vg_sync "/dev/vg00 /dev/vg01"
            #      parallel_vg_sync "/dev/vg02 /dev/vg03"
    
            /sbin/vgchange -a y vg00
            /sbin/vgchange -a y vg01
            /sbin/vgchange -a y vg02
    
            return 0
    }
    

    In this example, vg00, vg01, and vg02 are the volume groups that you want to activate automatically when the system restarts.
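
    After you edit the file on every node, a quick check with the standard grep command confirms that automatic volume group activation is disabled; the output should include the line AUTO_VG_ACTIVATE=0:

    # grep AUTO_VG_ACTIVATE /etc/lvmrc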

4.4.3 Configuring Raw Disk Devices Without HP Serviceguard Extension

If you are installing Oracle Clusterware or Oracle Clusterware and Oracle Real Application Clusters on an HP-UX cluster without HP Serviceguard Extension for RAC, then you must use shared raw disk devices for the Oracle Clusterware files. You can also use shared raw disk devices for database file storage, however, Oracle recommends that you use Automatic Storage Management to store database files in this situation. This section describes how to configure the shared raw disk devices for Oracle Clusterware files (Oracle Cluster Registry and Oracle Clusterware voting disk) and database files.

Table 4-4 lists the number and size of the raw disk devices that you must configure for database files.

Note:

Because each file requires exclusive use of a complete disk device, Oracle recommends that, if possible, you use disk devices with sizes that closely match the size requirements of the files that they will store. You cannot use the disks that you choose for these files for any other purpose.

Table 4-4 Raw Disk Devices Required for Database Files on HP-UX

Number                    Size (MB)                            Purpose and Sample Alternative Device File Name

1                         500                                  SYSTEM tablespace:
                                                               dbname_system_raw_500m

1                         300 + (Number of instances * 250)    SYSAUX tablespace:
                                                               dbname_sysaux_raw_800m

Number of instances       500                                  UNDOTBSn tablespace (One tablespace for each instance,
                                                               where n is the number of the instance):
                                                               dbname_undotbsn_raw_500m

1                         250                                  TEMP tablespace:
                                                               dbname_temp_raw_250m

1                         160                                  EXAMPLE tablespace:
                                                               dbname_example_raw_160m

1                         120                                  USERS tablespace:
                                                               dbname_users_raw_120m

2 * number of instances   120                                  Two online redo log files for each instance (where n is
                                                               the number of the instance and m is the log number,
                                                               1 or 2):
                                                               dbname_redon_m_raw_120m

2                         110                                  First and second control files:
                                                               dbname_control{1|2}_raw_110m

1                         5                                    Server parameter file (SPFILE):
                                                               dbname_spfile_raw_5m

1                         5                                    Password file:
                                                               dbname_pwdfile_raw_5m

To configure shared raw disk devices for Oracle Clusterware files, database files, or both:

  1. If you intend to use raw disk devices for database file storage, then choose a name for the database that you want to create.

    The name that you choose must start with a letter and have no more than four characters, for example, orcl.

  2. Identify or configure the required disk devices.

    The disk devices must be shared on all of the cluster nodes.

  3. To ensure that the disks are available, enter the following command on every node:

    # /usr/sbin/ioscan -fun -C disk
    

    The output from this command is similar to the following:

    Class  I  H/W Path    Driver S/W State   H/W Type     Description
    ==========================================================================
    disk    0  0/0/1/0.6.0 sdisk  CLAIMED     DEVICE       HP   DVD-ROM 6x/32x
                           /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
    disk    1  0/0/1/1.2.0 sdisk  CLAIMED     DEVICE      SEAGATE ST39103LC
                           /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0
    

    This command displays information about each disk attached to the system, including the block device name (/dev/dsk/cxtydz) and the character raw device name (/dev/rdsk/cxtydz).

    Note:

    On HP-UX 11i v.3, you can also use agile view to review mass storage devices, including block devices (/dev/disk/diskxyz), or character raw devices (/dev/rdisk/diskxyz). For example:
    # ioscan -funN -C disk
    Class     I  H/W Path  Driver S/W State   H/W Type     Description
    ===================================================================
    disk      4  64000/0xfa00/0x1   esdisk   CLAIMED     DEVICE       HP 73.4GST373454LC
                     /dev/disk/disk4   /dev/rdisk/disk4
    disk    907  64000/0xfa00/0x2f  esdisk   CLAIMED     DEVICE       COMPAQ  MSA1000 VOLUME
                     /dev/disk/disk907   /dev/rdisk/disk907
    
  4. If the ioscan command does not display device name information for a device that you want to use, then enter the following command to install the special device files for any new devices:

    # /usr/sbin/insf -e
    
  5. For each disk that you want to use, enter the following command on any node to verify that it is not already part of an LVM volume group:

    # /sbin/pvdisplay /dev/dsk/cxtydz
    

    If this command displays volume group information, then the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group.

    Note:

    If you are using different volume management software, for example VERITAS Volume Manager, then refer to the appropriate documentation for information about verifying that a disk is not in use.
  6. If the ioscan command shows different device names for the same device on any node, then:

    1. Change directory to the /dev/rdsk directory.

    2. Enter the following command to list the raw disk device names and their associated major and minor numbers:

      # ls -la
      

      The output from this command is similar to the following for each disk device:

      crw-r--r--   1 bin        sys        188 0x032000 Nov  4  2003 c3t2d0
      

       In this example, 188 is the device major number and 0x032000 is the device minor number.

    3. Enter the following command to create a new device file for the disk that you want to use, specifying the same major and minor number as the existing device file:

      Note:

      Oracle recommends that you use the alternative device file names shown in the previous table.
      # mknod ora_ocr_raw_256m c 188 0x032000
      
    4. Repeat these steps on each node, specifying the correct major and minor numbers for the new device files on each node.
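
      To confirm that the device files match across nodes (a sketch that assumes the ora_ocr_raw_256m device file created in the previous example), compare the major and minor numbers reported by the following command on each node:

      # ls -la /dev/rdsk/ora_ocr_raw_256m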

  7. Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk device that you want to use:

    Note:

    If you are using a multi-pathing disk driver with Automatic Storage Management, then ensure that you set the permissions only on the correct logical device name for the disk.

    If you created an alternative device file for the device, then set the permissions on that device file.

    • OCR:

      # chown root:oinstall /dev/rdsk/cxtydz
      # chmod 640 /dev/rdsk/cxtydz
      
    • Oracle Clusterware voting disk or database files:

      # chown oracle:dba /dev/rdsk/cxtydz
      # chmod 660 /dev/rdsk/cxtydz
      

      Note:

      For DSF (agile view) paths, enter commands using paths similar to the following:
      # chmod 660 /dev/rdisk/diskxyz
      
  8. If you are using raw disk devices for database files, then follow these steps to create the Oracle Database Configuration Assistant raw device mapping file:

    Note:

    You must complete this procedure only if you are using raw devices for database files. The Oracle Database Configuration Assistant raw device mapping file enables Oracle Database Configuration Assistant to identify the appropriate raw disk device for each database file. You do not specify the raw devices for the Oracle Clusterware files in the Oracle Database Configuration Assistant raw device mapping file.
    1. Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:

      • Bourne, Bash, or Korn shell:

        $ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
        
      • C shell:

        % setenv ORACLE_BASE /u01/app/oracle
        
    2. Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

      # mkdir -p $ORACLE_BASE/oradata/dbname
      # chown -R oracle:oinstall $ORACLE_BASE/oradata
      # chmod -R 775 $ORACLE_BASE/oradata
      

      In this example, dbname is the name of the database that you chose previously.

    3. Change directory to the $ORACLE_BASE/oradata/dbname directory.

    4. Using any text editor, create a text file similar to the following that identifies the disk device file name associated with each database file.

      Oracle recommends that you use a file name similar to dbname_raw.conf for this file.

      Note:

      The following example shows a sample mapping file for a two-instance RAC cluster. Some of the devices use alternative disk device file names. Ensure that the device file name that you specify identifies the same disk device on all nodes.
      system=/dev/rdsk/c2t1d1
      sysaux=/dev/rdsk/c2t1d2
      example=/dev/rdsk/c2t1d3
      users=/dev/rdsk/c2t1d4
      temp=/dev/rdsk/c2t1d5
      undotbs1=/dev/rdsk/c2t1d6
      undotbs2=/dev/rdsk/c2t1d7
      redo1_1=/dev/rdsk/c2t1d8
      redo1_2=/dev/rdsk/c2t1d9
      redo2_1=/dev/rdsk/c2t1d10
      redo2_2=/dev/rdsk/c2t1d11
      control1=/dev/rdsk/c2t1d12
      control2=/dev/rdsk/c2t1d13
      spfile=/dev/rdsk/dbname_spfile_raw_5m
      pwdfile=/dev/rdsk/dbname_pwdfile_raw_5m
      

      In this example, dbname is the name of the database.

      Use the following guidelines when creating or editing this file:

      • Each line in the file must have the following format:

        database_object_identifier=device_file_name
        

        The alternative device file names suggested in the previous table include the database object identifier that you must use in this mapping file. For example, in the following alternative disk device file name, redo1_1 is the database object identifier:

        rac_redo1_1_raw_120m
        
      • For a RAC database, the file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

      • Specify at least two control files (control1, control2).

      • To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.

    5. Save the file and note the file name that you specified.

    6. When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
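
      For example (a sketch that assumes the Oracle base directory and the dbname_raw.conf file name used earlier in this procedure):

      • Bourne, Bash, or Korn shell:

        $ DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf ; export DBCA_RAW_CONFIG

      • C shell:

        % setenv DBCA_RAW_CONFIG $ORACLE_BASE/oradata/dbname/dbname_raw.conf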

  9. When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:

    /dev/rdsk/cxtydz
    

4.4.4 Configuring Shared Raw Logical Volumes With HP Serviceguard Extension

Note:

The following subsections describe how to create logical volumes on systems with HP Serviceguard extension, using the command line. You can use SAM to complete the same tasks. Refer to the HP-UX documentation for more information about using SAM.

This section describes how to configure shared raw logical volumes for Oracle Clusterware and database file storage for an Oracle Real Application Clusters (RAC) database. The procedures in this section describe how to create a new shared volume group that contains the logical volumes required for both types of files.

To use shared raw logical volumes, HP Serviceguard Extension for RAC must be installed on all cluster nodes. If HP Serviceguard Extension for RAC is not installed, then you can use shared raw disk devices to store the Oracle Clusterware or database files. However, Oracle recommends that you use this method only for the Oracle Clusterware files and use an alternative method such as Automatic Storage Management for database file storage.

Before you continue, review the following guidelines which contain important information about using shared logical volumes with this release of RAC:

  • You must use shared volume groups for Oracle Clusterware and database files.

  • The Oracle Clusterware files require less than 560 MB of disk space, with external redundancy. To make efficient use of the disk space in a volume group, Oracle recommends that you use the same shared volume group for the logical volumes for both the Oracle Clusterware files and the database files.

  • If you are upgrading an existing Oracle9i release 2 RAC installation that uses raw logical volumes, then you can use the existing SRVM configuration repository logical volume for the OCR and create a new logical volume in the same volume group for the Oracle Clusterware voting disk. However, before you install Oracle Clusterware, you must remove this volume group from any Serviceguard package that currently activates it.

    See Also:

    The HP Serviceguard or HP Serviceguard Extension for RAC documentation for information about removing a volume group from a Serviceguard package.

    Note:

    If you are upgrading a database, then you must also create a new logical volume for the SYSAUX tablespace. Refer to the "Create Raw Logical Volumes in the New Volume Group" section for more information about the requirements for the Oracle Clusterware voting disk and SYSAUX logical volumes.
  • You must use either your own startup script or a Serviceguard package to activate new or existing volume groups that contain only database files and Oracle Clusterware files.

    See Also:

    The HP Serviceguard documentation for information about creating a Serviceguard package to activate a shared volume group for RAC
  • All shared volume groups that you intend to use for Oracle Clusterware or database files must be activated in shared mode before you start the installation.

  • All shared volume groups that you are using for RAC, including the volume group that contains the Oracle Clusterware files, must be specified in the cluster configuration file using the parameter OPS_VOLUME_GROUP.

    Note:

    If you create a new shared volume group for RAC on an existing HP Serviceguard cluster, then you must reconfigure and restart the cluster before installing Oracle Clusterware. Refer to the HP Serviceguard documentation for information about configuring the cluster and specifying shared volume groups.
  • The procedures in this section describe how to create basic volume groups and volumes. If you want to configure more complex volumes, using mirroring for example, then use this section in conjunction with the HP Serviceguard documentation.

4.4.4.1 Create a Volume Group

To create a volume group:

  1. If necessary, install the shared disks that you intend to use for the database.

  2. To ensure that the disks are available, enter the following command on every node:

    # /sbin/ioscan -fun -C disk
    

    The output from this command is similar to the following:

    Class  I  H/W Path    Driver S/W State   H/W Type     Description
    ==========================================================================
    disk    0  0/0/1/0.6.0 sdisk  CLAIMED     DEVICE       HP   DVD-ROM 6x/32x
                           /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
    disk    1  0/0/1/1.2.0 sdisk  CLAIMED     DEVICE      SEAGATE ST39103LC
                           /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0
    disk    2  0/0/2/0.2.0 sdisk  CLAIMED     DEVICE       SEAGATE ST118202LC
                           /dev/dsk/c2t2d0   /dev/rdsk/c2t2d0
    

    This command displays information about each disk attached to the system, including the block device name (/dev/dsk/cxtydz) and the character raw device name (/dev/rdsk/cxtydz).

  3. If the ioscan command does not display device name information for a device that you want to use, then enter the following command to install the special device files for any new devices:

    # /usr/sbin/insf -e
    
  4. For each disk that you want to add to the volume group, enter the following command on any node to verify that it is not already part of an LVM volume group:

    # /sbin/pvdisplay /dev/dsk/cxtydz
    

    If this command displays volume group information, then the disk is already part of a volume group.

  5. For each disk that you want to add to the volume group, enter a command similar to the following on any node:

    # /sbin/pvcreate /dev/rdsk/cxtydz
    
  6. To create a directory for the volume group in the /dev directory, enter a command similar to the following, where vg_name is the name that you want to use for the volume group:

    # mkdir /dev/vg_name
    
  7. To identify used device minor numbers, enter the following command on each node of the cluster:

    # ls -la /dev/*/group
    

    This command displays information about the device numbers used by all configured volume groups, similar to the following:

    crw-r-----   1 root    sys        64 0x000000 Mar  4  2002 /dev/vg00/group
    crw-r--r--   1 root    sys        64 0x010000 Mar  4  2002 /dev/vg01/group
    

    In this example, 64 is the major number used by all volume group devices and 0x000000 and 0x010000 are the minor numbers used by volume groups vg00 and vg01 respectively. Minor numbers have the format 0xnn0000, where nn is a number in the range 00 to the value of the maxvgs kernel parameter minus 1. The default value for the maxvgs parameter is 10, so the default range is 00 to 09.

  8. Identify an appropriate minor number that is unused on all nodes in the cluster.

  9. To create the volume group and activate it, enter commands similar to the following:

    # /sbin/mknod /dev/vg_name/group c 64 0xnn0000
    # /sbin/vgcreate /dev/vg_name /dev/dsk/cxtydz . . .
    # /sbin/vgchange -a y vg_name
    

    In this example:

    • vg_name is the name that you want to give to the volume group

    • 0xnn0000 is a minor number that is unused on all nodes in the cluster

    • /dev/dsk/cxtydz... is a list of one or more block device names for the disks that you want to add to the volume group
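
    For example, the following hypothetical sketch assumes a volume group named oracle_vg, the /dev/oracle_vg directory created in step 6, an unused minor number 0x080000, and two shared disks, /dev/dsk/c4t0d0 and /dev/dsk/c4t0d1:

    # /sbin/mknod /dev/oracle_vg/group c 64 0x080000
    # /sbin/vgcreate /dev/oracle_vg /dev/dsk/c4t0d0 /dev/dsk/c4t0d1
    # /sbin/vgchange -a y oracle_vg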

4.4.4.2 Create Raw Logical Volumes in the New Volume Group

To create the required raw logical volumes in the new volume group:

  1. Choose a name for the database that you want to create.

    The name that you choose must start with a letter and have no more than four characters, for example, orcl.

  2. Identify the logical volumes that you must create.

    Table 4-5 lists the number and size of the logical volumes that you must create for database files.

    Table 4-5 Raw Logical Volumes Required for Database Files on HP-UX

    Number                    Size (MB)                            Purpose and Sample Logical Volume Name

    1                         500                                  SYSTEM tablespace:
                                                                   dbname_system_raw_500m

    1                         300 + (Number of instances * 250)    SYSAUX tablespace:
                                                                   dbname_sysaux_raw_800m

    Number of instances       500                                  UNDOTBSn tablespace (One tablespace for each instance,
                                                                   where n is the number of the instance):
                                                                   dbname_undotbsn_raw_500m

    1                         250                                  TEMP tablespace:
                                                                   dbname_temp_raw_250m

    1                         160                                  EXAMPLE tablespace:
                                                                   dbname_example_raw_160m

    1                         120                                  USERS tablespace:
                                                                   dbname_users_raw_120m

    2 * number of instances   120                                  Two online redo log files for each instance (where n is
                                                                   the number of the instance and m is the log number,
                                                                   1 or 2):
                                                                   dbname_redon_m_raw_120m

    2                         110                                  First and second control files:
                                                                   dbname_control{1|2}_raw_110m

    1                         5                                    Server parameter file (SPFILE):
                                                                   dbname_spfile_raw_5m

    1                         5                                    Password file:
                                                                   dbname_pwdfile_raw_5m
    

  3. To create each required logical volume, enter a command similar to the following:

    # /sbin/lvcreate -n LVname -L size /dev/vg_name
    

    In this example:

    • LVname is the name of the logical volume that you want to create

      Oracle recommends that you use the sample names shown in the previous table for the logical volumes. Substitute the dbname variable in the sample logical volume name with the name that you chose for the database in step 1.

    • vg_name is the name of the volume group where you want to create the logical volume

    • size is the size of the logical volume in megabytes

    The following example shows a sample command used to create an 800 MB logical volume in the oracle_vg volume group for the SYSAUX tablespace of a database named test:

    # /sbin/lvcreate -n test_sysaux_raw_800m -L 800 /dev/oracle_vg
    
  4. Change the owner, group, and permissions on the character device files associated with the logical volumes that you created, as follows:

    # chown oracle:dba /dev/vg_name/r*
    # chmod 755 /dev/vg_name
    # chmod 660 /dev/vg_name/r*
    
  5. Change the owner and group on the character device file associated with the logical volume for the Oracle Cluster Registry, as follows:

    # chown root:oinstall /dev/vg_name/rora_ocr_raw_256m
    
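  If you are also storing the Oracle Clusterware files in this volume group, create logical volumes for them as well. The following hypothetical sketch assumes the oracle_vg volume group, the sample name ora_ocr_raw_256m (whose character device file, rora_ocr_raw_256m, is referenced in step 5), and the sample name ora_vote_raw_256m for a voting disk; the sizes follow Table 4-3:

  # /sbin/lvcreate -n ora_ocr_raw_256m -L 256 /dev/oracle_vg
  # /sbin/lvcreate -n ora_vote_raw_256m -L 256 /dev/oracle_vg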

4.4.4.3 Export the Volume Group and Import It on the Other Cluster Nodes

To export the volume group and import it on the other cluster nodes:

  1. Deactivate the volume group:

    # /sbin/vgchange -a n vg_name
    
  2. To export the description of the volume group and its associated logical volumes to a map file, enter a command similar to the following:

    # /sbin/vgexport -v -s -p -m /tmp/vg_name.map /dev/vg_name
    
  3. Enter commands similar to the following to copy the map file to the other cluster nodes:

    # rcp /tmp/vg_name.map nodename:/tmp/vg_name.map
    
  4. Enter commands similar to the following on the other cluster nodes to import the volume group that you created on the first node:

    # mkdir /dev/vg_name
    # /sbin/mknod /dev/vg_name/group c 64 0xnn0000
    # /sbin/vgimport -v -s -m /tmp/vg_name.map /dev/vg_name
    
  5. Enter commands similar to the following on the other cluster nodes to change the owner, group, and permissions on the character device files associated with the logical volumes that you created:

    # chown oracle:dba /dev/vg_name/r*
    # chmod 755 /dev/vg_name
    # chmod 660 /dev/vg_name/r*
    
  6. Change the owner and group on the character device file associated with the logical volume for the Oracle Cluster Registry, as follows:

    # chown root:oinstall /dev/vg_name/rora_ocr_raw_256m
    

4.4.4.4 Activate the Volume Group in Shared Mode on All Cluster Nodes

To activate the volume group in shared mode on all cluster nodes, enter the following command on each node:

# /sbin/vgchange -a s vg_name
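
To confirm that the volume group is active on a node (a sketch that assumes a volume group named oracle_vg), you can review its status with the vgdisplay command:

# /sbin/vgdisplay oracle_vg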

4.4.5 Create the Oracle Database Configuration Assistant Raw Device Mapping File

Note:

You must complete this procedure only if you are using raw logical volumes for database files. You do not specify the raw logical volumes for the Oracle Clusterware files in the Oracle Database Configuration Assistant raw device mapping file.

To enable Oracle Database Configuration Assistant to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:

  1. Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:

    • Bourne, Bash, or Korn shell:

      $ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
      
    • C shell:

      % setenv ORACLE_BASE /u01/app/oracle
      
  2. Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

    # mkdir -p $ORACLE_BASE/oradata/dbname
    # chown -R oracle:oinstall $ORACLE_BASE/oradata
    # chmod -R 775 $ORACLE_BASE/oradata
    

    In this example, dbname is the name of the database that you chose previously.

  3. Change directory to the $ORACLE_BASE/oradata/dbname directory.

  4. Enter a command similar to the following to create a text file that you can use to create the raw device mapping file:

    # find /dev/vg_name -user oracle -name 'r*' -print > dbname_raw.conf
    
  5. Edit the dbname_raw.conf file in any text editor to create a file similar to the following:

    Note:

    The following example shows a sample mapping file for a two-instance RAC cluster.
    system=/dev/vg_name/rdbname_system_raw_500m
    sysaux=/dev/vg_name/rdbname_sysaux_raw_800m
    example=/dev/vg_name/rdbname_example_raw_160m
    users=/dev/vg_name/rdbname_users_raw_120m
    temp=/dev/vg_name/rdbname_temp_raw_250m
    undotbs1=/dev/vg_name/rdbname_undotbs1_raw_500m
    undotbs2=/dev/vg_name/rdbname_undotbs2_raw_500m
    redo1_1=/dev/vg_name/rdbname_redo1_1_raw_120m
    redo1_2=/dev/vg_name/rdbname_redo1_2_raw_120m
    redo2_1=/dev/vg_name/rdbname_redo2_1_raw_120m
    redo2_2=/dev/vg_name/rdbname_redo2_2_raw_120m
    control1=/dev/vg_name/rdbname_control1_raw_110m
    control2=/dev/vg_name/rdbname_control2_raw_110m
    spfile=/dev/vg_name/rdbname_spfile_raw_5m
    pwdfile=/dev/vg_name/rdbname_pwdfile_raw_5m
    

    In this example:

    • vg_name is the name of the volume group

    • dbname is the name of the database

    Use the following guidelines when creating or editing this file:

    • Each line in the file must have the following format:

      database_object_identifier=logical_volume
      

      The logical volume names suggested in this manual include the database object identifier that you must use in this mapping file. For example, in the following logical volume name, redo1_1 is the database object identifier:

      /dev/oracle_vg/rrac_redo1_1_raw_120m
      
    • The file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

    • Specify at least two control files (control1, control2).

    • To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.

  6. Save the file and note the file name that you specified.

  7. When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.