Oracle® Database Oracle Clusterware Installation Guide 11g Release 1 (11.1) for AIX Part Number B28258-01
This chapter includes storage administration tasks that you should complete if you intend to use Oracle Clusterware with Oracle Real Application Clusters (Oracle RAC).
This chapter contains the following topics:
Reviewing Storage Options for Oracle Database and Recovery Files
Configuring Storage for Oracle Database Files on a Supported Shared File System
Configuring Storage for Oracle Database Files on Shared Storage Devices
Desupport of the Database Configuration Assistant Raw Device Mapping File
This section describes supported options for storing Oracle Database files, and recovery files.
See Also:
The Oracle Certify site for a list of supported vendors for Network Attached Storage options:
http://www.oracle.com/technology/support/metalink/
Refer also to the Certify site on OracleMetalink for the most current information about certified storage options:
https://metalink.oracle.com/
There are three ways of storing Oracle Database and recovery files:
Automatic Storage Management: Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager for Oracle Database files. It performs striping and mirroring of database files automatically.
A supported shared file system: Supported file systems include the following:
General Parallel File System (GPFS): Note that if you intend to use GPFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware. If you intend to store Oracle Clusterware files on GPFS, then you must ensure that GPFS volume sizes are at least 500 MB each.
NAS Network File System (NFS) listed on Oracle Certify: Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.
See Also:
The Certify page on OracleMetalink for supported Network Attached Storage (NAS) devices, and supported cluster file systems
Raw Devices: A partition is required for each database file. If you do not use ASM, then for new installations on raw devices, you must use a custom installation.
For all installations, you must choose the storage option that you want to use for Oracle Database files, or for Oracle Clusterware with Oracle RAC. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the flash recovery area). You do not have to use the same storage option for each file type.
For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use ASM, or shared raw disks, if you do not want the failover processing to include dismounting and remounting of local file systems.
The following table shows the storage options supported for storing Oracle Database files and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.
Note:
For the most up-to-date information about supported storage options for Oracle RAC installations, refer to the Certify pages on the OracleMetaLink Web site:
https://metalink.oracle.com
Table 5-1 Supported Storage Options for Oracle Database and Recovery Files
Use the following guidelines when choosing the storage options that you want to use for each file type:
You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.
Oracle recommends that you choose Automatic Storage Management (ASM) as the storage option for database and recovery files.
For Standard Edition Oracle RAC installations, ASM is the only supported storage option for database or recovery files.
If you intend to use ASM with Oracle RAC, and you are configuring a new ASM instance, then your system must meet the following conditions:
All nodes on the cluster have the 11g release 1 (11.1) version of Oracle Clusterware installed.
Any existing ASM instance on any node in the cluster is shut down.
If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database with ASM instances, then you must ensure that your system meets the following conditions:
Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) are run on the node where the Oracle RAC database or Oracle RAC database with ASM instance is located.
The Oracle RAC database or Oracle RAC database with an ASM instance is running on the same nodes that you intend to make members of the new cluster installation. For example, if you have an existing Oracle RAC database running on a three-node cluster, then you must install the upgrade on all three nodes. You cannot upgrade only 2 nodes of the cluster, removing the third instance in the upgrade.
See Also:
Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
After you have installed and configured Oracle Clusterware storage, and after you have reviewed your disk storage options for Oracle Database files, you must perform the following tasks in the order listed:
1: Check for available shared storage with CVU
Refer to Checking for Available Shared Storage with CVU.
2: Configure storage for Oracle Database files and recovery files
To use a shared file system for database or recovery file storage, refer to Configuring Storage for Oracle Database Files on a Supported Shared File System, and ensure that in addition to the volumes you create for Oracle Clusterware files, you also create additional volumes with sizes sufficient to store database files.
To use Automatic Storage Management for database or recovery file storage, refer to "Configuring Disks for Automatic Storage Management"
To use shared devices for database file storage, refer to "Configuring Storage for Oracle Database Files on Shared Storage Devices".
Note:
If you choose to configure database files on raw devices, note that you must complete database software installation first, and then configure storage after installation. You cannot use OUI to configure a database that uses raw devices for storage.
To check for all shared file systems available across all nodes on the cluster on a supported shared file system, log in as the installation owner user (oracle or crs), and use the following syntax:
/mountpoint/runcluvfy.sh comp ssa -n node_list
If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:
/mountpoint/runcluvfy.sh comp ssa -n node_list -s storageID_list
In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the list of nodes that you want to check, separated by commas, and the variable storageID_list is the list of storage device IDs for the storage devices managed by the file system type that you want to check.
For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/rhdisk6 and /dev/rhdisk7, and your mountpoint is /mnt/dvdrom/, then enter the following command:
$ /mnt/dvdrom/runcluvfy.sh comp ssa -n node1,node2 -s /dev/rhdisk6,/dev/rhdisk7
If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
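For example, to search for all available storage devices shared by the same two nodes and using the same mountpoint, omit the -s option:
$ /mnt/dvdrom/runcluvfy.sh comp ssa -n node1,node2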
Database files consist of the files that make up the database, and the recovery area files. There are five options for storing database files:
General Parallel File System (GPFS)
Network File System (NFS)
Automatic Storage Management (ASM)
Raw devices managed by HACMP
Shared raw disk devices (without HACMP or GPFS)
During configuration of Oracle Clusterware, if you selected GPFS or NFS, and the volumes that you created are large enough to hold the database files and recovery files, then you have completed required preinstallation steps. You can proceed to Chapter 6, "Installing Oracle Clusterware".
If you want to place your database files on ASM, then proceed to Configuring Disks for Automatic Storage Management.
If you want to place your database files on raw devices, and manually provide storage management for your database and recovery files, then proceed to "Configuring Storage for Oracle Database Files on Shared Storage Devices".
Note:
Databases can consist of a mixture of ASM files and non-ASM files. Refer to Oracle Database Administrator's Guide for additional information about ASM.
Review the following sections to complete storage requirements for Oracle Database files:
Requirements for Using a File System for Oracle Database Files
Enabling Direct NFS Client Oracle Disk Manager Control of NFS
Disabling Direct NFS Client Oracle Disk Management Control of NFS
Creating Required Directories for Oracle Database Files on Shared File Systems
To use a file system for Oracle Database files, the file system must comply with the following requirements:
To use an NFS file system, it must be on a certified NAS device.
If you choose to place your database files on a shared file system, then one of the following must be true:
The disks used for the file system are on a highly available storage device, (for example, a RAID device that implements file redundancy).
The file systems consist of at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.
The oracle user must have write permissions to create the files in the path that you specify.
Use Table 5-2 to determine the partition size for shared file systems.
Table 5-2 Shared File System Volume Size Requirements
File Types Stored | Number of Volumes | Volume Size |
---|---|---|
Oracle Database files | 1 | At least 1.5 GB for each volume |
Recovery files (Note: Recovery files must be on a different volume than database files) | 1 | At least 2 GB for each volume |
In Table 5-2, the total required volume size is cumulative. For example, to store all database files on the shared file system, you should have at least 3.5 GB of storage available over a minimum of two volumes.
Network-attached storage (NAS) systems use NFS to access data. You can store data files on a supported NFS system.
NFS file systems must be mounted and available over NFS mounts before you start installation. Refer to your vendor documentation to complete NFS configuration and mounting.
This section contains the following information about Direct NFS:
With Oracle Database 11g release 1 (11.1), instead of using the operating system kernel NFS client, you can configure Oracle Database to access NFS V3 servers directly using an Oracle internal Direct NFS client.
To enable Oracle Database to use Direct NFS, the NFS file systems must be mounted and available over regular NFS mounts before you start installation. The mount options used in mounting the file systems are not relevant, as Direct NFS manages settings after installation. Refer to your vendor documentation to complete NFS configuration and mounting.
Some NFS file servers require NFS clients to connect using reserved ports. If your filer is running with reserved port checking, then you must disable it for Direct NFS to operate. To disable reserved port checking, consult your NFS file server documentation.
If you use Direct NFS, then you can choose to use a new file specific for Oracle datafile management, oranfstab, to specify additional options specific for Oracle Database to Direct NFS. For example, you can use oranfstab to specify additional paths for a mount point. You can add the oranfstab file either to /etc or to $ORACLE_HOME/dbs. The oranfstab file is not required to use NFS or Direct NFS.
With Oracle RAC installations, if you want to use Direct NFS, then you must replicate the file /etc/oranfstab on all nodes, and keep each /etc/oranfstab file synchronized on all nodes.
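For example, after editing the file on the first node, you might copy it to each of the remaining nodes; the node name node2 here is a placeholder for your own node names:
# scp /etc/oranfstab node2:/etc/oranfstab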
When the oranfstab file is placed in $ORACLE_HOME/dbs, the entries in the file are specific to a single database. In this case, all nodes running an Oracle RAC database use the same $ORACLE_HOME/dbs/oranfstab file.
When the oranfstab file is placed in /etc, then it is globally available to all Oracle databases, and can contain mount points used by all Oracle databases running on nodes in the cluster, including single-instance databases. However, on Oracle RAC systems, if the oranfstab file is placed in /etc, then you must replicate the /etc/oranfstab file on all nodes, and keep each /etc/oranfstab file synchronized on all nodes, just as you must with the /etc/fstab file.
In all cases, mount points must be mounted by the kernel NFS system, even when they are being served using Direct NFS.
Direct NFS determines mount point settings to NFS storage devices based on the configurations in /etc/filesystems.
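The following is an illustrative /etc/filesystems stanza for an NFS mount on AIX; the mount point, export path, and server name are placeholders, and the mount options should follow your storage vendor's recommendations:
/mnt/oradata1:
        dev       = /vol/oradata1
        vfs       = nfs
        nodename  = MyDataServer1
        mount     = true
        options   = rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600
        account   = false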
Direct NFS searches for mount entries in the following order:
$ORACLE_HOME/dbs/oranfstab
/etc/oranfstab
/etc/filesystems
Direct NFS uses the first matching entry found.
Note:
You can have only one active Direct NFS implementation for each instance. Using Direct NFS on an instance will prevent another Direct NFS implementation.
If Oracle Database uses Direct NFS mount points configured using oranfstab, then it first verifies kernel NFS mounts by cross-checking entries in oranfstab with operating system NFS mount points. If a mismatch exists, then Direct NFS logs an informational message, and does not serve the NFS server. If Oracle Database is unable to open an NFS server using Direct NFS, then Oracle Database uses the platform operating system kernel NFS client. In this case, the kernel NFS mount options must be set up as defined in "Checking NFS Mount Buffer Size Parameters for Oracle RAC". Additionally, an informational message will be logged into the Oracle alert and trace files indicating that Direct NFS could not be established. The Oracle files resident on the NFS server that are served by the Direct NFS Client are also accessible through the operating system kernel NFS client. The usual considerations for maintaining integrity of the Oracle files apply in this situation.
Direct NFS can use up to four network paths defined in the oranfstab file for an NFS server. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, then Direct NFS reissues I/O commands over any remaining paths.
Use the following views for Direct NFS management:
v$dnfs_servers: Shows a table of servers accessed using Direct NFS.
v$dnfs_files: Shows a table of files currently open using Direct NFS.
v$dnfs_channels: Shows a table of open network paths (or channels) to servers for which Direct NFS is providing files.
v$dnfs_stats: Shows a table of performance statistics for Direct NFS.
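For example, after the database is running, you can confirm that Direct NFS is serving files by querying v$dnfs_servers from SQL*Plus. This is a minimal sketch; connect with whatever administrative credentials apply in your environment:
$ $ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> SELECT svrname, dirname FROM v$dnfs_servers;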
Complete the following procedure to enable Direct NFS:
Create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS:
Server: The NFS server name.
Path: Up to four network paths to the NFS server, specified either by IP address, or by name, as displayed using the ifconfig command.
Export: The exported path from the NFS server.
Mount: The local mount point for the NFS server.
Note:
On Linux and UNIX platforms, the location of the oranfstab file is $ORACLE_HOME/dbs.
The following is an example of an oranfstab file with two NFS server entries:
server: MyDataServer1
path: 132.34.35.12
path: 132.34.35.13
export: /vol/oradata1 mount: /mnt/oradata1
server: MyDataServer2
path: NfsPath1
path: NfsPath2
path: NfsPath3
path: NfsPath4
export: /vol/oradata2 mount: /mnt/oradata2
export: /vol/oradata3 mount: /mnt/oradata3
export: /vol/oradata4 mount: /mnt/oradata4
export: /vol/oradata5 mount: /mnt/oradata5
Oracle Database uses an ODM library, libnfsodm10.so, to enable Direct NFS. To replace the standard ODM library, $ORACLE_HOME/lib/libodm10.so, with the ODM NFS library, libnfsodm10.so, complete the following steps:
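The individual steps are not reproduced here. The following is a hedged sketch of a typical library swap, assuming the library names shown above and that all Oracle Database instances using this Oracle home are shut down; the renaming of the stub library corresponds to step 2b referenced in the next section:
$ cd $ORACLE_HOME/lib
$ mv libodm10.so libodm10.so_stub
$ ln -s libnfsodm10.so libodm10.so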
Use one of the following methods to disable the Direct NFS client:
Remove the oranfstab file.
Restore the stub libodm10.so file by reversing the process you completed in step 2b, "Enabling Direct NFS Client Oracle Disk Manager Control of NFS".
Remove the specific NFS server or export paths in the oranfstab file.
Note:
If you remove an NFS path that Oracle Database is using, then you must restart the database for the change to be effective.
If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768.
If you are using Direct NFS, then set the rsize and wsize values to 32768. Direct NFS will not serve an NFS server with write size values (wtmax) less than 32768.
For example, if you decide to use rsize and wsize buffer settings with the value 32768, then update the /etc/fstab file on each node with an entry similar to the following:
nfs_server:/vol/DATA/oradata /home/oracle/netapp nfs\ cio,rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600
Note:
Refer to your storage vendor documentation for additional information about mount options.
Use the following instructions to create directories for shared file systems for Oracle Database and recovery files (for example, for a RAC database).
If necessary, configure the shared file systems that you want to use and mount them on each node.
Note:
The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
Use the df -k command to determine the free disk space on each mounted file system.
From the display, identify the file systems that you want to use:
File Type | File System Requirements |
---|---|
Database files | Choose either: a single file system with at least 1.5 GB of free disk space, or two or more file systems with at least 1.5 GB of free disk space in total. |
Recovery files | Choose a file system with at least 2 GB of free disk space. |
If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.
Note the names of the mount point directories for the file systems that you identified.
If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Database, then DBCA creates the Oracle Database file directory, and the Recovery file directory.
If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:
Database file directory:
# mkdir /mount_point/oradata # chown oracle:oinstall /mount_point/oradata # chmod 775 /mount_point/oradata
Recovery file directory (flash recovery area):
# mkdir /mount_point/flash_recovery_area # chown oracle:oinstall /mount_point/flash_recovery_area # chmod 775 /mount_point/flash_recovery_area
Making the oracle user the owner of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.
When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed NFS configuration for Oracle Database shared storage.
The following subsections describe how to configure Oracle Database files on raw devices.
Before installing the Oracle Database 11g release 1 (11.1) software with Oracle RAC, create enough partitions of specific sizes to support your database, and also leave a few spare partitions of the same size for future expansion. For example, if you have space on your shared disk array, then select a limited set of standard partition sizes for your entire database. Partition sizes of 50 MB, 100 MB, 500 MB, and 1 GB are suitable for most databases. Also, create a few very small and a few very large spare partitions that are (for example) 1 MB and perhaps 5 GB or greater in size. Based on your plans for using each partition, determine the placement of these spare partitions by combining different sizes on one disk, or by segmenting each disk into same-sized partitions.
Note:
Be aware that each instance has its own redo log files, but all instances in a cluster share the control files and data files. In addition, each instance's online redo log files must be readable by all other instances to enable recovery.
In addition to the minimum required number of partitions, you should configure spare partitions. Doing this enables you to perform emergency file relocations or additions if a tablespace data file becomes full.
Note:
For new installations, Oracle recommends that you do not use raw devices for database files.
Table 5-3 lists the number and size of the shared partitions that you must configure for database files.
Table 5-3 Shared Devices or Logical Volumes Required for Database Files on AIX
Number | Partition Size (MB) | Purpose |
---|---|---|
1 | 500 | SYSTEM tablespace |
1 | 300 + (Number of instances * 250) | SYSAUX tablespace |
Number of instances | 500 | UNDOTBSn tablespace (one undo tablespace for each instance) |
1 | 250 | TEMP tablespace |
1 | 160 | EXAMPLE tablespace |
1 | 120 | USERS tablespace |
2 * number of instances | 120 | Two online redo log files for each instance |
2 | 110 | First and second control files |
1 | 5 | Server parameter file (SPFILE) |
1 | 5 | Password file |
Note:
If you prefer to use manual undo management, instead of automatic undo management, then, instead of the UNDOTBSn shared storage devices, you must create a single rollback segment tablespace (RBS) on a shared storage device partition that is at least 500 MB in size.
This section describes how to configure disks for use with Automatic Storage Management. Before you configure the disks, you must determine the number of disks and the amount of free disk space that you require. The following sections describe how to identify the requirements and configure the disks:
Identifying Storage Requirements for Automatic Storage Management
Configuring Database File Storage for Automatic Storage Management and Raw Devices
Note:
Although this section refers to disks, you can also use zero-padded files on a certified NAS storage device in an Automatic Storage Management disk group. Refer to Oracle Database Installation Guide for AIX 5L Based Systems (64-Bit) for information about creating and configuring NAS-based files for use in an Automatic Storage Management disk group.
If you intend to use Hitachi HDLM (dmlf devices) for storage, then ASM instances do not automatically discover the physical disks, but instead discover only the logical volume manager (LVM) disks. This is because the physical disks can only be opened by programs running as root.
Physical disk paths have path names similar to the following:
/dev/rdlmfdrv8 /dev/rdlmfdrv9
To identify the storage requirements for using Automatic Storage Management, you must determine how many devices and the amount of free disk space that you require. To complete this task, follow these steps:
Determine whether you want to use Automatic Storage Management for Oracle Database files, recovery files, or both.
Note:
You do not have to use the same storage mechanism for database files and recovery files. You can use the file system for one file type and Automatic Storage Management for the other.
For Oracle RAC installations, if you choose to enable automated backups and you do not have a shared file system available, you must choose Automatic Storage Management for recovery file storage.
If you enable automated backups during the installation, you can choose Automatic Storage Management as the storage mechanism for recovery files by specifying an Automatic Storage Management disk group for the flash recovery area. Depending on how you choose to create a database during the installation, you have the following options:
If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to use the same Automatic Storage Management disk group for database files and recovery files, or use different disk groups for each file type.
The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.
If you select an installation method that runs Database Configuration Assistant in non-interactive mode, you must use the same Automatic Storage Management disk group for database files and recovery files.
Choose the Automatic Storage Management redundancy level that you want to use for the Automatic Storage Management disk group.
The redundancy level that you choose for the Automatic Storage Management disk group determines how Automatic Storage Management mirrors files in the disk group and determines the number of disks and amount of disk space that you require, as follows:
External redundancy
An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.
Because Automatic Storage Management does not mirror data in an external redundancy disk group, Oracle recommends that you use only RAID or similar devices that provide their own data protection mechanisms as disk devices in this type of disk group.
Normal redundancy
In a normal redundancy disk group, Automatic Storage Management uses two-way mirroring by default, to increase performance and reliability. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.
For most installations, Oracle recommends that you use normal redundancy disk groups.
High redundancy
In a high redundancy disk group, Automatic Storage Management uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.
While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.
Determine the total amount of disk space that you require for the database files and recovery files.
Use the following table to determine the minimum number of disks and the minimum disk space requirements for installing the starter database:
Redundancy Level | Minimum Number of Disks | Database Files | Recovery Files | Both File Types |
---|---|---|---|---|
External | 1 | 1.15 GB | 2.3 GB | 3.45 GB |
Normal | 2 | 2.3 GB | 4.6 GB | 6.9 GB |
High | 3 | 3.45 GB | 6.9 GB | 10.35 GB |
For Oracle RAC installations, you must also add additional disk space for the Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):
15 + (2 * number_of_disks) + (126 * number_of_Automatic_Storage_Management_instances)
For example, for a four-node Oracle RAC installation, using three disks in a high redundancy disk group, you require an additional 525 MB of disk space:
15 + (2 * 3) + (126 * 4) = 525
If an Automatic Storage Management instance is already running on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.
The following section describes how to identify existing disk groups and determine the free disk space that they contain.
Optionally, identify failure groups for the Automatic Storage Management disk group devices.
Note:
You need to complete this step only if you intend to use an installation method that runs Database Configuration Assistant in interactive mode, for example, if you intend to choose the Custom installation type or the Advanced database configuration option. Other installation types do not enable you to specify failure groups.
If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.
To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.
Note:
If you define custom failure groups, you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.
If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.
Do not specify more than one partition on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.
Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices.
For information about completing this task, refer to the "Configuring Database File Storage for Automatic Storage Management and Raw Devices" section.
If you want to store either database or recovery files in an existing Automatic Storage Management disk group, then you have the following choices, depending on the installation method that you select:
If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to create a disk group, or to use an existing one.
The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.
If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.
Note:
The Automatic Storage Management instance that manages the existing disk group can be running in a different Oracle home directory.
To determine if an existing Automatic Storage Management disk group exists, or to determine if there is sufficient disk space in a disk group, you can use Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:
View the contents of the oratab file to determine whether an Automatic Storage Management instance is configured on the system:
$ more /etc/oratab
If an Automatic Storage Management instance is configured on the system, the oratab file should contain a line similar to the following:
+ASM2:oracle_home_path:N
In this example, +ASM2 is the system identifier (SID) of the Automatic Storage Management instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Automatic Storage Management instance begins with a plus sign.
Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Automatic Storage Management instance that you want to use.
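For example, in the Bourne, Bash, or Korn shell, the settings would look similar to the following; the SID is taken from the oratab example above, and the Oracle home path is a placeholder for the path shown in your own oratab file:
$ ORACLE_SID=+ASM2 ; export ORACLE_SID
$ ORACLE_HOME=/u01/app/oracle/product/11.1.0/asm ; export ORACLE_HOME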
Connect to the Automatic Storage Management instance as the SYS user with SYSDBA privilege and start the instance if necessary:
$ $ORACLE_HOME/bin/sqlplus "SYS/SYS_password as SYSDBA"
SQL> STARTUP
Enter the following command to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:
SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.
If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.
Note:
If you are adding devices to an existing disk group, Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
To configure disks for use with Automatic Storage Management on AIX, follow these steps:
On AIX-based systems, you must apply Program Temporary Fix (PTF) U496549 or higher to your system before you use ASM.
If necessary, install the shared disks that you intend to use for the Automatic Storage Management disk group and restart the system.
To make sure that the disks are available, enter the following command on every node:
# /usr/sbin/lsdev -Cc disk
The output from this command is similar to the following:
hdisk0 Available 1A-09-00-8,0 16 Bit LVD SCSI Disk Drive hdisk1 Available 1A-09-00-9,0 16 Bit LVD SCSI Disk Drive hdisk2 Available 17-08-L SSA Logical Disk Drive
If a disk is not listed as available on any node, then enter the following command to configure the new disks:
# /usr/sbin/cfgmgr
Enter the following command on any node to identify the device names for the physical disks that you want to use:
# /usr/sbin/lspv | grep -i none
This command displays information similar to the following for each disk that is not configured in a volume group:
hdisk2 0000078752249812 None
In this example, hdisk2 is the device name of the disk and 0000078752249812 is the physical volume ID (PVID). The disks that you want to use might have a PVID, but they must not belong to a volume group.
Enter commands similar to the following to clear the PVID from each disk device that you want to use:
# /usr/sbin/chdev -l hdiskn -a pv=clear
On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:
# /usr/sbin/lspv | grep -i 0000078752249812
The output from this command should be similar to the following:
hdisk18 0000078752249812 None
Depending on how each node is configured, the device names may differ between nodes. Note that you will clear PVIDs later in this procedure.
If the device names are the same on all nodes, then enter commands similar to the following on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices you want to use with ASM datafiles:
# chown oracle:dba /dev/rhdiskn # chmod 660 /dev/rhdiskn
If PVIDs are configured, and the device name associated with the PVID for a disk that you want to use is different on any node, then you must create a new device file for the disk on each of the nodes using a common unused name.
For the new device files, choose an alternative device file name that identifies the purpose of the disk device. Table 5-4 suggests alternative device file names for each file. For database files, replace dbname in the alternative device file name with the name that you chose for the database in step 1.
Note:
Alternatively, you can choose a name that contains a number that will never be used on any of the nodes, for example hdisk99.
To create a new common device file for a disk device on all nodes, follow these steps on each node:
Enter the following command to determine the device major and minor numbers that identify the disk device, where n is the disk number for the disk device on this node:
# ls -alF /dev/*hdiskn
The output from this command is similar to the following:
brw------- 1 root system 24,8192 Dec 05 2001 /dev/hdiskn crw------- 1 root system 24,8192 Dec 05 2001 /dev/rhdiskn
In this example, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.
Enter a command similar to the following to create the new device file, specifying the new device file name and the device major and minor numbers that you identified in the previous step:
Note:
In the following example, you must specify the character c to create a character raw device file.
# mknod /dev/ora_ocr_raw_256m c 24 8192
Enter commands similar to the following to change the owner, group, and permissions on the character raw device file for the disk:
OCR:
# chown root:oinstall /dev/ora_ocr_raw_256m # chmod 640 /dev/ora_ocr_raw_256m
Voting disk or database files:
# chown oracle:dba /dev/ora_vote_raw_256m # chmod 660 /dev/ora_vote_raw_256m
Enter a command similar to the following to verify that you have created the new device file successfully:
# ls -alF /dev | grep "24,8192"
The output should be similar to the following:
brw------- 1 root system 24,8192 Dec 05 2001 /dev/hdiskn crw-r----- 1 root oinstall 24,8192 Dec 05 2001 /dev/ora_ocr_raw_256m crw------- 1 root system 24,8192 Dec 05 2001 /dev/rhdiskn
To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute listed in the following table to the value shown, depending on the disk type:
Disk Type | Attribute | Value |
---|---|---|
SSA or FAStT disks | reserve_lock | no |
ESS, EMC, HDS, CLARiiON, or MPIO-capable disks | reserve_policy | no_reserve |
To determine whether the attribute has the correct value, enter a command similar to the following on all cluster nodes for each disk device that you want to use:
# /usr/sbin/lsattr -E -l hdiskn
If the required attribute is not set to the correct value on any node, enter a command similar to one of the following on that node:
SSA and FAStT devices:
# /usr/sbin/chdev -l hdiskn -a reserve_lock=no
ESS, EMC, HDS, CLARiiON, and MPIO-capable devices:
# /usr/sbin/chdev -l hdiskn -a reserve_policy=no_reserve
Enter commands similar to the following on any node to clear the PVID from each disk device that you want to use:
# /usr/sbin/chdev -l hdiskn -a pv=clear
Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk that you want to add to the disk group:
# chown oracle:dba /dev/rhdiskn # chmod 660 /dev/rhdiskn
Note:
If you are using a multi-pathing disk driver with ASM, then ensure that you set the permissions only on the correct logical device name for the disk.
The device name associated with a disk may be different on other nodes. Ensure that you specify the correct device name on each node.
When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and voting disk. For example:
/dev/rhdisk10
When you have completed creating and configuring Automatic Storage Management with raw partitions, proceed to Chapter 6, "Installing Oracle Clusterware"
The following subsections describe how to configure raw partitions for database files.
Configuring Raw Disk Devices for Database File Storage Without HACMP or GPFS
Creating Database File Raw Logical Volumes in the New Volume Group
Importing the Database File Volume Group on the Other Cluster Nodes
Activating the Database File Volume Group in Concurrent Mode on All Cluster Nodes
Table 5-4 lists the number and size of the raw partitions that you must configure for database files.
Note:
Because each file requires exclusive use of a complete disk device, Oracle recommends that, if possible, you use disk devices with sizes that closely match the size requirements of the files that they will store. You cannot use the disks that you choose for these files for any other purpose.
Table 5-4 Raw Partitions Required for Database Files on AIX
Note:
If you prefer to use manual undo management, instead of automatic undo management, then, instead of the UNDOTBSn raw devices, you must create a single rollback segment tablespace (RBS) raw device that is at least 500 MB in size.
If you are installing Oracle RAC on an AIX cluster without HACMP or GPFS, you can use shared raw disk devices for database file storage. However, Oracle recommends that you use Automatic Storage Management to store database files in this situation. This section describes how to configure the shared raw disk devices for database files.
To configure shared raw disk devices for Oracle Clusterware files, database files, or both:
If you intend to use raw disk devices for database file storage, then specify a name for the database that you want to create.
The name that you specify must start with a letter and have no more than four characters. For example: orcl.
Identify or configure the required disk devices.
The disk devices must be shared on all of the cluster nodes.
As the root user, enter the following command on any node to identify the device names for the disk devices that you want to use:
# /usr/sbin/lspv | grep -i none
This command displays information similar to the following for each disk device that is not configured in a volume group:
hdisk17 0009005fb9c23648 None
In this example, hdisk17 is the device name of the disk and 0009005fb9c23648 is the physical volume ID (PVID).
If a disk device that you want to use does not have a PVID, then enter a command similar to the following to assign one to it:
# /usr/sbin/chdev -l hdiskn -a pv=yes
On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:
# /usr/sbin/lspv | grep -i "0009005fb9c23648"
The output from this command should be similar to the following:
hdisk18 0009005fb9c23648 None
In this example, the device name associated with the disk device (hdisk18) is different on this node.
If the device names are the same on all nodes, then enter commands similar to the following on all nodes to change the owner, group, and permissions on the character raw device files for the disk devices you want to use for database files:
# chown oracle:dba /dev/rhdiskn # chmod 660 /dev/rhdiskn
If the device name associated with the PVID for a disk that you want to use is different on any node, then you must create a new device file for the disk on each of the nodes using a common unused name.
For the new device files, choose an alternative device file name that identifies the purpose of the disk device. Table 5-3 suggests alternative device file names for each file. For database files, replace dbname in the alternative device file name with the name that you chose for the database in step 1.
Note:
Alternatively, you could choose a name that contains a number that will never be used on any of the nodes, for example hdisk99.
To create a new common device file for a disk device on all nodes, perform these steps on each node:
Enter the following command to determine the device major and minor numbers that identify the disk device, where n is the disk number for the disk device on this node:
# ls -alF /dev/*hdiskn
The output from this command is similar to the following:
brw------- 1 root system 24,8192 Dec 05 2001 /dev/hdiskn crw------- 1 root system 24,8192 Dec 05 2001 /dev/rhdiskn
In this example, the device file /dev/rhdiskn represents the character raw device, 24 is the device major number, and 8192 is the device minor number.
Enter a command similar to the following to create the new device file, specifying the new device file name (in this example, using an alternative device file name from Table 5-3) and the device major and minor numbers that you identified in the previous step:
Note:
As the following example illustrates, you must specify the character c to create a character raw device file.
# mknod /dev/dbname_example_raw_160m c 24 8192
Enter a command similar to the following to verify that you have created the new device file successfully:
# ls -alF /dev | grep "24,8192"
The output should be similar to the following:
brw------- 1 root system 24,8192 Dec 05 2001 /dev/hdiskn crw-r----- 1 root oinstall 24,8192 Dec 05 2001 /dev/ora_ocr_raw_256m crw------- 1 root system 24,8192 Dec 05 2001 /dev/rhdiskn
To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute listed in the following table to the value shown, depending on the disk type:
Disk Type | Attribute | Value |
---|---|---|
SSA, FAStT, or non-MPIO-capable disks | reserve_lock | no |
ESS, EMC, HDS, CLARiiON, or MPIO-capable disks | reserve_policy | no_reserve |
To determine whether the attribute has the correct value, enter a command similar to the following on all cluster nodes for each disk device that you want to use:
# /usr/sbin/lsattr -E -l hdiskn
If the required attribute is not set to the correct value on any node, then enter a command similar to one of the following on that node:
SSA and FAStT devices
# /usr/sbin/chdev -l hdiskn -a reserve_lock=no
ESS, EMC, HDS, CLARiiON, and MPIO-capable devices
# /usr/sbin/chdev -l hdiskn -a reserve_policy=no_reserve
Enter commands similar to the following on any node to clear the PVID from each disk device that you want to use:
# /usr/sbin/chdev -l hdiskn -a pv=clear
If you are using raw disk devices for database files, then follow these steps to create the Database Configuration Assistant raw device mapping file:
Note:
You must complete this procedure only if you are using raw devices for database files. The Database Configuration Assistant raw device mapping file enables Database Configuration Assistant to identify the appropriate raw disk device for each database file. You do not specify the raw devices for the Oracle Clusterware files in the Database Configuration Assistant raw device mapping file.
Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:
Bourne, Bash, or Korn shell:
$ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
C shell:
% setenv ORACLE_BASE /u01/app/oracle
Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:
# mkdir -p $ORACLE_BASE/oradata/dbname
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata
In this example, dbname is the name of the database that you chose previously.
Change directory to the $ORACLE_BASE/oradata/dbname directory.
Using any text editor, create a text file similar to the following that identifies the disk device file name associated with each database file.
Oracle recommends that you use a file name similar to dbname_raw.conf for this file.
Note:
The following example shows a sample mapping file for a two-instance Oracle RAC cluster. Some of the devices use alternative disk device file names. Ensure that the device file name that you specify identifies the same disk device on all nodes.
system=/dev/rhdisk11
sysaux=/dev/rhdisk12
example=/dev/rhdisk13
users=/dev/rhdisk14
temp=/dev/rhdisk15
undotbs1=/dev/rhdisk16
undotbs2=/dev/rhdisk17
redo1_1=/dev/rhdisk18
redo1_2=/dev/rhdisk19
redo2_1=/dev/rhdisk20
redo2_2=/dev/rhdisk22
control1=/dev/rhdisk23
control2=/dev/rhdisk24
spfile=/dev/dbname_spfile_raw_5m
pwdfile=/dev/dbname_pwdfile_raw_5m
In this example, dbname is the name of the database.
Use the following guidelines when creating or editing this file:
Each line in the file must have the following format:
database_object_identifier=device_file_name
The alternative device file names suggested in Table 5-4 include the database object identifier that you must use in this mapping file. For example, in the following alternative disk device name, redo1_1 is the database object identifier:
/dev/rac_redo1_1_raw_120m
For an Oracle RAC database, the file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.
Specify at least two control files (control1, control2).
To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.
Save the file and note the file name that you specified.
When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
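For example, in the Bourne, Bash, or Korn shell, the setting would look similar to the following; the path assumes the directory and file name used in the previous steps:
$ DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf ; export DBCA_RAW_CONFIG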
When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:
/dev/rhdisk10
Note:
To use raw logical volumes for database file storage, HACMP must be installed and configured on all cluster nodes.
This section describes how to configure raw logical volumes for database file storage. The procedures in this section describe how to create a new volume group that contains the logical volumes required for both types of files.
Before you continue, review the following guidelines which contain important information about using volume groups with this release of Oracle RAC:
You must use concurrent-capable volume groups for database files.
Oracle Clusterware files require less than 560 MB of disk space, with external redundancy. To make efficient use of the disk space in a volume group, Oracle recommends that you use the same volume group for the logical volumes for both the Oracle Clusterware files and the database files.
If you are upgrading a database, then you must also create a new logical volume for the SYSAUX tablespace. Refer to the "Configuring Raw Logical Volumes for Database File Storage" section for more information about the requirements for the SYSAUX logical volumes.
See Also:
The HACMP documentation for information about removing a volume group from a concurrent resource group.
You must use a HACMP concurrent resource group to activate new or existing volume groups that contain only database files (not Oracle Clusterware files).
See Also:
The HACMP documentation for information about adding a volume group to a new or existing concurrent resource group.
All volume groups that you intend to use for database files must be activated in concurrent mode before you start the installation.
The procedures in this section describe how to create basic volume groups and volumes. If you want to configure more complex volumes (for example, using mirroring), then use this section in conjunction with the HACMP documentation.
To create a volume group for the Oracle Database files:
If necessary, install the shared disks that you intend to use.
To ensure that the disks are available, enter the following command on every node:
# /usr/sbin/lsdev -Cc disk
The output from this command is similar to the following:
hdisk0 Available 1A-09-00-8,0 16 Bit LVD SCSI Disk Drive hdisk1 Available 1A-09-00-9,0 16 Bit LVD SCSI Disk Drive hdisk2 Available 17-08-L SSA Logical Disk Drive
If a disk is not listed as available on any node, then enter the following command to configure the new disks:
# /usr/sbin/cfgmgr
Enter the following command on any node to identify the device names and any associated volume group for each disk:
# /usr/sbin/lspv
The output from this command is similar to the following:
hdisk0 0000078752249812 rootvg hdisk1 none none hdisk4 00034b6fd4ac1d71 ccvg1
For each disk, this command shows:
The disk device name
Either the 16 character physical volume identifier (PVID) if the disk has one, or none
Either the volume group to which the disk belongs, or none
The disks that you want to use may have a PVID, but they must not belong to existing volume groups.
If a disk that you want to use for the volume group does not have a PVID, then enter a command similar to the following to assign one to it:
# /usr/sbin/chdev -l hdiskn -a pv=yes
To identify used device major numbers, enter the following command on each node of the cluster:
# ls -la /dev | more
This command displays information about all configured devices, similar to the following:
crw-rw---- 1 root system 45, 0 Jul 19 11:56 vg1
In this example, 45 is the major number of the vg1 volume group device.
Identify an appropriate major number that is unused on all nodes in the cluster.
To create a volume group, enter a command similar to the following, or use SMIT (smit mkvg):
# /usr/sbin/mkvg -y VGname -B -s PPsize -V majornum -n \ -C PhysicalVolumes
The following table describes the options and variables used in this example. Refer to the mkvg man page for more information about these options.
Command Option | SMIT Field | Sample Value and Description |
---|---|---|
-y VGname | VOLUME GROUP name | oracle_vg1. Specify the name for the volume group. The name that you specify could be a generic name, as shown, or for a database volume group, it could specify the name of the database that you intend to create. |
-B | Create a big VG format Volume Group | Specify this option to create a big VG format volume group. Note: If you are using SMIT, then choose yes for this field. |
-s PPsize | Physical partition SIZE in megabytes | 32. Specify the size of the physical partitions for the database. The sample value shown enables you to include a disk up to 32 GB in size (32 MB * 1016). |
-V Majornum | Volume Group MAJOR NUMBER | 46. Specify the device major number for the volume group that you identified in Step 7. |
-n | Activate volume group AUTOMATICALLY at system restart | Specify this option to prevent the volume group from being activated at system restart. Note: If you are using SMIT, then choose no for this field. |
-C | Create VG Concurrent Capable | Specify this option to create a concurrent capable volume group. Note: If you are using SMIT, then choose yes for this field. |
PhysicalVolumes | PHYSICAL VOLUME names | hdisk3 hdisk4. Specify the device names of the disks that you want to add to the volume group. |
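For example, using the sample values from the preceding table, the command would be similar to the following; the volume group name, major number, and disk names are placeholders for your own values:
# /usr/sbin/mkvg -y oracle_vg1 -B -s 32 -V 46 -n -C hdisk3 hdisk4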
Enter a command similar to the following to vary on the volume group that you created:
# /usr/sbin/varyonvg VGname
To create the required raw logical volumes in the new volume group:
Choose a name for the database that you want to create.
The name that you choose must start with a letter and have no more than four characters, for example, orcl.
Identify the logical volumes that you must create.
Table 5-5 lists the number and size of the logical volumes that you must create for database files.
Table 5-5 Raw Logical Volumes Required for Database Files
To create each required logical volume for data files, Oracle recommends that you use a command similar to the following to create logical volumes with a zero offset:
# /usr/sbin/mklv -y LVname -T O -w n -s n -r n VGname NumPPs
In this example:
LVname is the name of the logical volume that you want to create
The -T O option specifies that the device subtype should be z, which causes Oracle to use a zero offset when accessing this raw logical volume
VGname is the name of the volume group where you want to create the logical volume
NumPPs is the number of physical partitions to use
To determine the value to use for NumPPs, divide the required size of the logical volume by the size of the physical partition and round the value up to an integer. For example, if the size of the physical partition is 32 MB and you want to create a 500 MB logical volume, then you should specify 16 for NumPPs (500/32 = 15.625, rounded up to 16).
Using a zero offset improves database performance and fixes the issues described in Oracle bug 2620053.
Note:
On raw logical volumes, if you create tablespaces in datafiles that are not created in this way, a message is recorded in the alert.log file.
If you prefer, you can also use the command smit mklv to create raw logical volumes.
The following example shows the command used to create a logical volume for the SYSAUX tablespace of the test database in the oracle_vg1 volume group with a physical partition size of 32 MB (800/32 = 25):
# /usr/sbin/mklv -y test_sysaux_raw_800m -T O -w n -s n -r n oracle_vg1 25
Change the owner, group, and permissions on the character device files associated with the logical volumes that you created, as follows:
Note:
The device file associated with the Oracle Cluster Registry must be owned by root. All other device files must be owned by the Oracle software owner user (oracle).
# chown oracle:dba /dev/rdbname* # chmod 660 /dev/rdbname*
To make the database file volume group available to all nodes in the cluster, you must import it on each node, as follows:
Because the physical volume names may be different on the other nodes, enter the following command to determine the PVID of the physical volumes used by the volume group:
# /usr/sbin/lspv
Note the PVIDs of the physical devices used by the volume group.
To vary off the volume group that you want to use, enter a command similar to the following on the node where you created it:
# /usr/sbin/varyoffvg VGname
On each cluster node, complete the following steps:
Enter the following command to determine the physical volume names associated with the PVIDs you noted previously:
# /usr/sbin/lspv
On each node of the cluster, enter commands similar to the following to import the volume group definitions:
# /usr/sbin/importvg -y VGname -V MajorNumber PhysicalVolume
In this example, MajorNumber is the device major number for the volume group and PhysicalVolume is the name of one of the physical volumes in the volume group.
For example, to import the definition of the oracle_vg1 volume group with device major number 45 on the hdisk3 and hdisk4 physical volumes, enter the following command:
# /usr/sbin/importvg -y oracle_vg1 -V 45 hdisk3
Change the owner, group, and permissions on the character device files associated with the logical volumes you created, as follows:
# chown oracle:dba /dev/rdbname* # chmod 660 /dev/rdbname*
Enter the following command to ensure that the volume group will not be activated by the operating system when the node starts:
# /usr/sbin/chvg -a n VGname
To activate the volume group in concurrent mode on all cluster nodes, enter the following command on each node:
# /usr/sbin/varyonvg -c VGname
Note:
You must complete this procedure only if you are using raw devices for database files. You do not specify the raw devices for the Oracle Clusterware files in the Database Configuration Assistant raw device mapping file.
To enable Database Configuration Assistant to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:
Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:
Bourne, Bash, or Korn shell:
$ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
C shell:
% setenv ORACLE_BASE /u01/app/oracle
Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:
# mkdir -p $ORACLE_BASE/oradata/dbname
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata
In this example, dbname is the name of the database that you chose previously.
Change directory to the $ORACLE_BASE/oradata/dbname directory.
Enter the following command to create a text file that you can use to create the raw device mapping file:
# find /dev -user oracle -name 'r*' -print > dbname_raw.conf
Edit the dbname_raw.conf file in any text editor to create a file similar to the following:
Note:
The following example shows a sample mapping file for a two-instance Oracle RAC cluster.
system=/dev/rdbname_system_raw_500m
sysaux=/dev/rdbname_sysaux_raw_800m
example=/dev/rdbname_example_raw_160m
users=/dev/rdbname_users_raw_120m
temp=/dev/rdbname_temp_raw_250m
undotbs1=/dev/rdbname_undotbs1_raw_500m
undotbs2=/dev/rdbname_undotbs2_raw_500m
redo1_1=/dev/rdbname_redo1_1_raw_120m
redo1_2=/dev/rdbname_redo1_2_raw_120m
redo2_1=/dev/rdbname_redo2_1_raw_120m
redo2_2=/dev/rdbname_redo2_2_raw_120m
control1=/dev/rdbname_control1_raw_110m
control2=/dev/rdbname_control2_raw_110m
spfile=/dev/rdbname_spfile_raw_5m
pwdfile=/dev/rdbname_pwdfile_raw_5m
In this example, dbname is the name of the database.
Use the following guidelines when creating or editing this file:
Each line in the file must have the following format:
database_object_identifier=logical_volume
The logical volume names suggested in this manual include the database object identifier that you must use in this mapping file. For example, in the following logical volume name, redo1_1 is the database object identifier:
/dev/rrac_redo1_1_raw_120m
The file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.
Specify at least two control files (control1, control2).
To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.
Save the file and note the file name that you specified.
When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
With the release of Oracle Database 11g and Oracle RAC release 11g, configuring raw devices using Database Configuration Assistant is not supported.
As the oracle user, use the following command syntax to start Cluster Verification Utility (CVU) stage verification to check hardware, operating system, and storage setup:
/mountpoint/runcluvfy.sh stage -post hwos -n node_list [-verbose]
In the preceding syntax example, replace the variable node_list with the names of the nodes in your cluster, separated by commas. For example, to check the hardware and operating system of a two-node cluster with nodes node1 and node2, with the mountpoint /mnt/dvdrom/, and with the option to limit the output to the test results, enter the following command:
$ /mnt/dvdrom/runcluvfy.sh stage -post hwos -n node1,node2
Select the option -verbose to receive detailed reports of the test results, and progress updates about the system checks performed by Cluster Verification Utility.
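For example, to run the same check with detailed output:
$ /mnt/dvdrom/runcluvfy.sh stage -post hwos -n node1,node2 -verbose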