Oracle® Real Application Clusters Installation and Configuration Guide 10g Release 1 (10.1) for AIX-Based Systems, hp HP-UX PA-RISC (64-bit), hp Tru64 UNIX, Linux, Solaris Operating System (SPARC 64-bit), and Windows (32-bit) Platforms Part Number B10766-02
This chapter describes the procedures for installing Cluster Ready Services (CRS) on UNIX, phase one of the Oracle Database 10g Real Application Clusters installation on UNIX-based systems.
Perform the following procedures to complete phase one of the installation of the Oracle Database 10g with RAC.
Verify user equivalence by executing the ssh command on the local node with the date command argument, using the following syntax:

ssh node_name date

The output from this command should be the timestamp of the remote node identified by the value that you use for node_name. If ssh is in the /usr/local/bin directory, then use ssh to configure user equivalence. You cannot use ssh to verify user equivalence if ssh is in another location in your PATH; in that case, use rsh to confirm user equivalence.
Note: When you test user equivalence by executing the ssh or rsh commands, the system should not respond with questions, nor should you see any additional output beyond the output of the date command.
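The per-node check above can be sketched as a small loop; the NODES list is a hypothetical placeholder, not a value from this guide:

```shell
#!/bin/sh
# NODES is a hypothetical placeholder -- set it to your own cluster
# members, for example: NODES="racnode1 racnode2"
NODES="${NODES:-}"

for node in $NODES; do
  # With user equivalence configured correctly, each call prints only
  # the remote node's timestamp: no password prompt, no questions,
  # and no additional output.
  ssh "$node" date
done
```

If ssh is not in /usr/local/bin, the same loop can be run with rsh in place of ssh, as described above.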
In addition to the host machine's public internet protocol (IP) address, obtain two more IP addresses for each node that is going to be part of your installation. During the installation, enter the IP addresses into DNS. One of the IP addresses must be a public IP address for the node's virtual IP address (VIP). Oracle uses VIPs for client-to-database connections. Therefore, the VIP address must be publicly accessible. The other address must be a private IP address for inter-node, or instance-to-instance Cache Fusion traffic. Using public interfaces for Cache Fusion can cause performance problems.
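As an illustration only, the three addresses for a two-node cluster might be laid out as follows; every host name and address here is a hypothetical example, not a value from this guide:

```
# Public addresses (primary host names)
143.46.43.100   racnode1
143.46.43.101   racnode2

# Virtual IP (VIP) addresses -- public subnet, used for client connections
143.46.43.104   racnode1-vip
143.46.43.105   racnode2-vip

# Private interconnect addresses -- private subnet, Cache Fusion traffic
10.0.0.1        racnode1-priv
10.0.0.2        racnode2-priv
```

During installation these names and addresses would be entered into DNS as described above; the /etc/hosts-style layout is shown only to make the public, VIP, and private split concrete.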
If you are using Sun Clusters, then you must install the Oracle-provided UDLM patch onto each node that is part of your current cluster installation. You must install the UDLM patch before you install Cluster Ready Services. Even if you have a pre-Oracle Database 10g UDLM, you must install the Oracle Database 10g UDLM.
Install the UDLM patch using the procedures in the README file that is located in the /racpatch directory on the Oracle Cluster Ready Services Release 1 (10.1.0.2) CD-ROM.
Note: You must stop the cluster and upgrade the UDLM one node at a time if you are upgrading from a previous UDLM version.
This section describes the procedures for using the Oracle Universal Installer (OUI) to install CRS. Note that the CRS home that you identify in this phase of the installation is only for the CRS software; it cannot be the same as the Oracle home that you will use in phase two to install the Oracle Database 10g software with RAC.
Note: You cannot install Oracle Database 10g Cluster Ready Services software on an Oracle Database 10g cluster file system.
If you are installing CRS on a node that already has a single-instance Oracle Database 10g installation, stop the existing ASM instances and the Cluster Synchronization Services (CSS) daemon, then run the $ORACLE_HOME/bin/localconfig delete script in the home that is running CSS to reset the OCR configuration information. After CRS is installed, start the ASM instances again; they will then use the cluster CSS daemon instead of the daemon for the single-instance Oracle database.
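A minimal sketch of that reset step, assuming a hypothetical single-instance home path (substitute your actual home, and run it as the root user after stopping the ASM instances and the CSS daemon):

```shell
#!/bin/sh
# Hypothetical single-instance Oracle home that is currently running CSS;
# replace this path with your actual home before use.
ORACLE_HOME="${ORACLE_HOME:-/u01/app/oracle/product/10.1.0/db_1}"

if [ -x "$ORACLE_HOME/bin/localconfig" ]; then
  # Resets the local-only OCR configuration information so that, once
  # CRS is installed, the ASM instances use the cluster CSS daemon.
  "$ORACLE_HOME/bin/localconfig" delete
else
  echo "localconfig not found under $ORACLE_HOME/bin" >&2
fi
```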
Run the runInstaller command from the /crs subdirectory on the Oracle Cluster Ready Services Release 1 (10.1.0.2) CD-ROM. This is a separate CD that contains the Cluster Ready Services software. When the OUI displays the Welcome page, click Next.
Depending on whether your environment has an Oracle inventory, the following scenarios apply:
If you are performing this installation in an environment where the OUI inventory is already set up, then the OUI displays the Specify File Locations page. If the Specify File Locations page appears, proceed to Step 5.
If you are performing this installation in an environment in which you have never installed Oracle database software, in other words, an environment that does not have an OUI inventory, then the OUI displays the Specify Inventory Directory and Credentials page. Enter the inventory location and the UNIX group name information into this page, click Next, and the OUI displays a dialog.
The OUI dialog indicates that you should run the oraInventory location/orainstRoot.sh script. Run the orainstRoot.sh script as the root user, click Continue, and the OUI displays the Specify File Locations page.
The Specify File Locations page contains predetermined information for the source of the installation files and the target destination. Enter the CRS home name and its location in the target destination, click Next, and the OUI displays the Language Selection page.
Note: The CRS home that you identify in this step must be different from the Oracle home that you will use in phase two of the installation.
In the Language Selection page select the languages that you want CRS to use, click Next, and the OUI displays the Cluster Configuration Information page.
The Cluster Configuration Information page contains pre-defined node information if the OUI detects that your system has vendor clusterware. Otherwise, the OUI displays the Cluster Configuration Information page without pre-defined node information.
If you install the clusterware in this installation session without using vendor clusterware, then enter a public node name and a private node name for each node. When you enter the public node name, use the primary host name of each node; in other words, use the name displayed by the hostname command. This node name can be either the permanent or the virtual host name.
In addition, the cluster name that you use must be globally unique throughout the enterprise, and the allowable character set for the cluster name is the same as that for host names. For example, you cannot use the characters ! @ # % ^ & * or ( ). Oracle recommends that you use the vendor cluster name if one exists. Make sure that you also enter a private node name or private IP address for each node. This is an address that is accessible only by the other nodes in this cluster. Oracle uses the private IP addresses for Cache Fusion processing. Click Next after you have entered the cluster configuration information, and the OUI performs validation checks, such as node availability and remote Oracle home permission verifications. These verifications may require some time to complete. When the OUI completes the verifications, it displays the Private Interconnect Enforcement page.
Note: The IP addresses that you use for all of the nodes in the current installation process must be from the same subnet.
In the Private Interconnect Enforcement page, the OUI displays a list of cluster-wide interfaces. Use the drop-down menus on this page to classify each interface as Public, Private, or Do Not Use. The default setting for each interface is Do Not Use. You must classify at least one interconnect as Public and one as Private.
When you click Next on the Private Interconnect Enforcement page, the OUI looks for the ocr.loc file. The OUI looks in the /var/opt/oracle directory in HP-UX and Solaris Operating System (SPARC 64-bit) environments; on other UNIX systems, the OUI looks for the ocr.loc file in the /etc directory. If the ocr.loc file exists, and if it has a valid entry for the Oracle Cluster Registry (OCR) location, then the Voting Disk Location page appears and you should proceed to Step 10.
Otherwise, the Oracle Cluster Registry Location Information page appears. Enter a complete path for the raw device or shared file system file for the Oracle Cluster Registry, click Next, and the Voting Disk Information page appears.
On the Voting Disk Information Page, enter a complete path and file name for the file in which you want to store the voting disk and click Next. This must be a shared raw device or a shared file system file.
Note: The storage size for the OCR should be at least 100MB, and the storage size for the voting disk should be at least 20MB. In addition, Oracle recommends that you use a RAID array for storing the OCR and the voting disk to ensure the continuous availability of the partitions.
See Also: The pre-installation chapters in Part II for information about the minimum raw device sizes
After you complete the Voting Disk Information page and click Next, if the Oracle inventories on the remote nodes are not set up, then the OUI displays a dialog asking you to run the orainstRoot.sh script on all of the nodes. After the orainstRoot.sh script processing completes, the OUI displays a Summary page.
Verify the list of components that the OUI will install on the Summary page, then click Install.
During the installation, the OUI first copies the software to the local node, and then copies it to the remote nodes. The OUI then displays a dialog indicating that you must run the root.sh script on all of the nodes that are part of this installation. Execute the root.sh script on one node at a time, and click OK in the dialog that root.sh displays after it completes each session. Start root.sh on another node only after the previous root.sh execution completes; do not execute root.sh on more than one node at a time. When you complete the final execution of root.sh, the root.sh script runs the following assistants without your intervention:
Oracle Cluster Registry Configuration Tool (ocrconfig)—If this tool detects a 9.2.0.2 version of RAC, then the tool upgrades the 9.2.0.2 OCR block format to an Oracle Database 10g OCR block format.
Cluster Configuration Tool (clscfg)—This tool automatically configures your cluster and creates the OCR keys.
When the OUI displays the End of Installation page, click Exit to exit the Installer.
Verify your CRS installation by executing the olsnodes command from the CRS Home/bin directory. The olsnodes command syntax is:

olsnodes [-n] [-l] [-v] [-g]

Where:

-n displays the member number with the member name
-l displays the local node name
-v activates verbose mode
-g activates logging
The output from this command should be a listing of the nodes on which CRS was installed.
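As a sketch, assuming a hypothetical CRS home of /u01/app/crs (substitute the CRS home that you entered during the installation):

```shell
#!/bin/sh
# CRS_HOME is a hypothetical path -- use the CRS home from your install.
CRS_HOME="${CRS_HOME:-/u01/app/crs}"

if [ -x "$CRS_HOME/bin/olsnodes" ]; then
  # -n lists each member name together with its member number.
  "$CRS_HOME/bin/olsnodes" -n
else
  echo "olsnodes not found under $CRS_HOME/bin" >&2
fi
```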
After you run root.sh on all of the nodes and click OK on the last root.sh dialog, the OUI runs the Oracle Notification Server Configuration Assistant and the Oracle Private Interconnect Configuration Assistant. These assistants run without user intervention.
At this point, you have completed phase one, the installation of Cluster Ready Services, and are ready to install the Oracle Database 10g with RAC as described in Chapter 10, "Installing Oracle Database 10g with Real Application Clusters".
The following processes must be running in your environment after the CRS installation in order for Cluster Ready Services to function:
oprocd -- Process monitor for the cluster. Note that this process appears only on platforms that do not use vendor clusterware with CRS.
evmd -- Event manager daemon that starts the racgevt process to manage callouts.
ocssd -- Manages cluster node membership and runs as the oracle user; failure of this process results in a cluster restart.
crsd -- Performs high availability recovery and management operations, such as maintaining the OCR. Also manages application resources; runs as the root user and restarts automatically upon failure.
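A generic way to confirm that these daemons are up is to scan the process list. This check is a sketch, not a command from this guide; on platforms that use vendor clusterware, oprocd will legitimately be absent:

```shell
#!/bin/sh
# Look for the four CRS processes described above.
found=$(ps -ef | grep -E 'oprocd|evmd|ocssd|crsd' | grep -v grep || true)

if [ -n "$found" ]; then
  echo "$found"
else
  echo "no CRS daemons found -- CRS may not be running" >&2
fi
```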