
Oracle® Application Server 10g Installation Guide
10g (9.0.4) for hp HP-UX PA-RISC (64-bit) and Linux x86
Part No. B10842-03

9 Installing in High Availability Environments

This chapter describes how to install OracleAS Infrastructure 10g in the following high availability environments:

  • Section 9.2, "OracleAS Cold Failover Cluster"

  • Section 9.3, "OracleAS Active Failover Cluster"

Section 9.1, "Requirements for High Availability Environments" describes requirements applicable to both high availability environments.

9.1 Requirements for High Availability Environments

This section describes the requirements that you have to meet before you can install Oracle Application Server in an OracleAS Active Failover Cluster or OracleAS Cold Failover Cluster environment. In addition to these common requirements, each environment has its own specific requirements. See the individual sections for details.


Note:

You still need to meet the requirements listed in Chapter 4, "Requirements", plus requirements specific to the high availability environment that you plan to use.

The common requirements are:

9.1.1 Check Minimum Number of Nodes

You need at least two nodes in a high availability environment. If a node fails for any reason, the second node takes over.

9.1.2 Check That Clusterware Is Running

Each node in a cluster must be running a certified clusterware. The following clusterware is certified:

Platform   OracleAS Cold Failover Cluster   OracleAS Active Failover Cluster
HP-UX      HP Serviceguard                  HP Serviceguard Extension for RAC (formerly called Serviceguard OPS Edition)
Linux      Red Hat Cluster Manager          Oracle Cluster Management Software (see Appendix J for more information)

For the most up-to-date list of certified clusterware, check the OracleAS clusterware certification page of OracleMetaLink (http://metalink.oracle.com).

9.1.2.1 Checking HP Serviceguard on HP-UX

Enter the following command as root to make sure that HP Serviceguard is running:

# /usr/sbin/cmviewcl

The output of this command should list the cluster and indicate that the cluster has the status up. It should also list each node of the cluster. The following example shows the status of a two-node cluster:

CLUSTER      STATUS 
iAS_Cluster  up 

   NODE         STATUS       STATE        GMS_STATE 
   oappsvr1     up           running      halted 
   oappsvr2     up           running      halted 

9.1.2.2 Checking Red Hat Cluster Manager on Linux

Enter the following command to make sure that Red Hat Cluster Manager is running:

On Red Hat 2.1:

$ /sbin/service cluster status

On Red Hat 3.0:

$ /sbin/service clumanager status

The output of this command should indicate that all processes are running.

9.1.2.3 Checking Oracle Cluster Management Software on Linux

Enter the following commands to make sure that Oracle Cluster Management Software is running:

$ ps -ef | grep oracm
$ ps -ef | grep oranm
$ ps -ef | grep watchdogd

The output of these commands should indicate that at least one instance of each of the oracm, oranm, and watchdogd processes exists.

9.1.3 Check That Groups Are Defined Identically on All Nodes

Check that the /etc/group file on all nodes in the cluster contains the operating system groups that you plan to use. You should have one group for the Oracle Installer Inventory directory, and one or two groups for database administration. The group names and the group IDs must be the same for all nodes.

See Section 4.5, "Operating System Groups" for details.
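As a quick check, you can list the relevant entries on each node and compare them. The group names (oinstall and dba) and the IDs shown below are examples only; the names and group IDs printed must be identical on every node:

$ grep oinstall /etc/group
oinstall:x:5000:
$ grep dba /etc/group
dba:x:8400:oracle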

9.1.4 Check the Properties of the oracle User

Check that the oracle operating system user, which you log in as to install Oracle Application Server, has the following properties:

  • Belongs to the oinstall group and to the osdba group. The oinstall group is for the Oracle Installer Inventory directory, and the osdba group is a database administration group. See Section 4.5, "Operating System Groups" for details.

  • Has write privileges on remote directories.

  • If the TMP or TMPDIR environment variables are set for the oracle user, check that these directories exist and that they contain sufficient free disk space for temporary files. Check this condition on each node of the cluster.
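For example, the following commands, run as the oracle user on each node, show the directory that will be used for temporary files and its free space (/tmp is assumed if neither variable is set):

$ echo ${TMPDIR:-${TMP:-/tmp}}
$ df -k ${TMPDIR:-${TMP:-/tmp}}

On HP-UX, you can use the bdf command instead of df -k.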

9.1.5 Check for Previous Oracle Installations on All Nodes

Details of all Oracle software installations are recorded in the Oracle Installer Inventory directory. Typically, this directory is unique to a node and named oraInventory. Its path is stored in the oraInst.loc file, which is located in the /etc directory on Linux and in the /var/opt/oracle directory on HP-UX. The existence of this file on a node confirms that the node contains some Oracle software installation. Because the OracleAS Infrastructure 10g high availability environments require installations on multiple nodes, with Oracle Installer Inventory directories on a file system that may not be accessible from other nodes, the installation instructions in this chapter assume that no Oracle software has previously been installed on any of the nodes used for this high availability environment. The oraInst.loc file and the Oracle Installer Inventory directory should not exist on any of these nodes prior to these high availability installations.


Note:

On Linux you must install Oracle Cluster Management Software before installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster environment. That installation creates a new oraInst.loc file. Do not rename the oraInst.loc file as described in this section if you are installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster environment on Linux.

If an oraInst.loc file and an Oracle Installer Inventory directory exist, rename the file and directory.

For example, enter the following commands as root on Linux:

# cat /etc/oraInst.loc 
inventory_loc=/localfs/app/oracle/oraInventory 
inst_group=dba 
# mv /etc/oraInst.loc /etc/oraInst.loc.orig 
# mv /localfs/app/oracle/oraInventory /localfs/app/oracle/oraInventory.orig

Since the oraInst.loc file and the Oracle Installer Inventory directories are relevant only during the installation of Oracle software, and not at runtime, renaming them and restoring them later does not affect the behavior of any installed Oracle software on any node. Make sure that the appropriate oraInst.loc file and Oracle Installer Inventory directories are in place before starting the Oracle Universal Installer.
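For example, to restore the file and directory renamed in the previous example once the high availability installations are complete, enter the following commands as root on Linux:

# mv /etc/oraInst.loc.orig /etc/oraInst.loc
# mv /localfs/app/oracle/oraInventory.orig /localfs/app/oracle/oraInventory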

9.2 OracleAS Cold Failover Cluster

An OracleAS Cold Failover Cluster environment (Figure 9-1) consists of two nodes in a hardware cluster, shared storage that both nodes can mount (one node at a time), and a virtual hostname and IP address associated with whichever node is currently active.

During normal operation, node 1, which is the primary node, is the active node. It mounts the shared storage to access the OracleAS Infrastructure 10g files, runs OracleAS Infrastructure 10g processes, and handles all requests.

If node 1 goes down for any reason, the clusterware fails over the OracleAS Infrastructure 10g processes on node 1 to node 2. Node 2 becomes the active node, mounts the shared storage, runs the processes, and handles all requests.

To access the active node in an OracleAS Cold Failover Cluster, clients, including middle tier components and applications, use the virtual hostname associated with the OracleAS Cold Failover Cluster. The virtual hostname is associated with the active node (node 1 during normal operation, node 2 if node 1 goes down). Clients do not need to know which node (primary or secondary) is servicing requests.

You also use the virtual hostname in URLs that access the infrastructure. For example, if vhost.mydomain.com is the name of the virtual host, the URLs for the Oracle HTTP Server and the Application Server Control would look like the following:

URL for                            Example URL
Oracle HTTP Server Welcome page    http://vhost.mydomain.com:7777
Oracle HTTP Server, secure mode    https://vhost.mydomain.com:4443
Application Server Control         http://vhost.mydomain.com:1810

Figure 9-1 OracleAS Cold Failover Cluster Environment


The rest of this section describes these procedures:

  • Section 9.2.1, "Setting up an OracleAS Cold Failover Cluster Environment"

  • Section 9.2.2, "Installing OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster"

  • Section 9.2.3, "Performing Post-Installation Steps for OracleAS Cold Failover Cluster"

  • Section 9.2.4, "Installing Middle Tiers Against an OracleAS Cold Failover Cluster Infrastructure"

9.2.1 Setting up an OracleAS Cold Failover Cluster Environment

Before you can install OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster, perform these procedures:

  • Section 9.2.1.1, "Map the Virtual Hostname and Virtual IP Address"

  • Section 9.2.1.2, "Set Up a File System That Can Be Mounted from Both Nodes"

Also, ensure that you meet the requirements described in Section 9.1, "Requirements for High Availability Environments".

9.2.1.1 Map the Virtual Hostname and Virtual IP Address

Each node in an OracleAS Cold Failover Cluster environment is associated with its own physical hostname and IP address. In addition, the active node in the cluster is associated with a virtual hostname and IP address. This allows clients to access the OracleAS Cold Failover Cluster using a hostname and IP address that can float between any node of the cluster.

Virtual hostnames and virtual IP addresses are any valid hostname and IP address in the context of the subnet containing the hardware cluster.


Note:

You map the virtual hostname and virtual IP address only to the active node. Do not map the virtual hostname and IP address to both active and secondary nodes at the same time. When you fail over, only then do you map the virtual hostname and IP address to the secondary node, which is now the active node.

The following example configures a node with virtual hostname vhost.mydomain.com with virtual IP address 138.1.12.191.


Note:

Before attempting to complete this procedure, ask the system or network administrator to review all the steps required. The procedure will reconfigure the network settings on the cluster nodes and may vary with differing network implementations.

  1. Register the virtual hostname and IP address with DNS for the network.

    For example, register the vhost.mydomain.com/138.1.12.191 pair with DNS.

  2. Add the following line to the /etc/hosts file on the active node:

    ip_address hostname.domain hostname
    
    

    For example:

    138.1.12.191   vhost.mydomain.com   vhost
    
    
  3. Determine the primary public network interface.

    The primary public network interface for Ethernet encapsulation is typically lan0 on HP-UX and eth0 on Linux. Use the following commands to determine the primary public network interface:

    • On HP-UX, enter the following command and search for a network interface that has an Address value of the physical hostname of the node:

      /usr/bin/netstat -i
      
      
    • On Linux, enter the following command and search for a network interface that has an inet addr value of the physical IP address of the node:

      /sbin/ifconfig
      
      
  4. Find an available index number for the primary public network interface.

    Using the same commands as described in step 3, determine an available index number for an additional IP address on the primary public network interface.

    For example, if the following is the output of the /usr/bin/netstat -i command on an HP-UX system and lan0 was determined to be the primary public interface in step 3, then lan0:2 is available for an additional IP address:

    Name     Mtu   Network       Address             Ipkts      Opkts
    lan0:1   1500  datacenter1   www1.mydomain.com   1050265    734793
    lan1*    1500  none          none                0          0
    lan0     1500  datacenter1   www2.mydomain.com   39783928   41833023
    lo0      4136  loopback      localhost           1226188    1226196

    Do not use 0 as the index number because interface:0 is typically the same as just interface on most systems. For example, lan0:0 is the same as lan0 on HP-UX.

  5. Add the virtual IP address to the primary public network interface by running the appropriate command below as the root user:


    Note:

    You must use the same NETMASK and BROADCAST values for this interface as those used for the primary public network interface (lan0 and eth0 in the examples). Modify the ifconfig commands in this step to include the appropriate netmask and broadcast options.

    • On HP-UX enter the following command using the available index number from step 4:

      /usr/sbin/ifconfig primary_public_interface:available_index ip_address
      
      

      For example, enter the following command if lan0:2 is available:

      /usr/sbin/ifconfig lan0:2 138.1.12.191
      
      
    • On Linux enter the following command using the available index number from step 4:

      /sbin/ifconfig primary_public_interface:available_index ip_address
      
      

      For example, enter the following command if eth0:1 is available:

      /sbin/ifconfig eth0:1 138.1.12.191
      
      
  6. Check that the virtual IP address is configured correctly.

    Using the same commands as listed in step 3, confirm the new entry for the primary_public_interface:available_index entry created in step 5. Additionally, try to connect to the node using the virtual hostname and virtual IP address from another node. For example, entering both of the following commands from a different node should provide a login window to the node you configured in this procedure:

    telnet hostname.domain
    telnet ip_address
    
    

    For example, enter:

    telnet vhost.mydomain.com
    telnet 138.1.12.191
    
    

Cold Failover

If the active node fails, then the secondary node takes over. You must remove the virtual IP mapping from the failed node and map it to the secondary node.


Note:

If the failed node is offline or rebooted, the first step is not required because the failed node will not be configured with the virtual hostname or IP address.

  1. On the failed node, remove the virtual IP address.

    • On HP-UX enter the following command:

      /usr/sbin/ifconfig configured_interface down
      
      

      For example, enter the following command if lan0:2 is configured with the virtual IP address:

      /usr/sbin/ifconfig lan0:2 down
      
      
    • On Linux enter the following command:

      /sbin/ifconfig configured_interface down
      
      

      For example, enter the following command if eth0:1 is configured with the virtual IP address:

      /sbin/ifconfig eth0:1 down
      
      

    Note:

    Use the commands in step 3 of the previous procedure to confirm that the virtual IP address has been removed.

  2. On the secondary node, add the virtual IP address.

    On the secondary node, follow steps 2 to 6 of the previous procedure to add and confirm the virtual IP address on the secondary node.

9.2.1.2 Set Up a File System That Can Be Mounted from Both Nodes

Although the hardware cluster has shared storage, you need to create a file system on this shared storage such that both nodes of the OracleAS Cold Failover Cluster can mount this file system. On this file system, you place the following directories:

  • OracleAS Infrastructure 10g

  • The oraInventory directory and the jre/1.1.8 directory. The installer automatically installs the jre directory at the same level as the oraInventory directory.

    For example, if you specify /mnt/app/oracle/oraInventory as the oraInventory directory, the installer installs the jre directory as /mnt/app/oracle/jre. The installer installs the 1.1.8 directory within the jre directory.

For disk space requirements for OracleAS Infrastructure 10g, see Section 4.1, "Check Hardware Requirements".

If you are running a volume manager on the cluster to manage the shared storage, refer to the volume manager documentation for steps to create a volume. Once a volume is created, you can create the file system on that volume.

If you do not have a volume manager, you can create a file system on the shared disk directly. Ensure that the hardware vendor supports this, that the file system can be mounted from either node of the OracleAS Cold Failover Cluster, and that the file system is repairable from either node in case of a crash.

To check that the file system can be mounted from either node, do the following steps:

  1. Set up and mount the file system from node 1.

  2. Unmount the file system from node 1.

  3. Mount the file system from node 2 using the same mount point that you used in step 1.

  4. Unmount it from node 2, and mount it on node 1, because you will be running the installer from node 1.
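The following commands sketch steps 1 through 4, assuming a hypothetical shared volume /dev/vg01/lvol1 (on which the file system has already been created) and the mount point /mnt/app/oracle; run them as root on the indicated node:

node1# mount /dev/vg01/lvol1 /mnt/app/oracle    # step 1: mount on node 1
node1# umount /mnt/app/oracle                   # step 2: unmount from node 1
node2# mount /dev/vg01/lvol1 /mnt/app/oracle    # step 3: mount on node 2, same mount point
node2# umount /mnt/app/oracle                   # step 4: unmount from node 2...
node1# mount /dev/vg01/lvol1 /mnt/app/oracle    # ...and remount on node 1 for the installation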


Note:

Only one node of the OracleAS Cold Failover Cluster should mount the file system at any given time. File system configuration files on all nodes of the cluster should not include an entry for the automatic mount of the file system upon a node reboot or execution of a global mount command. For example, on UNIX platforms, do not include an entry for this file system in the /etc/fstab file.

9.2.2 Installing OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster

For the OracleAS Cold Failover Cluster solution, you must install both the OracleAS Metadata Repository and the Identity Management components on the same computer at the same time by selecting the Identity Management and OracleAS Metadata Repository option in the Select Installation Type screen. This option creates a new database for the OracleAS Metadata Repository and a new Oracle Internet Directory.


Note:

For the OracleAS Cold Failover Cluster solution, you must install a new database (for the OracleAS Metadata Repository) and Oracle Internet Directory. You cannot use an existing database or Oracle Internet Directory for OracleAS Cold Failover Cluster solutions.

Follow this procedure to install OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster environment:

Table 9-1 Steps for Installing OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster


Screen Action
1. -- Start up the installer. See Section 5.15, "Starting the Oracle Universal Installer" for details.
2. Welcome Click Next.
3. Specify Inventory Directory This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the full path for the inventory directory: Enter a full path to a directory where you want the installer to store its files. The installer uses these files to keep track of all Oracle products that are installed on this computer. Enter a directory that is different from the Oracle home directory.

Note: You must enter a directory in the file system that can be mounted from either node in the OracleAS Cold Failover Cluster environment.

Example: /mnt/app/oracle/oraInventory

Click OK.

4. UNIX Group Name This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the name of the operating system group to have permission to update Oracle software installations on this system.

Example: oinstall

Click Next.

5. Run orainstRoot.sh This screen appears only if this is the first installation of any Oracle product on this computer.

Run the orainstRoot.sh script in a different shell as the root user. The script is located in the Oracle Installer Inventory directory specified in the Specify Inventory Directory screen.

Click Continue.

6. Specify File Locations Destination Name: Enter a name to identify this Oracle home.

Example: oracleas

Destination Path: Enter the full path to the destination directory. This is the Oracle home.

Notes:

  • You must enter a directory in the file system that can be mounted from either node in the OracleAS Cold Failover Cluster environment.

  • You must enter a new Oracle home name and directory. Do not select an existing Oracle home from the drop down list. If you select an existing Oracle home, the installer will not display the next screen, Specify Hardware Cluster Installation Mode.

Example: /mnt/app/oracle/OraInfra_904

Click Next.

7. Specify Hardware Cluster Installation Mode Select Single Node or Cold Failover Cluster Installation. Click Next.

If you do not see this screen, the installer was not able to determine that the current node is running a clusterware (see Section 9.1.2, "Check That Clusterware Is Running"). However, you can continue the installation. You just need to select High Availability Addressing in the Select Configuration Options screen in step 12. Also, ensure that your clusterware is running.

Note: On Linux, Single Node or Cold Failover Cluster Installation is the only option available on this screen if a certified clusterware for an OracleAS Cold Failover Cluster environment is detected.

8. Select a Product to Install Select OracleAS Infrastructure 10g to install an infrastructure.

If you need to install additional languages, click Product Languages. See Section 5.6, "Installing Additional Languages" for details.

Click Next.

9. Select Installation Type Select Identity Management and OracleAS Metadata Repository. Click Next.
10. Preview of Steps for Infrastructure Installation This screen lists the screens that the installer will display. Click Next.
11. Confirm Pre-Installation Requirements Verify that you meet all the listed requirements. Click Next.
12. Select Configuration Options Select all the components except for OracleAS Certificate Authority.

Check that High Availability Addressing is selected. If the installer displayed the Specify Hardware Cluster Installation Mode screen earlier, this option is greyed out and selected by default.

If the installer did not display the Specify Hardware Cluster Installation Mode screen, the High Availability Addressing option will not be greyed out. You must select this option.

Click Next.

13. Specify Namespace in Internet Directory Select the suggested namespace, or enter a custom namespace for the location of the default Identity Management realm.

Ensure the value shown in Suggested Namespace is valid and meets your deployment needs. If not, enter the desired value in Custom Namespace. See Section 6.15, "What Do I Enter in the "Specify Namespace in Internet Directory" Screen?".

Click Next.

14. Specify High Availability Addressing Note: This is a critical screen when installing the Infrastructure in an OracleAS Cold Failover Cluster. If you do not see this screen, return to the Select Configuration Options screen and ensure that you selected High Availability Addressing.

Enter the virtual hostname for the OracleAS Cold Failover Cluster environment.

Example: vhost.mydomain.com

Click Next.

15. Specify Privileged Operating System Groups This screen appears if you are running the installer as a user who is not in the OSDBA or the OSOPER operating system groups.

Database Administrator (OSDBA) Group:

Example: dbadmin

Database Operator (OSOPER) Group:

Example: dbadmin

Click Next.

16. Database Identification Global Database Name: Enter a name for the OracleAS Metadata Repository database. Append the domain name of your computer to the database name.

Example: asdb.mydomain.com

SID: Enter the system identifier for the OracleAS Metadata Repository database. Typically this is the same as the global database name, but without the domain name. The SID cannot be longer than eight characters.

Example: asdb

Click Next.

17. Set SYS and SYSTEM Passwords Set the passwords for these database users. Click Next.
18. Database File Location Enter or select a directory for database files: Enter a directory where you want the installer to create data files for the OracleAS Metadata Repository database.

Note: You must enter a directory in the file system that can be mounted from either node in the OracleAS Cold Failover Cluster environment.

Click Next.

19. Database Character Set Select Use the default character set. Click Next.
20. Specify Instance Name and ias_admin Password Instance Name: Enter a name for this infrastructure instance. Instance names can contain the $ (dollar) and _ (underscore) characters in addition to any alphanumeric characters. If you have more than one Oracle Application Server instance on a computer, the instance names must be unique.

Example: infra_904

ias_admin Password and Confirm Password: Enter and confirm the password for the ias_admin user. This is the administrative user for this infrastructure instance.

See Section 5.8, "The ias_admin User and Restrictions on its Password" for password requirements.

Example: welcome99

Click Next.

21. Choose JDK Home Directory (HP-UX only) Enter JDK Home: Enter the full path to the HP Java 2 SDK 1.4.1.05 (or higher) for PA-RISC installation.

Click Next.

22. Summary Verify your selections. Pay attention to any items listed in red; these indicate issues that will cause the installation to fail. In particular, check all items within Space Requirements to confirm that sufficient disk space is available for the installation.

Click Install.

23. Install Progress This screen shows the progress of the installation.
24. Run root.sh Note: Do not run the root.sh script until prompted.

When prompted, run the root.sh script in a different shell as the root user. The script is located in this instance’s Oracle home directory.

Click OK after you have run the script on all nodes.

25. Configuration Assistants This screen shows the progress of the configuration assistants. Configuration assistants configure components.
26. End of Installation Click Finish to quit the installer.

9.2.3 Performing Post-Installation Steps for OracleAS Cold Failover Cluster

Perform the following steps after installing OracleAS Cold Failover Cluster:

9.2.3.1 Edit the oraInst.loc and oratab Files on the Second Node

After the OracleAS Infrastructure 10g installation is complete, edit the oraInst.loc and oratab files on the second node. The following table shows the location of the oraInst.loc and oratab files for HP-UX and Linux:

File          Location on HP-UX   Location on Linux
oraInst.loc   /var/opt/oracle     /etc
oratab        /etc                /etc

Edit the oratab file on the second node as follows:

  1. Create or edit the /etc/oratab file.

  2. Copy the oratab entries from the installation node for the Metadata Repository, created during the OracleAS Cold Failover Cluster installation.

For example, copy the following entries from the oratab file on the installation node to the oratab file on the second node where /mnt/app/oracle/OraInfra_904 is the Oracle Home directory:

*:/mnt/app/oracle/OraInfra_904:N
asdb:/mnt/app/oracle/OraInfra_904:N

Create the oraInst.loc file on the second node by copying the oraInst.loc file from the installation node to the second node. The oraInst.loc file is not used during runtime by Oracle Application Server. It is used only by the installer.
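For example, if both nodes run Linux, the following command, run as root on the second node, copies the file from the installation node (node1 is a hypothetical node name; use scp if Remote Shell is not enabled):

# rcp node1:/etc/oraInst.loc /etc/oraInst.loc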

9.2.3.2 Create a Clusterware Agent for Automatic Failover

An OracleAS Cold Failover Cluster environment provides the framework for a manual failover of the OracleAS 10g Infrastructure. To achieve automatic failover, you must set up an agent using the clusterware. For example, the secondary node can monitor the heartbeat of the primary node; when it detects that the primary node is down, the virtual IP address, the shared storage, and all the OracleAS 10g Infrastructure processes fail over to the secondary node.

For example, an HP Serviceguard Package or a Red Hat Cluster Manager Service could be created to achieve this automatic failover. The procedures to create these agents are beyond the scope of this guide, but example agents are available from the OracleAS clusterware certification page of OracleMetaLink (http://metalink.oracle.com).

9.2.4 Installing Middle Tiers Against an OracleAS Cold Failover Cluster Infrastructure

For middle tiers to work with an OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster, you can install the middle tiers on computers outside the cluster, or on nodes within the cluster.

If you choose to install middle tiers on OracleAS Cold Failover Cluster nodes, either on the local storage or shared storage, note that the middle tiers will not be able to take advantage of any cluster benefits. If the active node fails, the middle tiers will not fail over to the other node. Middle tiers have their own high availability solutions: see the Oracle Application Server 10g High Availability Guide for details.


Note:

Oracle recommends that you do not install middle tiers on the same shared disk where you installed the OracleAS Infrastructure 10g. The reason is that when this shared disk fails over to the secondary node, the middle tier becomes inaccessible.

The best solution is to install and run middle tiers on nodes outside the OracleAS Cold Failover Cluster.

But if you want to run a middle tier on either the primary or secondary node, install it on a local disk or on a disk other than the one where you installed the OracleAS Infrastructure 10g.


9.2.4.1 If You Plan to Install Middle Tiers on OracleAS Cold Failover Cluster Nodes

If you plan to install a middle tier on an OracleAS Cold Failover Cluster node (primary or secondary), perform these tasks before installing the middle tier:

9.2.4.1.1 Create a staticports.ini File for the Middle Tier

Ensure that the ports used by the middle tier are not the same as the ports used by the infrastructure. The reason is that the infrastructure can fail over from the primary to the secondary node (and vice versa), and there must not be any port conflicts on either node. The same ports must be reserved for the infrastructure on both nodes.

If the infrastructure is running on the same node where you want to install the middle tier, the installer can detect which ports are in use and select different ports for the middle tier. For example, if the infrastructure is running on the primary node, and you run the installer on the primary node to install the middle tier, then the installer can assign different ports for the middle tier.

However, if the infrastructure is running on a node different from where you want to install the middle tier, the installer cannot detect which ports are used by the infrastructure. For example, if the infrastructure is running on the primary node but you want to install the middle tier on the secondary node, the installer is unable to detect which ports the infrastructure is using. In this situation, you need to set up a staticports.ini file to specify port numbers for the middle tier. See Section 4.4.2, "Using Custom Port Numbers (the "Static Ports" Feature)" for details.

To see which ports the infrastructure is using, view the ORACLE_HOME/install/portlist.ini file, where ORACLE_HOME refers to the directory where you installed the infrastructure.
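For example, a staticports.ini file for the middle tier might contain entries like the following. The directive names and port values shown here are illustrative only; see Section 4.4.2 for the exact directive names to use:

Oracle HTTP Server port = 7779
Oracle HTTP Server Listen port = 7780
Application Server Control port = 1811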

9.2.4.1.2 Create an Alternative oraInst.loc File

Set up the environment so that the middle tier will have its own Oracle Installer Inventory directory, instead of using the same inventory directory used by the Infrastructure. To do this, you need to rename the oraInst.loc file to something else so that the installer will prompt you to enter a new inventory directory for the middle tier installation. By default the oraInst.loc file is stored in the /etc directory on Linux and the /var/opt/oracle directory on HP-UX. The following example on Linux renames this file to oraInst.loc.infra.

prompt> su
Password: root_password
# cd /etc
# mv oraInst.loc oraInst.loc.infra


Note:

On HP-UX, use cd to change to the /var/opt/oracle directory before performing the mv command.

When the installer prompts for the inventory directory during the middle tier installation, specify a directory on the local storage or on a disk other than the one where you installed the OracleAS Infrastructure 10g.

When the middle tier installation is complete, rename the newly created oraInst.loc file (for example, rename it to oraInst.loc.mt) and restore the oraInst.loc.infra file back to oraInst.loc. Make sure that the correct version of the oraInst.loc file is in place prior to any future Oracle installations on this node. The oraInst.loc file is not used during the Oracle Application Server runtime. The only time you need the file is when you run the installer, for example, to de-install an instance or to expand an instance.

9.3 OracleAS Active Failover Cluster


Note:

In the initial release of Oracle Application Server 10g (9.0.4), OracleAS Active Failover Cluster is a Limited Release feature. Please check OracleMetaLink (http://metalink.oracle.com) for the most current certification status of this feature or consult your sales representative before deploying this feature in a production environment.

You increase the availability of OracleAS Infrastructure 10g by installing and running it in an OracleAS Active Failover Cluster environment (Figure 9-2). In an OracleAS Active Failover Cluster, the OracleAS Metadata Repository runs on a Real Application Clusters database, and the Identity Management components run on the same nodes in the cluster.

To create this environment, you install the OracleAS Infrastructure 10g components—OracleAS Metadata Repository and Identity Management components—in a clustered environment.

To use OracleAS Active Failover Cluster, you need the following items:

  • A hardware cluster with at least two nodes, running a certified clusterware (see Section 9.1, "Requirements for High Availability Environments")

  • Shared storage with raw partitions for the OracleAS Metadata Repository and the SRVM configuration device

  • A load balancer configured with a virtual server name


To Learn More About Real Application Clusters

For complete information about Real Application Clusters, see the Real Application Clusters books in the database documentation library.

You can view these books on the Oracle Technology Network web site (http://otn.oracle.com).


For the Latest News

There are some known issues related to OracleAS Active Failover Cluster. These issues are documented in the Oracle Application Server 10g Release Notes.

Figure 9-2 OracleAS Active Failover Cluster Environment



Components You Need to Install

You need to install OracleAS Infrastructure 10g components on the clustered nodes. This means that you cannot use an existing database, or an existing Oracle Internet Directory. You need to have the installer create a new database and Oracle Internet Directory for you.

On the Select Installation Type screen, you need to select Identity Management and OracleAS Metadata Repository.


Adding Nodes After Installation

You cannot add OracleAS Infrastructure 10g to additional nodes of an OracleAS Active Failover Cluster after the initial installation. You must select all the nodes in the cluster where you want to install OracleAS Infrastructure 10g during the initial installation.


Where the Installer Writes Files

You run the installer on any node in the OracleAS Active Failover Cluster where you want to install OracleAS Infrastructure 10g. The installer detects that the node is part of a cluster, and it displays a screen listing all the nodes in the cluster. On this screen, you select the nodes where you want to install OracleAS Infrastructure 10g. The node where you are running the installer is always selected.

The installer writes files on the local storage devices of the selected nodes and also on the shared storage device, as shown in Table 9-2:

Table 9-2 Where the Installer Writes Files in an OracleAS Active Failover Cluster

File or Directory Location
ORACLE_HOME directory The installer writes the Oracle home directory on the local storage devices of the selected nodes. The installer uses the same path name, specified in the Specify File Locations screen, for all nodes.
oraInventory directory The installer writes the Oracle Installer Inventory directory on the local storage devices of the selected nodes. The installer uses the same path name, specified in the Specify Inventory Directory screen, for all nodes.
Files for OracleAS Metadata Repository The installer writes the database software files for the OracleAS Metadata Repository on all the selected nodes, but for the data files, the installer invokes the Database Configuration Assistant to write the data files on raw partitions located on the shared storage device.

The rest of this section describes these procedures:

  • Section 9.3.1, "Setting Up the OracleAS Active Failover Cluster Environment"

  • Section 9.3.2, "Installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster"

9.3.1 Setting Up the OracleAS Active Failover Cluster Environment

Before you install the OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster environment, perform the following procedures:

9.3.1.1 Set Up staticports.ini File

Each OracleAS Infrastructure 10g component must use the same port number across all nodes in the cluster. To do this, create a staticports.ini file, which enables you to specify port numbers for each component. See Section 4.4.2, "Using Custom Port Numbers (the "Static Ports" Feature)" for details.


Note:

The installer checks the availability of the ports specified in the staticports.ini file on the local node only. It does not check that the ports are free on the remote nodes. You must check yourself that these ports are free on all the nodes.
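For example, to verify that a port (7777 here, an illustrative value) is free on a node, run the following command on that node; if it produces no output, the port is not in use:

$ netstat -an | grep 7777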

9.3.1.2 Set Up a Virtual Server Name for the Load Balancer

You enter the load balancer’s virtual server name, and not the load balancer’s physical hostname, when the installer prompts for the load balancer name. See your load balancer documentation for steps on how to set up a virtual server name.

See the next point, Section 9.3.1.3, "Verify the Load Balancer’s Virtual Server Name Does Not Contain the Names of the Nodes in the Cluster", for guidelines on the virtual server name.

After the virtual server name is set up, check that the name is accessible:

prompt> ping load_balancer_virtual_name

9.3.1.3 Verify the Load Balancer’s Virtual Server Name Does Not Contain the Names of the Nodes in the Cluster

When the installer copies files to different nodes in the cluster, it replaces the current hostname in the files with the hostname of the target node. Ensure that the load balancer’s virtual server name does not contain the names of the nodes in the cluster, or the installer might change the virtual server name of the load balancer as well.

For example, if you are installing on nodes named rac-1 and rac-2, be sure that the load balancer virtual server name does not contain "rac-1" or "rac-2". When the installer is installing files to rac-2, it searches for the string "rac-1" in the files and replaces it with "rac-2". If the load balancer’s virtual server name happens to be LB-rac-1x, the installer sees the string "rac-1" in the name and replaces it with "rac-2", mangling the virtual server name to LB-rac-2x.

9.3.1.4 Configure the Load Balancer to Point to One Node Only

You need to configure the load balancer so that it directs all traffic only to the node where you will be running the installer. After installation, you change the configuration back so that the load balancer directs traffic to all nodes in the cluster.

9.3.1.5 Create Identical Users and Groups on All Nodes in the Cluster


Note:

This procedure is required only if you are using local users and groups. It is not required if you are using users and groups defined in a directory service, such as NIS, because the users and groups are already identical.

Create an operating system user with the same user ID on all nodes in the cluster. This is required for user equivalence to work (see Section 9.3.1.6, "Set Up User Equivalence"). When you run the installer on one node as this user, the installer needs to access the other nodes in the cluster as this user.

If you have already created the oracle user as described in Section 4.6, "Operating System User", determine its user ID so that when you create the oracle user on other nodes, you can specify the same user ID.

To determine the user ID:

prompt> id oracle
uid=3223(oracle) gid=8400(dba) groups=8400(dba),5000(oinstall) 

The number after "uid" specifies the user ID, and the numbers after "groups" specify the group IDs. In this example, the oracle user must have ID 3223 on all nodes, and the dba and oinstall groups must have IDs 8400 and 5000 on all nodes.

See Section 4.6, "Operating System User" and Section 4.5, "Operating System Groups" for steps on how to create users and groups.
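For example, the following commands (run as root) create groups and an oracle user on another node that match the id output shown above. The IDs are illustrative, and the syntax is similar on Linux and HP-UX:

# groupadd -g 5000 oinstall
# groupadd -g 8400 dba
# useradd -u 3223 -g dba -G oinstall oracle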

9.3.1.6 Set Up User Equivalence

The installer needs user equivalence to be set up for all the nodes in the cluster. You can set up Secure Shell (ssh and scp) or Remote Shell (/usr/bin/rsh on Linux, /usr/bin/remsh on HP-UX, and /usr/bin/rcp on both Linux and HP-UX) for user equivalence. Make sure that this procedure is compatible with your security policy.

To determine which user equivalence type to use, the installer checks if Secure Shell is set up. If so, it uses it. Otherwise, it uses Remote Shell.

9.3.1.6.1 To Set Up User Equivalence for Remote Shell

Perform the following steps:

  1. On the node where you plan to run the installer, in the following files:

    • .rhosts file in the home directory of the oracle user

    • .rhosts file in the home directory of the root user (that is, /.rhosts)

    enter a line for each node name in the cluster. Be sure to include the name of the local node itself.

    For example, if the cluster has three nodes named node1, node2, and node3, you would populate the .rhosts files with the following lines:

    node1
    node2
    node3
    

    Tip:

    Instead of writing these lines in the .rhosts files for the oracle user and for the root user, you can enter the same lines in the /etc/hosts.equiv file.

  2. Check that the user equivalence is working:

    1. Log in as the oracle user on the node where you plan to run the installer.

    2. As the oracle user, perform a remote login to each node in the cluster:

      prompt> rlogin node2
      
      

      If the command prompts you to enter a password, then the oracle user does not have identical attributes on all nodes. You need to correct this to enable the installer to copy files to the remote nodes.


Tip:

If user equivalence is not working, try modifying the .rhosts or the /etc/hosts.equiv files in the following ways to get it to work:
  • Specify the fully qualified hostname in the files:

    node1.mydomain.com
    node2.mydomain.com
    node3.mydomain.com
    
    
  • Specify the username after the hostname. Separate the hostname from the username with a space character:

    node1.mydomain.com oracle
    node2.mydomain.com oracle
    node3.mydomain.com oracle
    
    

    For the root user’s .rhosts file, replace "oracle" with "root".

  • You can include all these variations in the files:

    node1 oracle
    node1.mydomain.com oracle
    node2 oracle
    node2.mydomain.com oracle
    node3 oracle
    node3.mydomain.com oracle
    
    

    For the root user’s .rhosts file, replace "oracle" with "root".


9.3.1.6.2 To Check if Secure Shell Is Configured

If you are using Secure Shell for host equivalency between the nodes of a cluster, make sure that the ssh and scp commands do not prompt for any user response, such as prompting for the password or a Yes/No response, during execution. Also, ensure that no error or warning messages are sent to stderr during execution. After setting up Secure Shell, you can run these commands to check:

  • To check ssh, run these commands on each node in the cluster where ssh_path is /usr/bin on Linux and /usr/local/bin on HP-UX:

    prompt> ssh_path/ssh local_hostname ls /tmp
    prompt> ssh_path/ssh remote_hostname ls /tmp
    
    

    In the example, the ssh command runs the "ls /tmp" command on the local node and remote node. Replace local_hostname and remote_hostname with the hostnames of the local and remote nodes, respectively.

  • To check scp, run these commands on each node in the cluster where scp_path is /usr/bin on Linux and /usr/local/bin on HP-UX:

    prompt> touch /tmp/tempfile
    prompt> scp_path/scp /tmp/tempfile local_hostname:/tmp/tempfile2
    prompt> scp_path/scp /tmp/tempfile remote_hostname:/tmp/tempfile2 
    
    

    In the example, the touch command creates a file in the /tmp directory, and the scp commands copy the file to another file on both the local and remote nodes. Replace local_hostname and remote_hostname with the hostnames of the local and remote nodes, respectively.

If the commands prompt for a user response or if the commands cause an error or warning message to be sent to stderr during installation, it means that the Secure Shell is not set up properly, and the installer resorts to using the equivalent rsh and rcp commands. You then need to perform the steps in Section 9.3.1.6.1, "To Set Up User Equivalence for Remote Shell" for the installer to succeed.

9.3.1.7 Configure Raw Partitions for Server Management (SRVM)

This step is required if this is the first installation of an Oracle database on the cluster. SRVM is a component of Real Application Clusters.

The raw partition for SRVM must have these properties:

  • It must be accessible from all nodes in the cluster.

  • Its size must be at least 100 MB.

The command to create raw partitions is specific to the volume manager you are using. For example, if you are using VERITAS Volume Manager, the command is vxassist.
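For example, the following command (run as root) creates a 100 MB volume named srvcfg in a hypothetical VERITAS Volume Manager disk group named ias_dg; the corresponding raw device is then /dev/vx/rdsk/ias_dg/srvcfg:

# vxassist -g ias_dg make srvcfg 100m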

9.3.1.8 (optional) Set the SRVM_SHARED_CONFIG Environment Variable

If OracleAS Infrastructure 10g is the first Oracle product to be installed on the cluster, set the SRVM_SHARED_CONFIG environment variable to the name of the raw partition that you created for the SRVM shared configuration device.

Example (C shell)

% setenv SRVM_SHARED_CONFIG /dev/vx/rdsk/ias_dg/srvcfg

Example (Bourne or Korn shell):

$ SRVM_SHARED_CONFIG=/dev/vx/rdsk/ias_dg/srvcfg; export SRVM_SHARED_CONFIG

If you do not set this environment variable, the installer displays the Shared Configuration File Name screen, where you enter the path for the SRVM configuration device.

9.3.1.9 Configure Raw Partitions for the OracleAS Metadata Repository

In addition to the raw partition for SRVM (see Section 9.3.1.7, "Configure Raw Partitions for Server Management (SRVM)"), you need to configure raw partitions on the shared storage device for the OracleAS Metadata Repository database.

Table 9-3 lists the required tablespaces and system objects, their minimum sizes, and the recommended name for the raw partition:

Table 9-3 Raw Partitions for the OracleAS Metadata Repository

Raw Partition for                        Minimum Size   Recommended Name
SYSTEM tablespace                        1024 MB        dbname_raw_system_1024m
Server parameter file                    64 MB          dbname_raw_spfile_64m
USERS tablespace                         256 MB         dbname_raw_users_256m
TEMP tablespace                          128 MB         dbname_raw_temp_128m
UNDOTBS1 tablespace                      256 MB         dbname_raw_undotbs1_256m
UNDOTBS2 tablespace                      256 MB         dbname_raw_undotbs2_256m
DRSYS tablespace                         64 MB          dbname_raw_drsys_64m
Three control files                      64 MB each     dbname_raw_controlfile1_64m
                                                        dbname_raw_controlfile2_64m
                                                        dbname_raw_controlfile3_64m
Three redo log files for each instance   64 MB each     dbname_raw_thread_lognumber_64m
                                                        (thread is the thread ID of the
                                                        instance; number is the log number,
                                                        1, 2, or 3, of the instance)
PORTAL tablespace                        128 MB         dbname_raw_portal_128m
PORTAL_DOC tablespace                    64 MB          dbname_raw_portaldoc_64m
PORTAL_IDX tablespace                    64 MB          dbname_raw_portalidx_64m
PORTAL_LOG tablespace                    64 MB          dbname_raw_portallog_64m
DCM tablespace                           256 MB         dbname_raw_dcm_256m
OCATS tablespace                         64 MB          dbname_raw_ocats_64m
DISCO_PTM5_CACHE tablespace              64 MB          dbname_raw_discoptm5cache_64m
DISCO_PTM5_META tablespace               64 MB          dbname_raw_discoptm5meta_64m
DSGATEWAY_TAB tablespace                 64 MB          dbname_raw_dsgatewaytab_64m
WCRSYS_TS tablespace                     64 MB          dbname_raw_wcrsysts_64m
UDDISYS_TS tablespace                    64 MB          dbname_raw_uddisysts_64m
OLTS_ATTRSTORE tablespace                128 MB         dbname_raw_oltsattrstore_128m
OLTS_BATTRSTORE tablespace               64 MB          dbname_raw_oltsbattrstore_64m
OLTS_CT_STORE tablespace                 256 MB         dbname_raw_oltsctstore_256m
OLTS_DEFAULT tablespace                  128 MB         dbname_raw_oltsdefault_128m
OLTS_SVRMGSTORE tablespace               64 MB          dbname_raw_oltssvrmgstore_64m
IP_DT tablespace                         128 MB         dbname_raw_ipdt_128m
IP_RT tablespace                         128 MB         dbname_raw_iprt_128m
IP_LOB tablespace                        128 MB         dbname_raw_iplob_128m
IP_IDX tablespace                        128 MB         dbname_raw_ipidx_128m
IAS_META tablespace                      256 MB         dbname_raw_iasmeta1_256m

9.3.1.10 Create a Text File Listing the Raw Partitions

Create a text file listing the database object and raw partition name in name-value pair format. Place the text file on the node where you plan to run the installer.

The following example shows the contents of the text file for a two-instance OracleAS Metadata Repository. If you have more than two instances, add more lines for "undotbs" and the redo log files.

system1=/dev/vx/rdsk/ias_dg/infra_raw_system_1024m
spfile1=/dev/vx/rdsk/ias_dg/infra_raw_spfile_64m
users1=/dev/vx/rdsk/ias_dg/infra_raw_users_256m 
temp1=/dev/vx/rdsk/ias_dg/infra_raw_temp_128m
undotbs1=/dev/vx/rdsk/ias_dg/infra_raw_undotbs1_256m
undotbs2=/dev/vx/rdsk/ias_dg/infra_raw_undotbs2_256m
..... Create additional lines for "undotbsN" if you have more than 2 instances.
drsys1=/dev/vx/rdsk/ias_dg/infra_raw_drsys_64m
control1=/dev/vx/rdsk/ias_dg/infra_raw_controlfile1_64m
control2=/dev/vx/rdsk/ias_dg/infra_raw_controlfile2_64m
control3=/dev/vx/rdsk/ias_dg/infra_raw_controlfile3_64m
redo1_1=/dev/vx/rdsk/ias_dg/infra_raw_1_log1_64m
redo1_2=/dev/vx/rdsk/ias_dg/infra_raw_1_log2_64m 
redo1_3=/dev/vx/rdsk/ias_dg/infra_raw_1_log3_64m
redo2_1=/dev/vx/rdsk/ias_dg/infra_raw_2_log1_64m
redo2_2=/dev/vx/rdsk/ias_dg/infra_raw_2_log2_64m 
redo2_3=/dev/vx/rdsk/ias_dg/infra_raw_2_log3_64m
..... Create additional lines for "redoN" log files if you have more
..... than 2 instances.
portal1=/dev/vx/rdsk/ias_dg/infra_raw_portal_128m
portal_doc1=/dev/vx/rdsk/ias_dg/infra_raw_portaldoc_64m
portal_idx1=/dev/vx/rdsk/ias_dg/infra_raw_portalidx_64m
portal_log1=/dev/vx/rdsk/ias_dg/infra_raw_portallog_64m
dcm1=/dev/vx/rdsk/ias_dg/infra_raw_dcm_256m
ocats1=/dev/vx/rdsk/ias_dg/infra_raw_ocats_64m
disco_ptm5_cache1=/dev/vx/rdsk/ias_dg/infra_raw_discoptm5cache_64m
disco_ptm5_meta1=/dev/vx/rdsk/ias_dg/infra_raw_discoptm5meta_64m
dsgateway_tab1=/dev/vx/rdsk/ias_dg/infra_raw_dsgatewaytab_64m
wcrsys_ts1=/dev/vx/rdsk/ias_dg/infra_raw_wcrsysts_64m
uddisys_ts1=/dev/vx/rdsk/ias_dg/infra_raw_uddisysts_64m
olts_attrstore1=/dev/vx/rdsk/ias_dg/infra_raw_oltsattrstore_128m
olts_battrstore1=/dev/vx/rdsk/ias_dg/infra_raw_oltsbattrstore_64m
olts_ct_store1=/dev/vx/rdsk/ias_dg/infra_raw_oltsctstore_256m
olts_default1=/dev/vx/rdsk/ias_dg/infra_raw_oltsdefault_128m
olts_svrmgstore1=/dev/vx/rdsk/ias_dg/infra_raw_oltssvrmgstore_64m
ip_dt1=/dev/vx/rdsk/ias_dg/infra_raw_ipdt_128m
ip_rt1=/dev/vx/rdsk/ias_dg/infra_raw_iprt_128m
ip_lob1=/dev/vx/rdsk/ias_dg/infra_raw_iplob_128m
ip_idx1=/dev/vx/rdsk/ias_dg/infra_raw_ipidx_128m
ias_meta1=/dev/vx/rdsk/ias_dg/infra_raw_iasmeta1_256m

9.3.1.11 Set the DBCA_RAW_CONFIG Environment Variable

Set the DBCA_RAW_CONFIG environment variable to point to the text file. For example, if you created the file as /opt/oracle/rawdevices.txt, you can set the variable using one of these commands:

Example (C shell):

% setenv DBCA_RAW_CONFIG /opt/oracle/rawdevices.txt

Example (Bourne or Korn shell):

$ DBCA_RAW_CONFIG=/opt/oracle/rawdevices.txt; export DBCA_RAW_CONFIG

9.3.1.12 Set the Shell Limit for the Number of Open File Descriptors (Linux only)

Setting the parameter for the number of open file descriptors for an Oracle Application Server installation on Linux is described in Section 4.3.2, "Configuring the Kernel Parameters on Linux". However, installing the OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster environment requires a higher value. Oracle recommends setting this value to 32K (32768) or higher in the shell of the user who will perform the installation. For example, enter the following commands:

$ ulimit -n
1024
$ ulimit -n 32768 
$ ulimit -n
32768

The default and maximum values of this parameter for all user shells on the system are set in the /etc/security/limits.conf file. If the maximum allowable value is less than 32K, the root user must modify this file. The installation user must then log out and log in again for the change to take effect.

After completing the OracleAS Infrastructure 10g installation, you can change this parameter back to its original value.
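For example, the root user could add lines like the following to /etc/security/limits.conf to raise the soft and hard open-file limits for the oracle user (the values shown are illustrative):

oracle    soft    nofile    32768
oracle    hard    nofile    65536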

9.3.2 Installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster

In an OracleAS Active Failover Cluster, you install the OracleAS Metadata Repository and the Identity Management components in one installation session by selecting the "Identity Management and OracleAS Metadata Repository" option in the Select Installation Type screen. This option creates a new database for the OracleAS Metadata Repository and a new Oracle Internet Directory.


Note:

In an OracleAS Active Failover Cluster, you must install a new OracleAS Metadata Repository and Oracle Internet Directory. You cannot use an existing database or Oracle Internet Directory.

Follow this procedure to install OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster:

Table 9-4 Steps for Installing OracleAS Infrastructure 10g in an OracleAS Active Failover Cluster


Screen Action
1. -- Start up the installer. See Section 5.15, "Starting the Oracle Universal Installer" for details.
2. Welcome Click Next.
3. Specify Inventory Directory This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the full path for the inventory directory: Enter a full path to a directory where you want the installer to store its files. The installer uses these files to keep track of all Oracle products that are installed on this computer. Enter a directory that is different from the Oracle home directory.

Example: /mnt/app/oracle/oraInventory

Click OK.

4. UNIX Group Name This screen appears only if this is the first installation of any Oracle product on this computer.

Enter the name of the operating system group to have permission to update Oracle software installations on this system.

Example: oinstall

Click Next.

5. Run orainstRoot.sh This screen appears only if this is the first installation of any Oracle product on this computer.

Run the orainstRoot.sh script in a different shell as the root user. The script is located in the installer inventory directory specified in the Specify Inventory Directory screen.

Run the script on the node where you are running the installer. The installer will prompt you to run the script on other nodes later, in step 8.

Click Continue after you have run the script.

6. Specify File Locations Destination Name: Enter a name to identify this Oracle home.

Example: oracleas

Destination Path: Enter the full path to the destination directory. This is the Oracle home. The installer will use this path as the Oracle home for all nodes.

Example: /mnt/app/oracle/OraInfra_904

Note: If you are using Oracle Cluster Management Software on Linux, you must specify the name and the Oracle home of the Oracle Cluster Management Software installation.

If you are not using Oracle Cluster Management Software, you must enter a new Oracle home name and directory. Do not select an existing Oracle home from the drop down list. If you select an existing Oracle home, the installer will not display the next screen, Specify Hardware Cluster Installation Mode, which is a critical screen.

Click Next.

7. Specify Hardware Cluster Installation Mode Note: This is a critical screen when installing the infrastructure in an OracleAS Active Failover Cluster environment. If you do not see this screen, exit the installer and check that your clusterware is installed and running (see Section 9.1.2, "Check That Clusterware Is Running").

Select Active Failover Cluster Installation, and select the nodes where you want to install OracleAS Infrastructure 10g. You need to install OracleAS Infrastructure 10g on at least two nodes.

Note: On Linux, this screen is titled Selected Nodes and lists the cluster nodes without any option to select an Active Failover Cluster Installation or to add or remove nodes of the cluster. This is expected behavior due to the detection of the Oracle Cluster Management Software. Reaching this screen confirms that an Active Failover Cluster Installation has been chosen for all nodes of the Oracle Cluster Management Software cluster.

Click Next.

8. Run orainstRoot.sh Run the orainstRoot.sh script as the root user on the selected nodes. The script is located in the Oracle Installer Inventory directory, which is specified in the Specify Inventory Directory screen, on the selected nodes.

Click Continue after you have run the script on all the selected nodes.

9. Select a Product to Install Select OracleAS Infrastructure 10g to install an infrastructure.

If you need to install additional languages, click Product Languages. See Section 5.6, "Installing Additional Languages" for details.

Click Next.

10. Select Installation Type Select Identity Management and OracleAS Metadata Repository. Click Next.
11. Preview of Steps for Infrastructure Installation This screen lists the screens that the installer will display. Click Next.
12. Confirm Pre-Installation Requirements Verify that you meet all the listed requirements. Click Next.
13. Select Configuration Options Select all the components except for OracleAS Certificate Authority.

Check that High Availability Addressing is selected. It should be greyed out and selected.

Click Next.

14. Specify Namespace in Internet Directory Select the suggested namespace, or enter a custom namespace for the location of the default Identity Management realm.

Ensure the value shown in Suggested Namespace is valid and meets your deployment needs. If not, enter the desired value in Custom Namespace. See Section 6.15, "What Do I Enter in the "Specify Namespace in Internet Directory" Screen?".

Click Next.

15. Specify High Availability Addressing Note: This is a critical screen when installing the infrastructure in an OracleAS Active Failover Cluster. If you do not see this screen, return to the Select Configuration Options screen and ensure that you selected High Availability Addressing.

Enter the fully qualified virtual server name of the load balancer. (Do not enter the physical hostname for the load balancer.) Click Next.

16. Shared Configuration File Name This screen appears if you did not set the SRVM_SHARED_CONFIG environment variable. See Section 9.3.1.8, "(optional) Set the SRVM_SHARED_CONFIG Environment Variable".

Shared Configuration File Name: Enter the path of the raw partition that you created for the SRVM shared configuration device:

Example: /dev/vx/rdsk/rac/srvm256m

Click Next.
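
Before clicking Next, you may want to verify that the raw device exists and is accessible. A minimal check, using the example path above:

$ ls -lL /dev/vx/rdsk/rac/srvm256m

The output should show a character special file that the user performing the installation can read and write.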

17. Database Identification Global Database Name: Enter a name for the OracleAS Metadata Repository database. Append the domain name of your computer to the database name.

Example: asdb.mydomain.com

SID Prefix: Enter the system identifier for the OracleAS Metadata Repository database. Typically this is the same as the global database name, but without the domain name. The SID cannot be longer than eight characters.

Example: asdb

Click Next.

18. Set SYS and SYSTEM Passwords Set the passwords for these database users. Click Next.
19. Database Character Set Select Use the default character set. Click Next.
20. Specify Instance Name and ias_admin Password Instance Name: Enter a name for this infrastructure instance. Instance names can contain the $ (dollar) and _ (underscore) characters in addition to alphanumeric characters. If you have more than one Oracle Application Server instance on a computer, the instance names must be unique.

Example: infra_904

ias_admin Password and Confirm Password: Enter and confirm the password for the ias_admin user. This is the administrative user for this infrastructure instance.

See Section 5.8, "The ias_admin User and Restrictions on its Password" for password requirements.

Example: welcome99

Click Next.

21. Choose JDK Home Directory (HP-UX only) Enter JDK Home: Enter the full path to the HP Java 2 SDK 1.4.1.05 (or higher) for PA-RISC installation.

Click Next.
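
To verify the JDK version before entering the path, you can run the java binary under that directory. For example, assuming the SDK is installed under /opt/java1.4 (a typical HP-UX location; your path may differ):

$ /opt/java1.4/bin/java -version

The reported version should be 1.4.1.05 or higher.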

22. Summary Verify your selections. Pay attention to any items listed in red: these indicate issues that will cause the installation to fail. In particular, expand all items under Space Requirements to confirm that sufficient disk space is available for the installation.

Click Install.

23. Install Progress This screen shows the progress of the installation.
24. Run root.sh Note: Do not run the root.sh script until prompted.

When prompted, run the root.sh script in a different shell as the root user. The script is located in this instance’s Oracle home directory.

Note: You have to run this script on each node where you are installing OracleAS Infrastructure 10g.

Click OK after you have run the script on all nodes.
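
For example, on each node, using the Oracle home path from the earlier example:

$ su - root
# /mnt/app/oracle/OraInfra_904/root.sh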

25. Configuration Assistants This screen shows the progress of the configuration assistants. Configuration assistants configure components.
26. End of Installation Click Finish to quit the installer.

9.3.3 Post-Installation Procedure

Before you started the installer, you configured the load balancer so that it directed traffic to the node running the installer only. You can now reconfigure the load balancer so that it directs traffic to all nodes in the cluster.

9.3.4 Installing Middle Tiers Against an OracleAS Active Failover Cluster Infrastructure

Required pre-installation step: Configure the load balancer so that it points to only one node in the OracleAS Active Failover Cluster. The node can be any node in the cluster. After you have installed the middle tiers, you can change the load balancer back so that it points to all nodes in the cluster.

Installation: To install Oracle Application Server middle tiers against an OracleAS Infrastructure 10g running in an OracleAS Active Failover Cluster, follow the procedures as documented in Chapter 7, "Installing Middle Tiers", but with this difference:

  • In the Register with Oracle Internet Directory screen, enter the load balancer’s virtual server name (not the physical hostname of the load balancer) in the Hostname field. This is the same name that you specified in the Specify High Availability Addressing screen in the OracleAS Infrastructure 10g installation.

9.4 OracleAS Disaster Recovery

Use the OracleAS Disaster Recovery environment when you want to have two physically separate sites in your environment. One site is the production site, and the other site is the standby site. The production site is active, while the standby site is passive; the standby site becomes active when the production site goes down.

Generally, the standby site mirrors the production site: each node in the standby site corresponds to a node in the production site. This includes the nodes running both OracleAS Infrastructure 10g and middle tiers. As a small variation to this environment, you can set up the OracleAS Infrastructure 10g on the production site in an OracleAS Cold Failover Cluster environment. See Section 9.4.1.4, "If You Want to Use OracleAS Cold Failover Cluster on the Production Site" for details.

Figure 9-3 shows an example OracleAS Disaster Recovery environment. Each site has two nodes running middle tiers and a node running OracleAS Infrastructure 10g.


Data Synchronization

For OracleAS Disaster Recovery to work, data between the production and standby sites must be synchronized so that failover can happen very quickly. Configuration changes done at the production site must be synchronized with the standby site.

There are two types of data, and the synchronization method depends on the type:

  • Data in the OracleAS Metadata Repository database, which you synchronize using Oracle Data Guard.

  • Configuration files in the Oracle home directories, which you synchronize using backup and recovery scripts.

See the Oracle Application Server 10g High Availability Guide for details on how to use Oracle Data Guard and the backup and recovery scripts.

Figure 9-3 OracleAS Disaster Recovery Environment


This section contains the following subsections:

  • Section 9.4.1, "Setting Up the OracleAS Disaster Recovery Environment"

  • Section 9.4.2, "Installing Oracle Application Server in an OracleAS Disaster Recovery Environment"

  • Section 9.4.3, "What to Read Next"

9.4.1 Setting Up the OracleAS Disaster Recovery Environment

Before you can install Oracle Application Server on the nodes in an OracleAS Disaster Recovery environment, you have to perform the steps described in the following subsections:

9.4.1.1 Ensure Nodes Are Identical at the Operating System Level

Ensure that the nodes are identical with respect to the following items (a quick way to compare nodes is sketched after this list):

  • The nodes are running the same version of the operating system.

  • The nodes have the same operating system patches and packages.

  • You can install Oracle Application Server in the same directory path on all nodes.
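
One way to compare nodes is to capture the same information on each node and diff the results. A minimal sketch for Linux (on HP-UX, use uname -r for the operating system release and swlist for installed products and patches):

$ uname -r
$ rpm -qa | sort > /tmp/packages_$(hostname).txt

Copy the package lists to one node and compare them with diff to find any discrepancies.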

9.4.1.2 Set Up staticports.ini File

The same component must use the same port number on the production and standby sites. For example, if Oracle HTTP Server is using port 80 on the production site, it must also use port 80 on the standby site. To ensure this is the case, create a staticports.ini file for use during installation. This file enables you to specify port numbers for each component. See Section 4.4.2, "Using Custom Port Numbers (the "Static Ports" Feature)" for details.
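
For illustration only, a fragment of a staticports.ini file might look like the following. The option names must match those listed in Table 4-9 for your installation type, and the port values here are assumptions:

Oracle HTTP Server port = 80
Oracle HTTP Server Listen port = 7777

Use an identical copy of the file for the corresponding installations on the production and standby sites.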

9.4.1.3 Set Up Identical Hostnames on Both Production and Standby Sites

The names of the corresponding nodes on the production and standby sites must be identical, so that when you synchronize data between the sites, you do not have to edit the data to fix the hostnames.

9.4.1.3.1 For the Infrastructure Node

For the node running the infrastructure, set up a virtual name. To do this, specify an alias for the node in the /etc/hosts file.

For example, on the infrastructure node on the production site, the following line in /etc/hosts sets the alias to iasinfra:

138.1.2.111   prodinfra   iasinfra

On the standby site, the following line sets the node’s alias to iasinfra:

213.2.2.110   standbyinfra   iasinfra

When you install OracleAS Infrastructure 10g on the production and standby sites, you specify this alias (iasinfra) in the Specify High Availability Addressing screen. The configuration data will then contain this alias for the infrastructure nodes.

9.4.1.3.2 For the Middle Tier Nodes

For the nodes running the middle tiers, you cannot set up aliases as you did for the infrastructure nodes, because the installer does not display the Specify High Availability Addressing screen for middle tier installations. Instead, the installer determines the hostname automatically by calling the gethostname() function. Ensure that, for each middle tier node on the production site, the corresponding node on the standby site returns the same hostname.

To do this, set up a local, or internal, hostname that can differ from the public, or external, hostname. You can either change the names of the nodes on the standby site to match the names of the corresponding nodes on the production site, or change the names of the nodes on both sites to a common name. Which approach to use depends on the other applications running on the nodes, and on whether changing a node's name will affect those applications.

  1. Change the local hostname to the hostname of the corresponding node on the production site. After the change, the hostname command should return the new local hostname. (An illustration for Linux follows the note below.)


    Note:

    The procedure to change the hostname of a system differs between different operating systems. Contact the system administrator of your system to perform this step. Note also that changing the hostname of a system will affect installed software that has a dependency on the previous hostname. Consider the impact of this before changing the hostname.
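
    As one illustration only (Red Hat Linux; other operating systems differ, as the note above explains), you could set the local hostname of the first middle tier node as follows:

    # hostname iasmid1

    To make the change persist across reboots on Red Hat Linux, also set HOSTNAME=iasmid1 in the /etc/sysconfig/network file.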

  2. Enable the other nodes in the OracleAS Disaster Recovery environment to resolve the node by its new local hostname. You can do this in one of two ways:

    • Method 1: Set up separate internal DNS servers for the production and standby sites. This configuration allows nodes on each site (production or standby) to resolve hostnames within the site. Above the internal DNS servers are the corporate, or external, DNS servers. The internal DNS servers forward non-authoritative requests to the external DNS servers. The external DNS servers do not know about the existence of the internal DNS servers. See Figure 9-4.

      To use this method, go to step 3.

      Figure 9-4 Method 1: Using DNS Servers


    • Method 2: Edit the /etc/hosts file on each node on both sites. This method does not involve configuring DNS servers, but you have to maintain the /etc/hosts file on each node in the OracleAS Disaster Recovery environment. For example, if an IP address changes, you have to update the files on all the nodes, and reboot the nodes.

      To use this method, go to step 4.

  3. If you are using the separate internal DNS server method (method 1), set up your DNS files as follows:

    1. Make sure the external DNS names are defined in the external DNS zone. Example:

      prodmid1.us.oracle.com     IN  A  138.1.2.333
      prodmid2.us.oracle.com     IN  A  138.1.2.444
      prodinf.us.oracle.com      IN  A  138.1.2.111
      standbymid1.us.oracle.com  IN  A  213.2.2.330
      standbymid2.us.oracle.com  IN  A  213.2.2.331
      standbyinf.us.oracle.com   IN  A  213.2.2.110
      
      
    2. At the production site, create a new zone using a domain name different from your external domain name. To do this, populate the zone data files with entries for each node in the OracleAS Disaster Recovery environment.

      For the infrastructure node, use the virtual name or alias.

      For the middle tier nodes, use the local hostname set up in step 1.

      The following example uses "iasha" as the domain name for the new zone.

      iasmid1.iasha    IN  A  138.1.2.333
      iasmid2.iasha    IN  A  138.1.2.444
      iasinfra.iasha   IN  A  138.1.2.111
      
      

      Do the same for the standby site. Use the same domain name that you used for the production site.

      iasmid1.iasha    IN  A  213.2.2.330
      iasmid2.iasha    IN  A  213.2.2.331
      iasinfra.iasha   IN  A  213.2.2.110
      
      
    3. Configure the DNS resolver to point to the internal DNS servers instead of the external DNS server.

      In the /etc/resolv.conf file for each node on the production site, replace the existing name server IP address with the IP address of the internal DNS server for the production site.

      Do the same for the nodes on the standby site, but use the IP address of the internal DNS server for the standby site.
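
      As a minimal sketch, assuming 138.1.2.200 is the address of the production site's internal DNS server (an address invented for this example), the /etc/resolv.conf file on a production-site node would contain:

      search iasha
      nameserver 138.1.2.200

      The search entry lets short names such as iasinfra resolve within the internal zone.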

    4. Create a separate entry for Oracle Data Guard in the internal DNS servers. This entry is used by Oracle Data Guard to ship redo data to the database on the standby site.

      In the example below, the "remote_infra" entry points to the infrastructure node on the standby site. This name is used by the TNS entries on both the production and standby sites so that if a switchover occurs, the entry does not have to be changed.

      Figure 9-5 Entry for Oracle Data Guard in the Internal DNS Servers


      On the production site, the DNS entries look like this:

      iasmid1.iasha       IN  A  138.1.2.333
      iasmid2.iasha       IN  A  138.1.2.444
      iasinfra.iasha      IN  A  138.1.2.111
      remote_infra.iasha  IN  A  213.2.2.110
      
      

      On the standby site, the DNS entries look like this:

      iasmid1.iasha       IN  A  213.2.2.330
      iasmid2.iasha       IN  A  213.2.2.331
      iasinfra.iasha      IN  A  213.2.2.110
      remote_infra.iasha  IN  A  138.1.2.111
      
      
  4. If you are using the /etc/hosts method for name resolution (method 2), perform these steps:

    1. On each node on the production site, include these lines in the /etc/hosts file. The IP addresses resolve to nodes on the production site.


      Note:

In the /etc/hosts file, be sure that the line that identifies the current node comes immediately after the loopback definition line (the line with the 127.0.0.1 address).

      127.0.0.1    localhost
      138.1.2.333  iasmid1.mydomain.com   iasmid1
      138.1.2.444  iasmid2.mydomain.com   iasmid2
      138.1.2.111  iasinfra.mydomain.com  iasinfra
      
      
    2. On each node on the standby site, include these lines in the /etc/hosts file. The IP addresses resolve to nodes on the standby site.


      Note:

      In the /etc/hosts file, be sure that the line that identifies the current node comes immediately after the loopback definition line (the line with the 127.0.0.1 address).

      127.0.0.1    localhost
      213.2.2.330  iasmid1.mydomain.com   iasmid1
      213.2.2.331  iasmid2.mydomain.com   iasmid2
      213.2.2.110  iasinfra.mydomain.com  iasinfra
      
      
    3. Ensure that the "hosts:" line in the /etc/nsswitch.conf file has "files" as the first item:

      hosts:   files nis dns
      
      

      The entry specifies the order in which name resolution methods are tried. If another method is listed before files, the node uses that method first to resolve hostnames, and your /etc/hosts entries may be ignored.


    Note:

    Reboot the nodes after editing these files.

After making the changes and rebooting the nodes, check that the hostnames are working properly by running the following commands:

  • On the middle tier nodes on both sites, run the hostname command. This should return the internal hostname. For example, the command should return "iasmid1" if you run it on prodmid1 and standbymid1.

    prompt> hostname
    iasmid1
    
    
  • On each node, ping the other nodes in the environment using the internal hostname as well as the external hostname. The command should be successful. For example, from the first midtier node, prodmid1, you can run the following commands:

    prompt> ping prodinfra       ping the production infrastructure node
    PING prodinfra: 56 data bytes
    64 bytes from prodinfra.mydomain.com (138.1.2.111): icmp_seq=0. time=0. ms
    ^C
    
    prompt> ping iasinfra        ping the production infrastructure node
    PING iasinfra: 56 data bytes
    64 bytes from iasinfra.mydomain.com (138.1.2.111): icmp_seq=0. time=0. ms
    ^C
    
    prompt> ping iasmid2         ping the second production midtier node
    PING iasmid2: 56 data bytes
    64 bytes from iasmid2.mydomain.com (138.1.2.444): icmp_seq=0. time=0. ms
    ^C
    
    prompt> ping prodmid2        ping the second production midtier node
    PING prodmid2: 56 data bytes
    64 bytes from prodmid2.mydomain.com (138.1.2.444): icmp_seq=0. time=0. ms
    ^C
    
    prompt> ping standbymid1       ping the first standby midtier node
    PING standbymid1: 56 data bytes
    64 bytes from standbymid1.mydomain.com (213.2.2.330): icmp_seq=0. time=0. ms
    ^C
    
    

9.4.1.4 If You Want to Use OracleAS Cold Failover Cluster on the Production Site

On the production site of an OracleAS Disaster Recovery system, you can set up the OracleAS Infrastructure 10g to run in an OracleAS Cold Failover Cluster configuration. In this case, you have two nodes in a hardware cluster, and you install the OracleAS Infrastructure 10g on a shared disk. See Section 9.2, "OracleAS Cold Failover Cluster" for details.

Figure 9-6 Infrastructure in an OracleAS Cold Failover Cluster Configuration


To set up OracleAS Cold Failover Cluster in this environment, use the virtual IP address (instead of the physical IP address) for iasinfra.iasha on the production site. The following example assumes 138.1.2.120 is the virtual IP address.

iasmid1.iasha          IN  A  138.1.2.333
iasmid2.iasha          IN  A  138.1.2.444
iasinfra.iasha         IN  A  138.1.2.120         ; this is a virtual IP address
remote_infra.iasha     IN  A  213.2.2.110

On the standby site, you still use the physical IP address for iasinfra.iasha, but the remote_infra.iasha uses the virtual IP address.

iasmid1.iasha          IN  A  213.2.2.330
iasmid2.iasha          IN  A  213.2.2.331
iasinfra.iasha         IN  A  213.2.2.110         ; physical IP address
remote_infra.iasha     IN  A  138.1.2.120         ; virtual IP address

9.4.2 Installing Oracle Application Server in an OracleAS Disaster Recovery Environment

Install Oracle Application Server as follows:


Note:

For all of the installations, be sure to use staticports.ini to specify port numbers for the components. See Section 9.4.1.2, "Set Up staticports.ini File". In addition, be sure to specify the correct option name for each installation type (see Table 4-9).

  1. Install OracleAS Infrastructure 10g on the production site.

  2. Install OracleAS Infrastructure 10g on the standby site.

  3. Install the middle tiers on the production site.

  4. Install the middle tiers on the standby site.

9.4.2.1 Installing the OracleAS Infrastructure 10g

As with OracleAS Cold Failover Cluster and OracleAS Active Failover Cluster, you must install the Identity Management and the OracleAS Metadata Repository components of OracleAS Infrastructure 10g on the same node. You cannot distribute the components over multiple nodes.

The installation steps are similar to those for OracleAS Cold Failover Cluster. See Section 9.2.2, "Installing OracleAS Infrastructure 10g in an OracleAS Cold Failover Cluster" for the screen sequence. Note the following points:

  • It is OK if the Specify Hardware Cluster Installation Mode screen does not appear. See Table 9-1, step 7.

  • Be sure you select High Availability Addressing in the Select Configuration Options screen. See Table 9-1, step 12.

  • In the Specify High Availability Addressing screen, enter an alias as the virtual address (for example, iasinfra.mydomain.com). See Table 9-1, step 14.

9.4.2.2 Installing Middle Tiers

You can install any type of middle tier that you like:

For installing J2EE and Web Cache, see Section 7.9, "Installing J2EE and Web Cache with OracleAS Database-Based Cluster and Identity Management Access".

For installing Portal and Wireless or Business Intelligence and Forms, see Section 7.13, "Installing Portal and Wireless or Business Intelligence and Forms".

Note the following points:

  • When the installer prompts you to register with Oracle Internet Directory, and asks you for the Oracle Internet Directory hostname, enter the alias of the node running OracleAS Infrastructure 10g (for example, iasinfra.mydomain.com).

9.4.3 What to Read Next

For information on how to manage your OracleAS Disaster Recovery environment, such as setting up Oracle Data Guard and configuring the OracleAS Metadata Repository database, see the Oracle Application Server 10g High Availability Guide.