Oracle® Application Server 10g High Availability Guide
10g (9.0.4) Part No. B10495-02
This chapter focuses on the high availability aspects of the Oracle Application Server 10g Infrastructure. It discusses the features and architectural solutions for high availability of the Infrastructure.
Oracle Application Server 10g provides a completely integrated infrastructure and framework for development and deployment of enterprise applications. An Oracle Application Server 10g Infrastructure installation type provides centralized product metadata, security and management services, and configuration information and data repositories for the Oracle Application Server 10g middle tier. By integrating the Infrastructure services required by the middle tier, time and effort required to develop enterprise applications are reduced. In turn, the total cost of developing and deploying these applications is reduced, and the deployed applications are more reliable.
The Oracle Application Server 10g Infrastructure provides the following overall services:
Oracle Application Server 10g Infrastructure stores all application server metadata required by Oracle Application Server 10g middle tier instances. This data is stored in an Oracle9i database, thereby leveraging the robustness of the database to provide a reliable, scalable, and easy-to-manage metadata repository.
The security service provides a consistent security model and identity management for all applications deployed on Oracle Application Server 10g. The service enables centralized authentication using single sign-on, Web-based administration through the Oracle Delegated Administration Services, and centralized storage of user authentication credentials. The Oracle Internet Directory is used as the underlying repository for this service.
This service is used by Distributed Configuration Management to manage and administer Oracle Application Server 10g middle tier instances and the Oracle Application Server 10g Infrastructure. It is also used to administer clustering services for the middle tier. Application Server Console reduces the total administrative cost by centralizing the management of deployed J2EE applications.
The Oracle Application Server 10g Infrastructure consists of several components that contribute to its role and function. These components work with each other to provide the Infrastructure’s product metadata, security, and management services. This section describes these Infrastructure components, which are:
Oracle Application Server Metadata Repository is an Oracle9i Enterprise Edition database server and stores component-specific information that is accessed by the Oracle Application Server middle tier or Infrastructure components as part of their application deployment. The end user or the client application does not access this data directly. For example, a Portal application on the middle tier accesses the Portal metadata as part of the Portal page assembly aggregation. Metadata also includes demo data for many Oracle Application Server components, such as data used by the Order Management Demo for BC4J.
Oracle Application Server metadata and customer or application data can co-exist in the Oracle Application Server Metadata Repository; the difference is in which applications are allowed to access them.
The Oracle Application Server Metadata Repository stores three main types of metadata corresponding to the three main Infrastructure services described in the section "Oracle Application Server 10g Infrastructure Overview". These types of metadata are:
Table 3–1 shows the Oracle Application Server components that store and use these types of metadata during application deployment.
Table 3-1 Metadata and Infrastructure Components
Oracle Application Server Metadata Repository (OracleAS Metadata Repository) is needed for all application deployments except for those using the J2EE and Web Cache installation type. Oracle Application Server provides three middle tier installation options:
J2EE and Web Cache: Installs Oracle HTTP Server, Oracle Application Server Containers for J2EE (OC4J), Oracle Application Server Web Cache (OracleAS Web Cache), Web Services, Oracle Business Components for Java (BC4J), and Application Server Console.
Portal and Wireless: Installs all components of J2EE and OracleAS Web Cache, plus UDDI, Oracle Application Server Portal (OracleAS Portal), Oracle Application Server Syndication Services (OracleAS Syndication Services), Oracle Ultra Search, and Oracle Application Server Wireless (OracleAS Wireless).
Business Intelligence and Forms: Installs all components of J2EE and OracleAS Web Cache, OracleAS Portal and Oracle Application Server 10g Wireless, plus Oracle Application Server Forms Services, Oracle Application Server Reports Services, Oracle Application Server Discoverer, and Oracle Application Server Personalization.
Integration components, such as Oracle Application Server ProcessConnect, Oracle Application Server InterConnect, and Oracle Workflow are installed on top of any of these middle tier install options.
The Distributed Configuration Management (DCM) component enables middle tier management, and stores its metadata in the OracleAS Metadata Repository for both the Portal and Wireless, and the Business Intelligence and Forms install options. For the J2EE and Web Cache installation type, by default, DCM uses a file-based repository. If you choose to associate the J2EE and Web Cache installation type with an Infrastructure, the file-based repository is moved into the OracleAS Metadata Repository.
See Also: Oracle Application Server 10g Installation Guide for Oracle Application Server 10g installation details.
The Oracle Identity Management framework in the Infrastructure includes the following components:
Oracle Internet Directory is Oracle’s implementation of a directory service using the Lightweight Directory Access Protocol (LDAP) version 3. It runs as an application on the Oracle9i database and utilizes the database’s high performance, scalability, and high availability.
Oracle Internet Directory provides a centralized repository for creating and managing users for the rest of the Oracle Application Server 10g components such as OC4J, Oracle Application Server 10g Portal, or Oracle Application Server 10g Wireless. Central management of user authorization and authentication enables users to be defined centrally in Oracle Internet Directory and shared across all Oracle Application Server 10g components.
Oracle Internet Directory is provided with a Java-based management tool (Oracle Directory Manager), a Web-based administration tool (Oracle Delegated Administration Services) for trusted proxy-based administration, and several command-line tools. Oracle Delegated Administration Services provide a means for delegated administrators who are not the Oracle Internet Directory administrator to provision end users in the Oracle Application Server 10g environment. They also allow end users to modify their own attributes.
Oracle Internet Directory also enables Oracle Application Server 10g components to synchronize data about user and group events, so that those components can update any user information stored in their local application instances.
OracleAS Single Sign-On is a multi-part environment, made up of both middle tier and database functions, that allows a single user authentication across partner applications. An application can become a partner application either by using the Single Sign-On SDK (SSOSDK) or via the Apache mod_osso module, which allows Apache-served URLs to be made partner applications.
OracleAS Single Sign-On is fully integrated with Oracle Internet Directory, which stores user information. It supports LDAP-based user and password management through Oracle Internet Directory.
OracleAS Single Sign-On supports Public Key Infrastructure (PKI) client authentication, which enables PKI authentication to a wide range of Web applications. Additionally, it supports the use of X.509 digital client certificates and Kerberos Security Tickets for user authentication.
By means of an API, OracleAS Single Sign-On can integrate with third-party authentication mechanisms such as Netegrity SiteMinder.
See Also: Oracle Application Server Single Sign-On Administrator's Guide. (This guide also includes Identity Management replication instructions.)
The Infrastructure installation type installs Oracle HTTP Server for the Infrastructure. This is used to service requests from other distributed components of the Infrastructure and middle tier instances. In the Infrastructure, Oracle HTTP Server services requests for OracleAS Single Sign-On and Oracle Delegated Administration Services. The latter is implemented as a servlet in an OC4J process in the Infrastructure.
OC4J is installed in the Infrastructure to run Oracle Delegated Administration Services and OracleAS Single Sign-On; the former runs as a servlet in OC4J.
Oracle Delegated Administration Services provide a self-service console (for end users and application administrators) that can be customized to support third-party applications. In addition, they provide a number of services for building customized administration interfaces that manipulate Oracle Internet Directory data. Oracle Delegated Administration Services are a component of Oracle Identity Management.
See Also: Oracle Internet Directory Administrator's Guide for more information about Oracle Delegated Administration Services.
Oracle Enterprise Manager - Application Server Console (Application Server Console) provides a Web-based interface for managing Oracle Application Server components and applications. Using the Oracle Application Server Console, you can do the following:
monitor Oracle Application Server components, Oracle Application Server middle tier and Infrastructure instances, OracleAS Clusters, and deployed J2EE applications and their components
configure Oracle Application Server components, instances, OracleAS Clusters, and deployed applications
operate OracleAS components, instances, OracleAS Clusters, and deployed applications
manage security for OracleAS components and deployed applications
For more information on Oracle Enterprise Manager and its two frameworks, see Oracle Enterprise Manager Concepts.
See Also: Oracle Application Server 10g Administrator's Guide - provides descriptions of Application Server Console and instructions on how to use it.
As described earlier, the Oracle Application Server 10g Infrastructure provides the following services:
product metadata
security service
management service
From an availability standpoint, these services are provided by a set of components that must all be available to guarantee availability of the Infrastructure.
For the Infrastructure to provide all essential services, all of the above components must be available. On UNIX platforms, this means that the processes associated with these components must be up and active. Any high availability solution must be able to detect and recover from any software failures of any of the processes associated with the Infrastructure components. It must also be able to detect and recover from any hardware failures on the hosts that are running the Infrastructure.
In Oracle Application Server 10g, all of the Infrastructure processes, except the database, its listener, and Application Server Console, are started, managed, and restarted by the Oracle Process Management and Notification (OPMN) framework. This means that any failure of an OPMN-managed process is handled internally by OPMN. OPMN is automatically installed and configured at install time. However, database process failures and database listener failures are not handled by OPMN. Also, failure of any OPMN process leaves the Infrastructure in a non-resilient mode if the failure is not detected and appropriate recovery steps are taken.
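In practice, the state of the OPMN-managed processes is inspected and controlled with the opmnctl command-line utility. The commands below are an illustrative sketch; the exact ias-component names depend on the installation.

```shell
# Show the status of all OPMN-managed Infrastructure processes
# (Oracle Internet Directory, Oracle HTTP Server, OC4J instances).
opmnctl status

# Start or stop everything OPMN manages on this node.
opmnctl startall
opmnctl stopall

# Restart a single component; "HTTP_Server" is a typical but
# installation-dependent ias-component name.
opmnctl restartproc ias-component=HTTP_Server
```

These commands require an installed Infrastructure; they do not cover the database, its listener, or Application Server Console, which OPMN does not manage.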
OracleAS provides two solutions for intrasite high availability of the Infrastructure: OracleAS Cold Failover Cluster and OracleAS Active Failover Cluster.
These intrasite high availability solutions provide protection from local hardware and software failures that cannot be detected and recovered by OPMN. Examples of such failures are a system panic or node crash. These solutions, however, cannot protect the Infrastructure from site failures or media failures, which result in damage to or loss of data.
Oracle Application Server 10g provides a disaster recovery solution to protect against disasters and site failures. This solution is described in Chapter 6, "Oracle Application Server Disaster Recovery ".
A site failure or disaster will most likely affect all the systems including middle tiers, Infrastructure, and backend databases. Hence, the disaster recovery solution also provides mechanisms to protect the middle tier and the Infrastructure database.
In short, the intrasite high availability solutions, OracleAS Cold Failover Cluster and OracleAS Active Failover Cluster, provide resilience for only the OracleAS Infrastructure from local hardware and software failures. The middle tier can continue to function with a resilient Infrastructure. The disaster recovery solution, on the other hand, deals with a complete site failure, which requires failing over not only the Infrastructure but also the middle tier. The intrasite high availability solutions for the Infrastructure are discussed in the following sections.
The Oracle Application Server Cold Failover Clusters (OracleAS Cold Failover Cluster) solution for the Infrastructure uses a two node hardware cluster as depicted in Figure 3-1, "Normal operation of OracleAS Cold Failover Cluster solution" below.
For the purpose of describing the solution, it is important to clarify the following terminology within the context of the OracleAS Cold Failover Cluster solution.
A cluster, generically defined, is a collection of loosely coupled computers (called nodes) that provides a single view of network services (for example, an IP address) or application services (for example, databases or web servers) to clients of these services. Each node in a cluster is a standalone server that runs its own processes. These processes can communicate with one another to form what looks like a single system that cooperatively provides applications, system resources, and data to users. This type of clustering offers several advantages over traditional single server systems for highly available and scalable applications.
Hardware clusters are clusters that achieve high availability and scalability through the use of additional hardware (cluster interconnect, shared storage) and software (health monitors, resource monitors). (The cluster interconnect is a private link used by the hardware cluster for heartbeat information to detect node death.) Due to the need for additional hardware and software, hardware clusters are commonly provided by hardware vendors such as SUN, HP, IBM, and Dell. While the number of nodes that can be configured in a hardware cluster is vendor dependent, for the purpose of Oracle Application Server 10g Infrastructure High Availability using the Oracle Application Server Cold Failover Clusters solution, only two nodes are required. Hence, this document assumes a two-node hardware cluster for that solution.
Failover is the process by which the hardware cluster automatically relocates the execution of an application from a failed node to a designated standby node. When a failover occurs, clients may see a brief interruption in service and may need to reconnect after the failover operation has completed. However, clients are not aware of the physical server from which they are provided the application and data. The hardware cluster’s software provides the APIs to automatically start, stop, monitor, and failover applications between the two nodes of the hardware cluster.
The primary node is the node that actively executes one or more Infrastructure installations at any given time. If this node fails, the hardware cluster automatically fails the Infrastructure over to the secondary node. Since the primary node runs the active Infrastructure installation(s), it is considered the "hot" node.
The secondary node takes over execution of the Infrastructure if the primary node fails. Since the secondary node does not originally run the Infrastructure, it is considered the "cold" node. And, because the application fails over from a hot node (primary) to a cold node (secondary), this type of failover is called cold failover.
To present a single system view of the cluster to network clients, hardware clusters use what is called a logical or virtual IP address. This is a dynamic IP address that is presented to the outside world as the entry point into the cluster. The hardware cluster’s software manages the movement of this IP address between the two physical nodes of the cluster while the external clients connect to this IP address without the need to know which physical node this IP address is currently active on. In a typical two-node cluster configuration, each physical node has its own physical IP address and hostname, while there could be several logical IP addresses, which float or migrate between the two nodes. For a given OracleAS Infrastructure installation, the logical IP/virtual name associated with that installation is the IP/name that is used by the clients to connect to the Infrastructure. Refer to the Oracle Application Server 10g Installation Guide for more information on the installation process.
The virtual hostname is the hostname associated with the logical or virtual IP. This is the name that is chosen to give the OracleAS middle tier a single system view of the hardware cluster. This name-IP entry must be added to the DNS that the site uses, so that the middle tier nodes can associate with the Infrastructure without having to add this entry into their local /etc/hosts (or equivalent) file. For example, if the two physical hostnames of the hardware cluster are node1.mycompany.com and node2.mycompany.com, the single view of this cluster can be provided by the name selfservice.mycompany.com. In the DNS, selfservice maps to the logical IP address of the Infrastructure, which floats between node1 and node2 without the middle tier knowing which physical node is active and servicing the requests.
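As a concrete sketch, the mapping might look like the following hosts-file style entries. Only 144.25.245.1 appears in this chapter's examples; the physical node addresses shown are hypothetical.

```
# Logical entry point used by the middle tier; the hardware cluster
# software moves this address between the two physical nodes on failover.
144.25.245.1    selfservice.mycompany.com

# Fixed physical addresses of the cluster nodes (hypothetical values).
144.25.245.101  node1.mycompany.com
144.25.245.102  node2.mycompany.com
```

In a real deployment the selfservice entry lives in the site DNS, not in each middle tier node's local hosts file.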
Note: Whenever the phrase "virtual name" is used in this document, it is assumed to be associated with the logical IP address. In cases where just the IP address is needed or used, it will be explicitly stated so.
Even though each hardware cluster node is a standalone server that runs its own set of processes, the storage subsystem required for any cluster-aware purpose is usually shared. Shared storage refers to the ability of the cluster to access the same storage, usually disks, from both nodes. While the nodes have equal access to the storage, only one node, the primary node, has active access to the storage at any given time. The hardware cluster's software grants the secondary node access to this storage if the primary node fails. For the OracleAS Infrastructure, its ORACLE_HOME is on such a shared storage file system. This file system is mounted by the primary node; if that node fails, the secondary node takes over and mounts the file system. In some cases, the primary node may relinquish control of the shared storage, such as when the hardware cluster's software deems the Infrastructure as unusable from the primary node and decides to move it to the secondary node.
Figure 3-1 shows the layout of the two-node cluster for the OracleAS Cold Failover Cluster high availability solution. The two nodes are attached to shared storage. For illustration purposes, a virtual/logical IP address of 144.25.245.1 is active on physical Node 1. Hence, Node 1 is the primary or active node. The virtual name selfservice.mycompany.com is mapped to this virtual IP address, and the middle tier associates the Infrastructure with selfservice.mycompany.com.
Figure 3-1 Normal operation of OracleAS Cold Failover Cluster solution
In normal operating mode, the hardware cluster’s software enables the virtual IP 144.25.245.1 on physical Node 1 and starts all Infrastructure processes (database, database listener, Oracle Enterprise Manager process, and OPMN) on that node. OPMN then starts, monitors, and restarts, if necessary, any of the following failed Infrastructure processes: Oracle Internet Directory, OC4J instances, and Oracle HTTP Server.
If the primary node fails, the virtual IP address 144.25.245.1 is manually enabled on the secondary node (Figure 3-2). All the Infrastructure processes are then started on the secondary node. The middle tier processes accessing the Infrastructure will see a temporary loss of service as the virtual IP and the shared storage are moved over and the database, database listener, and all other Infrastructure processes are started. Once the processes are up, middle tier processes that were retrying during this time are reconnected. New connections are not aware that a failover has occurred.
While the hardware cluster framework can start, monitor, detect, restart, or fail over Infrastructure processes, these actions are not automatic and involve some scripting or simple programming. Required scripts are described in Chapter 5, "Managing Infrastructure High Availability".
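As a rough sketch of what such scripting involves, the start action invoked by the cluster framework on the surviving node typically performs steps like the following. The path and the exact commands are assumptions; the authoritative scripts are those described in Chapter 5.

```shell
#!/bin/sh
# Illustrative only -- the ORACLE_HOME path is hypothetical and must point
# at the Infrastructure home on the shared storage.
ORACLE_HOME=/mnt/shared/infra; export ORACLE_HOME

# 1. Start the database listener and the Metadata Repository database.
$ORACLE_HOME/bin/lsnrctl start
echo "startup" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba"

# 2. Start OPMN, which in turn starts Oracle Internet Directory,
#    Oracle HTTP Server, and the OC4J instances.
$ORACLE_HOME/opmn/bin/opmnctl startall

# 3. Start Application Server Console, which OPMN does not manage.
$ORACLE_HOME/bin/emctl start iasconsole
```

A matching stop script runs the same steps in reverse order so the cluster framework can relocate the virtual IP and shared storage cleanly.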
For information on setting up and operating the OracleAS Cold Failover Cluster solution for the Infrastructure, see Oracle Application Server 10g Installation Guide. This guide covers pre-installation and installation tasks.
The OracleAS Cold Failover Cluster solution consists of a two-node cluster accessing a shared disk (see Figure 3-3) that contains the Infrastructure's data files. At any point in time, only one node is active. During normal operation, the second node is on standby. OracleAS middle tier components access the cluster through a virtual hostname that is mapped to a virtual IP in the subnet. In the example in Figure 3-3, the virtual hostname selfservice.mycompany.com and virtual IP 144.25.245.1 are used. When a failover occurs from node 1 to node 2, the virtual hostname and IP are moved to the standby node, which then becomes the active node. The failure of the active node is transparent to the OracleAS middle tier components.
Note: Only static IP addresses can be used in the OracleAS Cold Failover Cluster solution for Windows.
Figure 3-3 Oracle Application Server Cold Failover Clusters solution for Windows
The concepts explained in the previous section (OracleAS Cold Failover Cluster for UNIX) are also applicable for the Windows OracleAS Cold Failover Cluster solution, which uses Microsoft Cluster Server software for managing high availability for the hardware cluster. Additionally, Oracle Fail Safe is used in conjunction with Microsoft Cluster Server to configure the following components:
virtual hostname and IP
OracleAS Infrastructure database
Oracle Process Management and Notification service
Application Server Console
Central to the Windows OracleAS Cold Failover Cluster solution is the concept of resource groups. A group is a collection of resources defined through Oracle Fail Safe. During failover from the active node to the standby node, the group, and hence, the resources in it, failover as a unit. During installation and configuration of the OracleAS Cold Failover Cluster, a single group is defined for the solution. This group consists of the following:
virtual IP for the cluster
virtual hostname for the cluster
shared disk
Infrastructure database
TNS listener for the database
OPMN
Application Server Console
The integration of Oracle Fail Safe and Microsoft Cluster Server provides an easy-to-manage environment and automatic failover functionality in the OracleAS Cold Failover Cluster solution. The Infrastructure database, its TNS listener, and OPMN are installed as Windows services and are monitored by Oracle Fail Safe and Microsoft Cluster Server. Upon failure of any of these Windows services, Microsoft Cluster Server will try to restart the service three times (the default setting) before failing the group over to the standby node. Additionally, OPMN monitors, starts, and restarts the Oracle Internet Directory, OC4J, and Oracle HTTP Server processes.
See Also: Oracle Application Server 10g Installation Guide for details on the installation process and requirements.
The OracleAS middle tier can also be installed on the same node(s) as the OracleAS Cold Failover Cluster solution (see Figure 3-4). If the OracleAS middle tier is installed on both nodes of the OracleAS Cold Failover Cluster, both middle tier installations are concurrently active and servicing requests, while the Infrastructure is active on only one of the nodes. Figure 3-4 depicts this setup.
Figure 3-4 OracleAS Middle Tier on same nodes as OracleAS Cold Failover Cluster solution
This setup has the following characteristics:
The middle tiers are installed on local storage, and a load balancer should be available in front of them to load balance between them.
The middle tiers do not benefit from the failover capabilities of the hardware cluster system or OracleAS Cold Failover Cluster solution. They have their own ways of achieving high availability, as discussed in Chapter 2 and Chapter 4 of this guide.
The middle tier instances (J2EE and Web Cache installation type) can be grouped together to form an Oracle Application Server 10g Cluster, benefiting from the high availability attributes of Oracle Application Server 10g Clusters as described in Chapter 2.
On each node, port conflicts between the middle tier and the Infrastructure must be avoided: port numbers used by the middle tier must be different from those used by the Infrastructure. Conflicts can be avoided at installation time using the staticports.ini file. See Oracle Application Server 10g Installation Guide.
If the Infrastructure on a node experiences a software failure, the middle tier on the same node may still be serviceable. This can be true even after the Infrastructure fails over to the standby node in the OracleAS Cold Failover Cluster.
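A quick way to guard against the port conflicts described above is to compare the planned port lists before committing them to the two staticports.ini files. A minimal shell sketch; all port values here are hypothetical:

```shell
# Hypothetical port choices for the Infrastructure and the middle tier
# installations that will share the same node.
infra_ports="7777 4443 1521 389 1810"
mt_ports="7778 7779 8888 4444"

# Flag any port that appears in both lists.
conflicts=""
for p in $mt_ports; do
  case " $infra_ports " in
    *" $p "*) conflicts="$conflicts $p" ;;
  esac
done

if [ -z "$conflicts" ]; then
  echo "no port conflicts"
else
  echo "conflicting ports:$conflicts"
fi
```

With the values above the two lists are disjoint, so the check prints "no port conflicts".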
See Also: Oracle Application Server 10g Installation Guide
Note: Check OracleMetaLink (http://metalink.oracle.com) for the most current certification status of this feature, or consult your Oracle sales representative before deploying this feature in a production environment.
Oracle Application Server Active Failover Cluster (OracleAS Active Failover Cluster) provides a robust cluster architecture for the Infrastructure. It provides a more transparent high availability solution than the OracleAS Cold Failover Cluster solution. Because the nodes in the OracleAS Active Failover Cluster solution are all active, failover from one node to another is quick and requires no manual intervention. The active-active setup also provides scalability to the Infrastructure deployed on it. Figure 3-5 depicts the overall architecture of the solution.
Figure 3-5 OracleAS Active Failover Cluster high availability solution
In this solution, the Infrastructure software is installed identically on each node of a hardware cluster that is running OracleAS Active Failover Cluster technology. Each node has a local copy of the Infrastructure software (including Oracle Identity Management software) and an instance of the database. The database files are installed in shared storage accessible by all nodes. The database instances open the database concurrently for read/write operations. The Infrastructure configuration files that are not in the database but in the file system are local to each node. These files contain node-specific configuration information.
The cluster is front-ended by a load balancer appliance. Oracle recommends that this load balancer be deployed in a fault-tolerant mode to maintain availability in case of load balancer failure. The load balancer appliance is used to direct non-Oracle Net traffic, such as HTTP, HTTPS, and LDAP requests, from the middle tier to the Infrastructure. The load balancer is configured to direct requests from the middle tier to any of the active Infrastructure nodes.
Oracle Net traffic from the middle tier does not go through the load balancer. This traffic is directed to the Infrastructure nodes via connect descriptors with multiple addresses in the address list. The address list is used to load balance certain Oracle Net traffic across the Infrastructure nodes. Oracle Net traffic includes that initiated by:
JDBC thin
dblinks using connect strings or tnsAlias
tnsAlias-based access such as DADs (Database Access Descriptors)
connect descriptor-based access
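For example, a connect descriptor of the following form lets Oracle Net clients spread connections across the Infrastructure nodes and fail over to another address if one is unreachable. The alias, host names, and service name are illustrative:

```
INFRA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (FAILOVER = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1.mycompany.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2.mycompany.com)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = infra.mycompany.com)
    )
  )
```

LOAD_BALANCE causes the client to pick addresses from the list at random, and FAILOVER causes it to try the next address when a connection attempt fails.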
The OracleAS Active Failover Cluster high availability solution enables failover for failure of a whole node as well as failure of individual components of the node such as the database instance and Oracle Internet Directory.
The following considerations apply to this solution:
The OracleAS Active Failover Cluster is deployed on a hardware cluster.
All nodes in the cluster are peers in the following ways:
They run the same version of the operating system.
They have the same version or compatible versions of all software, such as the Java runtime.
The ORACLE_HOME path and structure are the same on all nodes of the cluster.
The ORACLE_SID is unique for each database instance on each node.
The database service name is common to all database instances.
Port numbers for Infrastructure components are identical on all nodes.
Infrastructure components are in one cluster (not asymmetrically distributed) and in a single OracleAS Active Failover Cluster database. All nodes have identical configuration of Infrastructure components (OPMN, Oracle HTTP Server, OracleAS Single Sign-On, Oracle Enterprise Manager process, Oracle Internet Directory LDAP server, and Oracle Delegated Administration Services).
For information on setting up and operating the OracleAS Active Failover Cluster high availability solution for the Infrastructure, see Chapter 5, "Managing Infrastructure High Availability". The pre-installation and installation tasks for this high availability solution are provided in detail in Oracle Application Server 10g Installation Guide.
In order for an OracleAS Active Failover Cluster to service Oracle Internet Directory LDAP and HTTP (for OracleAS Single Sign-On and Oracle Delegated Administration Services) requests, a load balancer is required for the OracleAS Active Failover Cluster configuration. The hostname of the load balancer virtual server is exposed as the hostname of the Infrastructure for these requests. This section describes the configuration requirements for the load balancer for the default installation of OracleAS Active Failover Cluster.
For high availability, the following is recommended:
The load balancer should be deployed in a fault tolerant configuration. Two load balancers should be used. These fault tolerant load balancers should be identical in terms of their configuration and capacity. Their failover should be automatic and seamless from the middle tier’s standpoint.
The load balancer type used should be able to handle both HTTP and LDAP traffic in the default OracleAS Active Failover Cluster configuration described in this chapter. Any load balancing mechanism that supports only one of the protocols (for example, OracleAS Web Cache for HTTP) cannot be used in the default configuration.
The load balancer should be accessible from all nodes of the OracleAS Active Failover Cluster deployment.
The load balancer should be accessible from all machines that need to access the Infrastructure.
The load balancer should not drop idle connections. Any timeout that causes idle connections to be dropped should be eliminated.
Two load balancer parameters are of primary importance for the OracleAS Active Failover Cluster configuration:
The nodes to which the load balancer directs traffic.
The persistence setting of the load balancer.
The recommended settings for these two parameters are provided in Table 3-2. Load balancers come in many flavors, and each may have its own configuration mechanism. Consult your load balancer's documentation for specific instructions to achieve these configurations.
Table 3-2 Recommended settings for load balancer
| Deployment Phase | Traffic Redirection Setting | Persistence Setting |
|---|---|---|
| OracleAS Active Failover Cluster installation | | NA |
| OracleAS Active Failover Cluster normal operations | | Session level persistence should be configured for LDAP and HTTP traffic. |
| OracleAS Active Failover Cluster node or process is brought down | | Session level persistence should be configured for LDAP and HTTP traffic. |
| Middle tier association | | Session level persistence should be configured for LDAP and HTTP requests. |
The persistence mechanism used should provide session level stickiness. By default, HTTP and Oracle Internet Directory requests both use the same virtual host address configured for the load balancer. Hence, the persistence mechanism used is available for both kinds of requests.
If the load balancer allows different persistence mechanisms to be configured for different server ports (LDAP and HTTP) on the same virtual server, then this is the recommended strategy. In this case, cookie-based persistence with a session-level timeout is more suitable for the HTTP traffic. No persistence setting is required for the LDAP traffic.
If the load balancer does not allow different persistence mechanisms to be specified for LDAP and HTTP, then the timeout value for session level stickiness should be configured based on the requirements of the deployed application. The timeout value should not be too high, because a high value increases the chance that traffic from a given middle tier instance is always directed to the same node of the OracleAS Active Failover Cluster. Conversely, if the timeout is too low, the chance of a session timeout occurring for longer-running operations that access the Infrastructure is higher.
The recommended default stickiness timeout is 60 seconds. This should be adjusted based on the nature of the deployment and the load balancing achieved across the OracleAS Active Failover Cluster nodes: increase it if Oracle Delegated Administration Services users experience session timeouts, and decrease it if load is not evenly balanced across the nodes.
Both LDAP and HTTP traffic should be tested after the load balancer is configured. This should be done from a machine outside the OracleAS Active Failover Cluster. The tests should have the following coverage:
Access and test the Oracle Delegated Administration Services URL to test HTTP requests.
Access and test the OracleAS Single Sign-On URL to test HTTP requests.
Access and test Oracle Internet Directory by running a few ldapsearch commands for LDAP requests.
The request types above should be directed to different nodes of the OracleAS Active Failover Cluster. The desired operations should complete successfully for the tests to be considered successful.
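As a sketch, such checks might look like the following commands run from a client machine. The virtual server name, ports, and URL paths are examples; confirm the actual values for your installation.

```shell
# LDAP: a base-level search through the load balancer's LDAP virtual
# server port.
ldapsearch -h selfservice.mycompany.com -p 389 -b "" -s base "objectclass=*"

# HTTP: fetch the Oracle Delegated Administration Services and
# OracleAS Single Sign-On pages through the HTTP virtual server port.
wget -O - http://selfservice.mycompany.com:7777/oiddas
wget -O - http://selfservice.mycompany.com:7777/pls/orasso
```

Repeating these commands while individual nodes are taken down verifies that the load balancer redirects the traffic to the surviving nodes.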