Oracle Application Server 10g High Availability Guide, 10g (9.0.4), Part Number B10495-01
This chapter describes how to perform configuration changes and ongoing maintenance of OracleAS Clusters. Because managing individual nodes involves some complexity, an OracleAS Cluster provides the ability to manage the nodes as a single entity, thereby reducing management complexity. Instructions are provided for managing and configuring OracleAS Clusters using Oracle Enterprise Manager Application Server Control (Application Server Control) and, where required, using the dcmctl command line utility.
This chapter covers the following topics:
Oracle Application Server supports different clustering configuration options to support high availability in the Oracle Application Server middle tier. OracleAS Clusters provide distributed configuration information and let multiple Oracle Application Server instances work together and behave as a single system to external clients. When configured to use redundant components throughout, OracleAS Clusters support a highly available system in which to deploy and run applications with no single point of failure.
This section covers the following topics:
When administering an OracleAS Cluster that is managed using a repository, an administrator uses either Application Server Control or dcmctl commands to manage and configure common configuration information. The Oracle Application Server manageability components then replicate the common configuration information across all Oracle Application Server instances within the cluster. In OracleAS Clusters, the common configuration information for the cluster is called the cluster-wide configuration.
Each application server instance in an OracleAS Cluster has the same base configuration. The base configuration contains the cluster-wide configuration and excludes instance-specific parameters.
This section covers the following:
Oracle Application Server Clusters managed using a database repository use an Oracle9i database to store configuration information and metadata, including both cluster-wide configuration information and instance-specific parameters.
Using a database repository protects configuration information by storing it in the database. Combining the database with Oracle Application Server high availability solutions both protects the configuration information and allows you to continue operations after system failures.
Oracle Application Server Clusters managed using a file-based repository use the file system to store configuration information, including both cluster-wide configuration information and instance-specific parameters. Using Oracle Application Server Clusters managed using a file-based repository does not present a single point of failure; the remaining Oracle Application Server instances within a cluster are available to service client requests when one or more Oracle Application Server instances are down.
Configuring and managing Oracle Application Server Clusters managed using a file-based repository requires that the administrator set up a Farm and perform certain configuration tasks using the dcmctl command line utility.
Figure 4-1 shows the cluster configuration hierarchy, starting with an Oracle Application Server top-level Farm for an OracleAS Cluster. This figure applies to both types of OracleAS Clusters: those managed using a file-based repository and those managed using a database repository.
Figure 4-1 shows the OracleAS Clusters configuration hierarchy, including the following:
Manually configured OracleAS Clusters store configuration information in local configuration files and do not use either a database repository or a file-based repository. In a manually configured cluster, it is the administrator's responsibility to synchronize the configuration of the Oracle Application Server instances that are part of the cluster.
In an OracleAS Web Cache cluster, multiple instances of OracleAS Web Cache operate as one logical cache to provide high availability. Each OracleAS Web Cache in the cluster is called a cache cluster member. A cache cluster can consist of two or more members. The cache cluster members communicate with one another to request cacheable content that is cached by another cache cluster member and to detect when a cache cluster member fails. When a cache cluster member detects the failure of another cluster member, the remaining cache cluster members automatically take over ownership of the content of the failing member. When the failed cache cluster member can be reached again, OracleAS Web Cache reassigns ownership of the content.
See Also: Oracle Application Server Web Cache Administrator's Guide for information on OracleAS Web Cache clustering and configuring an OracleAS Web Cache cluster.
This section describes how to create and use an OracleAS Cluster. The information in this section applies both to Oracle Application Server Clusters managed using a database repository and to those managed using a file-based repository.
This section covers the following topics:
See Also: Distributed Configuration Management Reference Guide for information on dcmctl commands
The collection of Oracle Application Server instances within a single repository, either a database repository or a file-based repository, is known as a Farm. When an Oracle Application Server instance is part of a Farm, you can view a list of all application server instances that are part of the Farm when you start Application Server Control. The application server instances shown in the Standalone Instances area on the Application Server Control Farm Home Page are available to be added to a cluster.
This section covers the following:
If you have not already done so during the Oracle Application Server installation process, you can associate an application server instance with a Farm using one of the following techniques:
For a Farm that uses a database repository, do the following to add an application server instance to the Farm:
For a Farm that is managed using a file-based repository, you need to use the dcmctl joinFarm command to add a standalone application server instance to the Farm.
You create a new OracleAS Cluster using the Application Server Control Farm Home Page. Application Server Control only shows the Farm Home Page when an Oracle Application Server instance is part of a Farm.
From the Farm Home page, create a new OracleAS Cluster as follows:
Application Server Control displays the Create Cluster page.
Figure 4-2 shows the Create Cluster page.
A confirmation page appears.
After you create a new OracleAS Cluster, the Farm Home page shows the cluster in the Clusters area. A newly created cluster is empty and does not include any application server instances; use the Join Cluster button on the Farm Home page to add application server instances to the cluster.
Figure 4-3 shows the Application Server Control Farm Home Page, including two clusters, cluster1 and cluster2.
Table 4-1 lists the cluster control options available on the Farm Home Page.
Oracle Application Server replicates cluster-wide configuration within an OracleAS Cluster. This applies whether the cluster contains only one application server instance or many application server instances. To provide high availability for the Oracle Application Server middle tier using an OracleAS Cluster, a cluster needs to contain at least two application server instances.
This section covers the following topics:
See Also: Distributed Configuration Management Reference Guide for information on dcmctl commands
To add an application server instance to a cluster:
Figure 4-4 shows the Join Cluster page.
Repeat these steps for each additional standalone application server instance you want to join the cluster.
Note the following when adding application server instances to an OracleAS Cluster:
You can also add an instance to a cluster using the dcmctl joinCluster command.

Use the dcmctl isClusterable command to test whether an application server instance is clusterable; a minimal command sketch follows this list. If the application server instance is not clusterable, then Application Server Control returns an error when you attempt to add the instance to a cluster.
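For example (the -cl flag naming the target cluster is an assumption; see the Distributed Configuration Management Reference Guide for the exact joinCluster syntax):

% dcmctl isClusterable
% dcmctl joinCluster -cl cluster1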
To remove an application server instance from a cluster, do the following:
To remove multiple application server instances from a cluster, repeat these steps for each instance.
Note the following when removing application server instances from an OracleAS Cluster:
The dcmctl leaveCluster command removes one application server instance from the cluster at each invocation.
You can create OracleAS Clusters that do not depend on the database to store cluster-wide configuration and management information. Using a file-based repository, cluster-wide configuration information and related metadata are stored on the file system of an Oracle Application Server instance that is the repository host (host). Oracle Application Server instances that are part of a Farm that uses a file-based repository depend on the repository host to store cluster-wide configuration information. After creating a Farm that includes Oracle Application Server instances managed using a file-based repository, you can create OracleAS Clusters.
This section covers the following topics:
This section describes how to create a Farm that uses a file-based repository and covers the following:
After a Farm is created that includes Oracle Application Server instances managed using a file-based repository, you can create OracleAS Clusters using either Application Server Control or dcmctl commands.
To create a file-based repository, you need to start with a standalone application server instance, that is, an instance that is not associated with a Farm. To verify that the Oracle Application Server instance that you want to use as the repository host for a file-based repository is a standalone instance, issue the following command:
% dcmctl whichFarm
This command returns the following when an instance is not associated with any Farm:
Standalone OracleAS instance
Table 4-2 shows sample output from whichFarm. When an instance is not a standalone instance, whichFarm returns information showing that the instance is part of a Farm.
When the instance that you want to use with a file-based repository is part of an existing Farm, you need to first leave the Farm before you can initialize a file-based repository.
Use the leaveFarm command to leave the Farm as follows:
% dcmctl leaveFarm
After you leave the Farm, whichFarm returns the following:
% dcmctl whichFarm
Standalone OracleAS instance
There are restrictions on leaving a Farm using dcmctl leaveFarm, including the following:
If you run dcmctl leaveFarm on an Oracle Application Server Infrastructure system, dcmctl reports an error unless the Infrastructure system is the only Oracle Application Server instance that is part of the Farm.
The dcmctl leaveFarm command stops all the Oracle Application Server components running on the Oracle Application Server instance.
You cannot run leaveFarm on an Oracle Application Server Infrastructure system that serves as the repository for any Oracle Application Server instances other than itself. To run leaveFarm on the Oracle Application Server Infrastructure, you must first go to the other Oracle Application Server instances and run leaveFarm on those instances.
After selecting the Oracle Application Server instance to be the repository host for the file-based repository, do the following to create a Farm and initialize the file-based repository on the repository host instance:
dcmctl getRepositoryId
dcmctl joinFarm -r <repositoryID>
where repositoryID is the value returned from the previous step. The dcmctl joinFarm command sets up the repository host instance and initializes the Farm managed using a file-based repository; Oracle Application Server stores the Farm's configuration information in a file-based repository on the repository host instance.
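For example, a minimal sketch of initializing the repository host (the host name and port shown are hypothetical; getRepositoryId returns an identifier in hostname:port form):

% dcmctl getRepositoryId
repohost.mycompany.com:7101
% dcmctl joinFarm -r repohost.mycompany.com:7101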
After selecting the repository host instance for the file-based repository and initializing the file-based repository, do the following to add additional application server instances to the Farm:
dcmctl getRepositoryId
To obtain the repository ID for the repository host instance, you can issue the getRepositoryId command on any system that is part of the Farm you want to join (that is, if another instance uses the same repository host instance, you can run dcmctl getRepositoryId on that system).
dcmctl joinFarm -r repositoryID
where the repositoryID you specify is the value returned in step 1.
See Also: Distributed Configuration Management Reference Guide for information on dcmctl commands
This section covers the following topics:
Once a Farm is set up that is managed using a file-based repository, you can use Application Server Control or dcmctl commands to create and manage OracleAS Clusters within the Farm, and you can configure standalone instances within the Farm to join a cluster.
An important consideration for using OracleAS Clusters with a file-based repository is determining which Oracle Application Server instance is the repository host.
Consider the following when selecting the repository host for a file-based repository:
Oracle Application Server provides commands to save a file-based repository so that it is not permanently lost when the repository host instance goes down or its file system is damaged. Using the exportRepository command, you can save the entire file-based repository. After saving the configuration information with exportRepository, you can use the importRepository command to restore the saved configuration information to the repository host instance or to a different instance in the Farm.
To export the repository from the repository host instance, do the following:
dcmctl exportRepository -file file_name
To import a saved file-based repository, on the system that is to be the repository host instance for the Farm, do the following:
dcmctl importRepository -file file_name
The file_name is a previously saved file that was created using the dcmctl exportRepository command. When the file-based repository is restored to a different Oracle Application Server instance, the instance where the importRepository command runs becomes the new repository host instance.
To specify that the Oracle Application Server instance that was the old repository host instance for a file-based repository is no longer the repository host instance, issue the following command:
dcmctl repositoryRelocated
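For example, a minimal sketch of saving the repository on the current host and restoring it on a new host (the backup file path is hypothetical):

% dcmctl exportRepository -file /backup/farm_repository.dcm
% dcmctl importRepository -file /backup/farm_repository.dcm

Run the export on the current repository host, transfer the file, and run the import on the instance that is to become the new repository host.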
If you have an Oracle Application Server instance joined to a Farm that uses a file-based repository, you can move that instance to another repository, whether it is a file-based or a database-based repository. Moving to another repository type involves leaving the current Farm and joining another Farm.
When an OracleAS instance leaves a Farm, it essentially becomes a standalone instance. The instance's DCM-managed configuration metadata in the repository is moved to the instance. Any archives for the instance are also deleted. However, connections to the Infrastructure database that may exist for other components (Oracle Application Server Single Sign-On, JAAS, and Oracle Internet Directory) are not removed.
To leave a Farm, execute the following command at the OracleAS instance:
dcmctl leaveFarm
Note: After executing the dcmctl leaveFarm command, all Oracle Application Server components running on the instance are stopped.
The following sections provide instructions to move an OracleAS instance from a file-based repository to other repositories:
When moving an OracleAS instance from a file-based repository to a database-based repository, you must first disassociate the instance from its current repository by leaving the repository's Farm. The instance then becomes a standalone instance at which point you can join it to the Farm of a database-based repository. The following instructions tell you how to perform these tasks:
1. Run dcmctl whichFarm to verify which repository the instance is currently using.

2. Run the dcmctl leaveFarm command to bring the instance to a standalone state.

3. Run dcmctl joinFarm to join the instance to the Farm of the database-based repository.
To join the instance to another Farm that uses a file-based repository, use the dcmctl joinFarm command together with the file-based repository's ID. At the command line of the OracleAS instance:
1. Run dcmctl whichFarm to verify which repository the instance is currently using.

2. Run the dcmctl leaveFarm command to bring the instance to a standalone state.

3. Run dcmctl getRepositoryId on an instance in the target Farm. A repository identifier in the format "hostname:port" is returned.

4. Run dcmctl joinFarm -r <repository_ID>, where repository_ID is the identifier returned in the previous step.
When instances in a Farm use a file-based repository, you can configure DCM so that configuration information that is sent between instances uses SSL. This feature provides for the security of messages sent between instances in the farm and prevents unauthorized instances from joining the farm.
This section describes the steps required to set up SSL and certificate-based security for instances that use a file-based repository.
This section covers the following:
Use the JDK keytool command to generate a certificate and set up the keystore, as documented in:
http://java.sun.com/j2se/1.4.1/docs/tooldocs/solaris/keytool.html
If you have already generated the key pair and obtained the certificate for OC4J, then you can use the same keystore you previously obtained.
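If you still need to create a keystore, a minimal sketch with keytool follows (the alias, keystore path, and password are hypothetical; keytool prompts interactively for the distinguished-name fields):

% keytool -genkey -alias dcm -keyalg RSA -keystore $ORACLE_HOME/dcm/config/keystore.jks -storepass <password>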
To use SSL certificate-based security, a Java keystore must be set up on each instance in the farm. This keystore may be the same as the one used by other Java applications, or it can be unique to the DCM file-based repository configuration.
After obtaining the keystore and certificate information, on each Oracle Application Server instance in the farm, use the dcmctl configRepositorySSL command to create the file that holds the keystore information.

Use configRepositorySSL as follows on each instance to create the keystore information file:
% dcmctl configRepositorySSL -keystore path_to_keystore -storepass password
Modify the dcmCache.xml cache configuration <useSSL> attribute shown in Table 4-3 to enable or disable the use of SSL.
Optionally, you can specify the location of the file generated by configRepositorySSL by modifying the value of the <sslConfigFile> element. If you modify this value, you need to copy the .ssl.conf file that configRepositorySSL generates to the new file that you specify using <sslConfigFile>.
The dcmCache.xml file is in the $ORACLE_HOME/dcm/config directory on UNIX and in the %ORACLE_HOME%\dcm\config directory on Windows systems.
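A minimal sketch of the relevant entries, assuming a simple element layout (only the <useSSL> and <sslConfigFile> names come from this guide; the surrounding structure and the values shown are illustrative):

<!-- hypothetical fragment of $ORACLE_HOME/dcm/config/dcmCache.xml -->
<useSSL>true</useSSL>
<sslConfigFile>/path/to/.ssl.conf</sslConfigFile>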
After the security configuration is consistent across all the instances in the farm, restart each instance, starting with the repository host instance for the file-based repository.
This section describes OC4J configuration for OC4J Instances and processes that are part of OracleAS Clusters that are managed using repositories.
This section covers the following:
See Also: Oracle Application Server Containers for J2EE User's Guide for detailed information on configuring OC4J Instances
After application server instances join OracleAS Clusters, the application server instances, and the OC4J instances that run on them, have the following properties:

When you use Application Server Control or dcmctl to modify any cluster-wide OC4J parameters, the modifications are propagated to all application server instances in the cluster. To make cluster-wide OC4J configuration changes, you need to change the configuration parameters on only a single application server instance; the Oracle Application Server distributed configuration management system then propagates the modifications to all the other application server instances within the cluster.
Table 4-4 provides a summary of the OC4J instance-specific parameters. Other OC4J parameters are cluster-wide parameters and are replicated across OracleAS Clusters.
This section covers the following topics:
See Also: Oracle Application Server Containers for J2EE User's Guide for complete information on OC4J configuration and application deployment
You can create a new OC4J instance on any application server instance within managed OracleAS Clusters and the OC4J instance will be propagated to all application server instances across the cluster.
To create an OC4J instance, do the following:
A new OC4J instance is created with the name you provided. This OC4J instance shows up on each application server instance page across the cluster, in the System Components section.
To delete an OC4J instance, select the radio button next to the OC4J instance you wish to delete, then click Delete. The Oracle Application Server Distributed Configuration Management system propagates the OC4J removal across the cluster.
Using OracleAS Clusters, when you deploy an application to one application server instance, the application is propagated to all application server instances across the cluster.
To deploy an application across a cluster, do the following:
You can also deploy applications using dcmctl commands.
See Also: Oracle Application Server Containers for J2EE User's Guide for complete information on deploying applications to an OC4J instance
To ensure that Oracle Application Server maintains the state of stateful Web applications across OracleAS Clusters, you need to configure state replication for the Web applications.
To configure state replication for stateful Web applications, do the following:
Optionally, you can provide the multicast host IP address and port number. If you do not provide the host and port for the multicast address, they default to host IP address 230.230.0.1 and port number 9127. The host IP address must be in the range 224.0.0.2 through 239.255.255.255. Do not use the same multicast address for both HTTP and EJB multicast addresses.
Note: When choosing a multicast address, ensure that the address does not collide with the addresses listed at http://www.iana.org/assignments/multicast-addresses. Also, if the low-order 23 bits of an address match those of the local network control block (224.0.0.0 through 224.0.0.255), a collision may occur. To avoid this, choose an address whose low-order 23 bits do not match any address in that range.
Add the <distributable/> tag to the web.xml files in all Web applications. If the Web application is serializable, you must add this tag to the web.xml file.
The following shows an example of this tag added to web.xml:
<web-app>
    <distributable/>
    <servlet>
    ...
    </servlet>
</web-app>
To create an EJB cluster, you specify the OC4J instances that are to be involved in the cluster, configure each of them with the same multicast address, username, and password, and deploy the EJB, which is to be clustered, to each of the nodes in the cluster.
Unlike HTTP clustering, EJBs involved in a cluster cannot be sub-grouped in an island. Instead, all EJBs within the cluster are in one group. Also, only session beans are clustered.
The state of all beans is replicated at the end of every method call to all nodes in the cluster using a multicast topic. Each node included in the EJB cluster is configured to use the same multicast address.
The concepts for understanding how EJB object state is replicated within a cluster are described in the Oracle Application Server Containers for J2EE Enterprise JavaBeans Developer's Guide.
To configure EJB replication, you must do the following:
Figure 4-6 shows this section.
Optionally, you can provide the multicast host IP address and port number. If you do not provide the host and port for the multicast address, they default to host IP address 230.230.0.1 and port number 9127. The host IP address must be in the range 224.0.0.2 through 239.255.255.255. Do not use the same multicast address for both HTTP and EJB multicast addresses.
Note: When choosing a multicast address, ensure that the address does not collide with the addresses listed at http://www.iana.org/assignments/multicast-addresses. Also, if the low-order 23 bits of an address match those of the local network control block (224.0.0.0 through 224.0.0.255), a collision may occur. To avoid this, choose an address whose low-order 23 bits do not match any address in that range.
Configure the replication type in the orion-ejb-jar.xml file within the JAR file. See "Configuring Stateful Session Bean Replication for OracleAS Clusters" for full details. You can configure this in the orion-ejb-jar.xml file before deployment, or add it through the Application Server Control screens after deployment. To add it after deployment, drill down to the JAR file from the application page.
For stateful session beans, you may have to modify the orion-ejb-jar.xml file to add the state replication configuration. Because you configure the replication type for the stateful session bean within the bean deployment descriptor, each bean can use a different type of replication.
Stateful session beans require state to be replicated among nodes. In fact, stateful session beans must send all their state between the nodes, which can have a noticeable effect on performance. Thus, the following replication modes are available so that you can decide how to manage the performance cost of replication:
The state of the stateful session bean is replicated to all nodes in the cluster, with the same multicast address, at the end of each EJB method call. If a node loses power, then the state has already been replicated.
To use end-of-call replication, set the replication attribute of the <session-deployment> tag in the orion-ejb-jar.xml file to "EndOfCall".
For example,
<session-deployment replication="EndOfCall" .../>
The state of the stateful session bean is replicated to only one other node in the cluster, with the same multicast address, when the JVM is terminating. This is the most performant option, because the state is replicated only once. However, it is not very reliable for the following reasons:
To use JVM termination replication, set the replication attribute of the <session-deployment> tag in the orion-ejb-jar.xml file to "VMTermination".
For example,
<session-deployment replication="VMTermination" .../>
This section covers the instance-specific parameters that are not replicated across OracleAS Clusters that are managed using repositories.
This section covers the following:
See Also: Oracle Application Server Containers for J2EE User's Guide for complete information on OC4J configuration and application deployment
To provide a redundant environment and to support high availability using OracleAS Clusters, you need to configure multiple OC4J processes within each OC4J instance.
Using OracleAS Clusters, state is replicated in OC4J islands with the same name within OC4J instances with the same name across the cluster. To ensure high availability with stateful applications, OC4J island names within an OC4J instance must be the same in corresponding OC4J instances across the cluster. It is the administrator's responsibility to make sure that island names match where session state replication is needed in a cluster.
The number of OC4J processes on an OC4J instance within a cluster is an instance-specific parameter, since different hosts running application server instances in the cluster could each have different capabilities, such as total system memory. Thus, it could be appropriate for a cluster to contain application server instances that each run a different number of OC4J processes within an OC4J instance.
To modify OC4J islands and the number of processes each OC4J island contains, do the following:
Figure 4-7 displays the Multiple VM Configuration Islands section.
Figure 4-8 shows the section where you can modify these ports and set command line options.
To modify OC4J ports or the command line options, do the following:
Figure 4-8 shows the Ports and Command line options areas on the Server Properties page.
This section describes Oracle HTTP Server configuration for OracleAS Clusters that are managed using repositories.
This section covers the following:
This section covers the following:
Using OracleAS Clusters, the Oracle HTTP Server module mod_oc4j load balances requests to OC4J processes. Oracle HTTP Server, through mod_oc4j configuration options, supports different load balancing policies. By providing configurable load balancing policies, OracleAS Clusters can provide performance benefits along with failover and high availability for different types of systems, depending on the network topology and host machine capabilities.
By default, mod_oc4j uses weights to select a node to forward a request to. Each node has a default weight of 1 unless specified otherwise. A node's weight is taken as a ratio relative to the weights of the other available nodes to define the number of requests the node should service compared to the other nodes in the cluster. Once a node is selected to service a particular request, mod_oc4j by default uses the roundrobin policy to select among the OC4J processes on the node. If an incoming request belongs to an established session, the request is forwarded to the same node and the same OC4J process that started the session.
The OC4J load balancing policies do not take into account the number of OC4J processes running on a node when calculating which node to send a request to. Node selection is based on the configured weight for the node, and its availability. The number of OC4J processes to run is configured using Application Server Control.
To modify the mod_oc4j load balancing policy, administrators use the Oc4jSelectMethod and Oc4jRoutingWeight configuration directives in the mod_oc4j.conf file.
To configure the mod_oc4j.conf file using Application Server Control, select the HTTP_Server component in an application server instance. Then select the Administration link, followed by the Advanced Server Properties link. On the Advanced Server Properties page, select the mod_oc4j.conf link. On the Edit mod_oc4j.conf page, within the <IfModule mod_oc4j.c> section, modify Oc4jSelectMethod and Oc4jRoutingWeight to select the desired load balancing option.
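For example, a sketch of a weighted configuration in mod_oc4j.conf (the selection-method value and node name are illustrative; consult the mod_oc4j documentation for the supported values):

<IfModule mod_oc4j.c>
    # Select nodes using weighted round-robin (value assumed to be valid)
    Oc4jSelectMethod roundrobin:weighted
    # node1 services three requests for every one sent to a default-weight node
    Oc4jRoutingWeight node1.mycompany.com 3
</IfModule>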
If you do not use Application Server Control, then edit mod_oc4j.conf and use the dcmctl command to propagate the changes to the other mod_oc4j.conf files across the OracleAS Clusters as follows:
% dcmctl updateConfig -ct ohs
% opmnctl @cluster:<cluster_name> restartproc ias-component=HTTP_Server process-type=HTTP_Server
The opmnctl restartproc command is required to restart all the Oracle HTTP Server instances in the OracleAS Clusters for the changes to take effect.
The following are instance-specific parameters in the Oracle HTTP Server.
You can modify the HTTP Server ports and listening addresses on the Server Properties Page, which can be accessed from the HTTP Server Home Page. You can modify the virtual host information by selecting a virtual host from the Virtual Hosts section on the HTTP Server Home Page.
To enable Oracle Application Server Single Sign-On to work with an OracleAS Cluster, the Single Sign-On server needs to be aware of the entry point into the cluster, which is commonly the load balancing mechanism in front of the Oracle HTTP Servers. This mechanism could be Oracle Application Server Web Cache, a network load balancer appliance, or an Oracle HTTP Server installation.
To register an OracleAS Cluster's entry point with the Single Sign-On server, use the SSORegistrar tool, which you execute through ossoreg.jar.
In order to participate in Single Sign-On functionality, all Oracle HTTP Server instances in a cluster must have an identical Single Sign-On registration.
As with all cluster-wide configuration information, the Single Sign-On configuration is propagated among all Oracle HTTP Server instances in the cluster. However, the initial configuration is manually configured and propagated: on one of the application server instances, define the configuration with the ossoreg.jar tool; DCM then propagates the configuration to all other Oracle HTTP Servers in the cluster.
If you do not use a network load balancer, then the Single Sign-On configuration must originate with whatever you use as the incoming load balancer: Oracle Application Server Web Cache, Oracle HTTP Server, and so on.
To configure a cluster for Single Sign-On, execute the ossoreg.jar command against one of the application server instances in the cluster. This tool registers the Single Sign-On server and the redirect URLs with all Oracle HTTP Servers in the cluster.
Run the ossoreg.jar command with all of the options as follows, substituting your own values for the placeholder portions of the parameters.
The values are described fully in Table 4-5.
Specify the URL of the cluster's entry point with the mod_osso_url parameter. This should be an HTTP or HTTPS URL, depending on the site security policy regarding SSL access to OracleAS Single Sign-On protected resources.

Specify the appropriate userid with the -u option.
$ORACLE_HOME/jdk/bin/java -jar $ORACLE_HOME/sso/lib/ossoreg.jar
    -oracle_home_path <orcl_home_path>
    -site_name <site_name>
    -config_mod_osso TRUE
    -mod_osso_url <URL>
    -u <userid>
    [-virtualhost <virtual_host_name>]
    [-update_mode CREATE | DELETE | MODIFY]
    [-config_file <config_file_path>]
    [-admin_info <admin_info>]
    [-admin_id <adminid>]
The SSORegistrar tool establishes all information necessary to facilitate secure communication between the Oracle HTTP Servers in the cluster and the Single Sign-On server.
When using Single Sign-On with the Oracle HTTP Servers in the cluster, the KeepAlive directive must be set to OFF, since the Oracle HTTP Servers are behind a network load balancer. If the KeepAlive directive is set to ON, then the network load balancer maintains state with the Oracle HTTP Server for the same connection, which results in an HTTP 503 error. Modify the KeepAlive directive in the Oracle HTTP Server configuration; this directive is located in the httpd.conf file of the Oracle HTTP Server.
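For example, in httpd.conf:

# Disable persistent connections when Oracle HTTP Server sits behind
# a network load balancer with OracleAS Single Sign-On
KeepAlive Off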
You can configure a cluster of OracleAS instances to provide only certain limited advantages of clustering.
This section describes how to configure these advanced types of clusters.
If you have more than a single OracleAS instance in a Farm, you can configure one of the Oracle HTTP Servers to be the load balancer for all of the instances. This eliminates the need for all but one of the Oracle HTTP Servers in the OracleAS instances. When you configure a single Oracle HTTP Server as a load balancer, the Oracle HTTP Server must be configured to know about all the OC4J instances in the Farm and route the incoming requests appropriately.
Configure the following:
The mod_oc4j.conf configuration file with the OC4J instance information for each root context, which enables mod_oc4j to route to each deployed application.
cd ORACLE_HOME_Instance/Apache/Apache/conf
Edit mod_oc4j.conf to include mount points for the root context of each deployed application in the other OC4J instances in the cluster. Each mod_oc4j configuration file contains mount points for each root context of the deployed applications to which it routes incoming requests.
To route to applications deployed in another instance, you must add a mount point for the other instance's application root context with the additional keyword "instance://". The syntax for this keyword requires the OracleAS instance name and the OC4J instance name.
To route to applications deployed in another cluster, you must add a mount point for the application root context with the additional keyword "cluster://". The syntax for this keyword requires the cluster name and the OC4J instance name.
Examples of routing to another instance, multiple instances, or another cluster are as follows:
Oc4jMount /myapp/* instance://Inst2:OC4J_Home
Oc4jMount /myapp1/* instance://Inst2:OC4J_Home, Inst3:OC4J_Home
Oc4jMount /myapp2/* cluster://Cluster1:OC4J_Home
dcmctl updateConfig
dcmctl restart
Once configuration for the cluster is complete, you must ensure that each OracleAS instance and OC4J instance has the same configuration. This type of cluster does not replicate configuration across all instances. You must manage the configuration manually.
You can configure for OC4J state replication through the Application Server Control in the same way as for managed clustering.
Firewalls protect a company's infrastructure by restricting illegal network traffic. Firewall configuration typically involves restricting the ports that are available to one side of the firewall. In addition, it can be set up to restrict the type of traffic that can pass through a particular port, such as HTTP. If a client attempts to connect to a restricted port or uses a protocol that is not allowed, then the client is disconnected immediately by the firewall. Firewalls can also be used within a company Intranet to restrict user access to specific servers.
Some of the components of OracleAS can be deployed on different nodes, which can be separated by firewalls. Figure 4-9 demonstrates one recommended organization of OracleAS components between two firewalls:
All communication between the Oracle HTTP Servers and the OC4J processes behind the second firewall should use SSL encryption. Authorization should be provided using SSL client certificates.
However, the Oracle HTTP Server and OC4J processes use several ports, through DCM, OPMN, and mod_oc4j, for this communication. This communication must continue even if a firewall exists between them. You can continue the communication by exposing through the firewall the OracleAS component ports needed for communication between the OC4J components. You can either manually open each port needed for this communication, or you can use the OracleAS Port Tunnel, which opens a single port to handle all communication that normally occurs through several ports. These options are discussed in the following sections:
Instead of opening multiple ports on the intranet firewall, you can use the OracleAS Port Tunnel. The Port Tunnel is a process that facilitates the communication between Oracle HTTP Server and OC4J, including the communication for DCM, OPMN and mod_oc4j, using a single port exposed on the intranet firewall. Thus, you do not have to expose several ports for communication for a single OC4J process. Instead, the Port Tunnel exposes a single port and can handle all of the port requirements for several OC4J processes.
All communication between the Oracle HTTP Servers and the Port Tunnel is encrypted using SSL.
Figure 4-10 shows how three Oracle HTTP Servers communicate with three OC4J processes through the Port Tunnel. Only a single port is exposed on the intranet firewall. The Oracle HTTP Servers exist on a single machine; the Port Tunnel and OC4J processes exist on a separate machine.
However, if you have only a single process managing the communication between the Oracle HTTP Servers and the OC4J processes, you cannot guarantee high availability or failover. You can add multiple Port Tunnel processes, each listening on its own port, to manage availability and failover. We recommend that you use two Port Tunnel processes for each machine. You want to minimize the number of ports exposed on the intranet for security, but you also should provide for failover and availability.
Once the Port Tunnel processes are configured and initialized, then the Oracle HTTP Servers automatically balance the load among the port tunnel processes, just as they would among OC4J processes.
While you risk exposing a single port for each Port Tunnel process, the number of ports exposed using the Port Tunnel is much smaller than if you expose all of the ports needed for direct communication between Oracle HTTP Server and OC4J processes, as you can see in "Opening OracleAS Ports To Communicate Through Intranet".
All of the details for configuring and initializing Port Tunnel processes are documented in the HTTP Security chapter in the Oracle Application Server 10g Security Guide.
You can route between Oracle HTTP Servers and OC4J processes located on either side of an intranet firewall by exposing through the firewall each of the OracleAS component ports needed for communication between the OC4J components.
The ports that should be opened on the firewall depend on the services that you are using. Table 4-6 describes the ports that you should open for each service.
Table 4-6 Ports that Each Service Uses

| Service Name | Description | Configuration XML File |
|---|---|---|
| Oracle HTTP Server | Incoming requests use HTTP or HTTPS. | The ports listed in the Listen directives in the httpd.conf file. |
| OPMN | OPMN uses HTTP ports to communicate with the other OPMN processes in an OracleAS Cluster. OPMN communication is bidirectional, so the ports for all OPMN processes must be opened to each other and to the OC4J processes. | The opmn.xml file. |
| DCM | DCM uses JDBC to talk to the back-end Oracle-based repository. If it is not desirable to open a port to the database, you can use a file-based repository instead of a database repository. See "Routing Between Instances in Same Farm" for directions on setting up a file-based repository. | The JDBC default port number is 1521. |
| DCM (LDAP) | DCM bootstraps with information from the Oracle Internet Directory over an LDAP port. | The default ports are 389 for LDAP and 636 for LDAP over SSL. If these are taken, the next ports in the range 4031-4040 are selected. |
| mod_oc4j module | Communicates with each OC4J process over an AJP port. The default port range is 3001-3100. | Defined in the <port> element, either specifically or within a range, in the opmn.xml file. |
| RMI or JMS | You may use RMI or JMS to communicate with OC4J. The default port range is 3101-3200 for RMI and 3201-3300 for JMS. | Defined in the <port> element, either specifically or within a range, in the opmn.xml file. |
| Infrastructure | The Infrastructure database executes only on port 1521. | N/A |
| Portal | Uses the same AJP port range as configured for OC4J processes. | Defined in the <port> element, either specifically or within a range, in the opmn.xml file. |
At installation time, the Oracle Installer picks available ports and assigns them to relevant processes. You can see the assigned ports for all components by selecting Ports in the default home page using Application Server Control.
Note: Some port numbers have multiple dependencies. If you change a port number, you may be required to alter other components. See the "Managing Ports" chapter in the Oracle Application Server 10g Administrator's Guide for a full discussion of how to manage your port numbers.
You can view all of the ports that are in use through Application Server Control. From the OracleAS Home Instance, select Ports at the top left corner of the page. Figure 4-11 shows all of the ports in use for this OracleAS instance, including all Oracle HTTP Server and OC4J instances. See the "Managing Ports" chapter in the Oracle Application Server 10g Administrator's Guide for more information on managing ports.