Oracle® Fusion Middleware Disaster Recovery Guide
11g Release 1 (11.1.1)

Part Number E15250-02

3 Design Considerations

This chapter describes design considerations for an Oracle Fusion Middleware Disaster Recovery solution for an enterprise deployment.

This chapter provides detailed instructions for setting up an Oracle Fusion Middleware 11g Disaster Recovery production site and standby site on the Linux and UNIX operating systems. Its examples primarily use the Oracle SOA Suite enterprise deployment shown in Figure 3-1 to show how to set up the Oracle Fusion Middleware 11g Disaster Recovery solution for an enterprise deployment. After you understand how to set up Disaster Recovery for the Oracle SOA Suite enterprise topology, you can use the information in this chapter to set up Disaster Recovery for other 11g enterprise deployments as well.

Note:

This chapter describes an Oracle Fusion Middleware 11g Disaster Recovery symmetric topology that uses the Oracle SOA Suite enterprise deployment shown in Figure 3-1 at both the production site and the standby site. Figure 3-1 shows the deployment for only one site; the high level of detail shown for this deployment precludes showing the deployment for both sites in a single figure.

Figure 1-1 shows a Disaster Recovery symmetric production site and standby site in a single figure.

Figure 3-1 Deployment Used at Production and Standby Sites for Oracle Fusion Middleware Disaster Recovery


Figure 3-1 shows the mySOACompany with Oracle Access Manager enterprise deployment from the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite. See the Oracle Fusion Middleware Enterprise Deployment Guide for Oracle SOA Suite for detailed information on installing and configuring this Oracle SOA Suite enterprise deployment.

The Oracle Fusion Middleware Disaster Recovery topology that you design must be symmetric between the production site and the standby site.

3.1 Network Considerations

This section describes the following network considerations: planning host names, load balancer and virtual IP considerations, and wide area DNS operations.

3.1.1 Planning Host Names

In a Disaster Recovery topology, the production site host names must be resolvable to the IP addresses of the corresponding peer systems at the standby site. Therefore, it is important to plan the host names for the production site and standby site.

This section describes how to plan physical host names and alias host names for the middle tier hosts that use the Oracle Fusion Middleware instances at the production site and standby site. It uses the Oracle SOA Suite enterprise deployment shown in Figure 3-1 for the host name examples. The host name examples in this section assume that a symmetric Disaster Recovery site is being set up, where the production site and standby site have the same number of hosts. Each host at the production site and standby site has a peer host at the other site. The peer hosts are configured the same, for example, using the same ports as their counterparts at the other site.

When configuring each component, use hostname-based configuration instead of IP-based configuration, unless the component requires you to use IP-based configuration. For example, instead of configuring the listen address of an Oracle Fusion Middleware component as a specific IP address such as 123.1.2.113, use the host name SOAHOST1.MYCOMPANY.COM, which resolves to 123.1.2.113.

The following subsections show how to set up host names at the Disaster Recovery production site and standby site for these enterprise deployments: Oracle SOA Suite; Oracle WebCenter; Oracle Identity Management; Oracle Portal, Forms, Reports, and Discoverer; and Oracle Enterprise Content Management.

Note:

In this book's examples, IP addresses for hosts at the initial production site have the format 123.1.x.x and IP addresses for hosts at the initial standby site have the format 123.2.x.x.

Host Names for the Oracle SOA Suite Production Site and Standby Site Hosts

Table 3-1 shows the IP addresses and physical host names that will be used for the Oracle SOA Suite EDG deployment production site hosts. Figure 3-1 shows the configuration for the Oracle SOA Suite EDG deployment at the production site.

Table 3-1 IP Addresses and Physical Host Names for SOA Suite Production Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.1.2.111    WEBHOST1                          None
123.1.2.112    WEBHOST2                          None
123.1.2.113    SOAHOST1                          None
123.1.2.114    SOAHOST2                          None

Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

Table 3-2 shows the IP addresses, physical host names, and alias host names that will be used for the Oracle SOA Suite EDG deployment standby site hosts. Figure 3-2 shows the physical host names used for the Oracle SOA Suite EDG deployment at the standby site. The alias host names shown in Table 3-2 should be defined for the Oracle SOA Suite standby site hosts in Figure 3-2.

Note:

If you use separate DNS servers to resolve host names, then you can use the same physical host names for the production site hosts and standby site hosts, and you do not need to define the alias host names on the standby site hosts that are recommended in Table 3-2. See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" for more information about using separate DNS servers to resolve host names.

Table 3-2 IP Addresses, Physical Host Names, and Alias Host Names for SOA Suite Standby Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.2.2.111    STBYWEB1                          WEBHOST1
123.2.2.112    STBYWEB2                          WEBHOST2
123.2.2.113    STBYSOA1                          SOAHOST1
123.2.2.114    STBYSOA2                          SOAHOST2


Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

Figure 3-2 Physical Host Names Used at Oracle SOA Suite Deployment Standby Site


Host Names for the Oracle WebCenter Production Site and Standby Site Hosts

Table 3-3 shows the IP addresses and physical host names that will be used for the Oracle WebCenter EDG deployment production site hosts. Figure 4-4 shows the configuration for the Oracle WebCenter EDG deployment at the production site.

Table 3-3 IP Addresses and Physical Host Names for WebCenter Production Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.1.2.111    WEBHOST1                          None
123.1.2.112    WEBHOST2                          None
123.1.2.113    SOAHOST1                          None
123.1.2.114    SOAHOST2                          None
123.1.2.115    WCHOST1                           None
123.1.2.116    WCHOST2                           None


Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

Table 3-4 shows the IP addresses, physical host names, and alias host names that will be used for the Oracle WebCenter EDG deployment standby site hosts. Figure 4-4 shows the configuration for the Oracle WebCenter EDG deployment at the standby site.

Note:

If you use separate DNS servers to resolve host names, then you can use the same physical host names for the production site hosts and standby site hosts, and you do not need to define the alias host names on the standby site hosts that are recommended in Table 3-4. See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" for more information about using separate DNS servers to resolve host names.

Table 3-4 IP Addresses, Physical Host Names, and Alias Host Names for WebCenter Standby Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.2.2.111    STBYWEB1                          WEBHOST1
123.2.2.112    STBYWEB2                          WEBHOST2
123.2.2.113    STBYSOA1                          SOAHOST1
123.2.2.114    STBYSOA2                          SOAHOST2
123.2.2.115    STBYWC1                           WCHOST1
123.2.2.116    STBYWC2                           WCHOST2


Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

Host Names for the Oracle Identity Management Production Site and Standby Site Hosts

Table 3-5 shows the IP addresses and physical host names that will be used for the Oracle Identity Management EDG deployment production site hosts. Figure 4-6 shows the configuration for the Oracle Identity Management EDG deployment at the production site.

Table 3-5 IP Addresses and Physical Host Names for Identity Management Production Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.1.2.111    WEBHOST1                          None
123.1.2.112    WEBHOST2                          None
123.1.2.117    OAMADMINHOST                      None
123.1.2.118    IDMHOST1                          None
123.1.2.119    IDMHOST2                          None
123.1.2.120    OAMHOST1                          None
123.1.2.121    OAMHOST2                          None
123.1.2.122    OIDHOST1                          None
123.1.2.123    OIDHOST2                          None
123.1.2.124    OVDHOST1                          None
123.1.2.125    OVDHOST2                          None
123.1.2.126    OAAMHOST1                         None
123.1.2.127    OAAMHOST2                         None
123.1.2.128    OAPMHOST1                         None
123.1.2.129    OAPMHOST2                         None


Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

Table 3-6 shows the IP addresses, physical host names, and alias host names that will be used for the Oracle Identity Management EDG deployment standby site hosts. Figure 4-6 shows the configuration for the Oracle Identity Management EDG deployment at the standby site.

Note:

If you use separate DNS servers to resolve host names, then you can use the same physical host names for the production site hosts and standby site hosts, and you do not need to define the alias host names on the standby site hosts that are recommended in Table 3-6. See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" for more information about using separate DNS servers to resolve host names.

Table 3-6 IP Addresses, Physical Host Names, and Alias Host Names for Identity Management Standby Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.2.2.111    STBYWEB1                          WEBHOST1
123.2.2.112    STBYWEB2                          WEBHOST2
123.2.2.117    STBYADM                           OAMADMINHOST
123.2.2.118    STBYIDM1                          IDMHOST1
123.2.2.119    STBYIDM2                          IDMHOST2
123.2.2.120    STBYOAM1                          OAMHOST1
123.2.2.121    STBYOAM2                          OAMHOST2
123.2.2.122    STBYOID1                          OIDHOST1
123.2.2.123    STBYOID2                          OIDHOST2
123.2.2.124    STBYOVD1                          OVDHOST1
123.2.2.125    STBYOVD2                          OVDHOST2


Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

Host Names for the Oracle Portal, Forms, Reports, and Discoverer Production Site and Standby Site Hosts

Table 3-7 shows the IP addresses and physical host names that will be used for the Oracle Portal, Forms, Reports, and Discoverer enterprise deployment production site hosts. Figure 4-7 shows the configuration for the Oracle Portal enterprise deployment at the production site and Figure 4-8 shows the configuration for the Oracle Forms, Reports, and Discoverer enterprise deployment at the production site.

Table 3-7 IP Addresses and Physical Host Names for Oracle Portal, Forms, Reports, and Discoverer Production Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.1.2.111    WEBHOST1                          None
123.1.2.112    WEBHOST2                          None
123.1.2.126    APPHOST1                          None
123.1.2.127    APPHOST2                          None


Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

Table 3-8 shows the IP addresses, physical host names, and alias host names that will be used for the Oracle Portal, Forms, Reports, and Discoverer enterprise deployment standby site hosts. Figure 4-7 shows the configuration for the Oracle Portal enterprise deployment at the standby site and Figure 4-8 shows the configuration for the Oracle Forms, Reports, and Discoverer enterprise deployment at the standby site.

Note:

If you use separate DNS servers to resolve host names, then you can use the same physical host names for the production site hosts and standby site hosts, and you do not need to define the alias host names on the standby site hosts that are recommended in Table 3-8. See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" for more information about using separate DNS servers to resolve host names.

Table 3-8 IP Addresses, Physical Host Names, and Alias Host Names for Oracle Portal, Forms, Reports, and Discoverer Standby Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.2.2.111    STBYWEB1                          WEBHOST1
123.2.2.112    STBYWEB2                          WEBHOST2
123.2.2.126    STBYAPP1                          APPHOST1
123.2.2.127    STBYAPP2                          APPHOST2


Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

The alias host names in Table 3-2, Table 3-4, Table 3-6, and Table 3-8 are resolved locally at the standby site to the correct IP address. Section 3.1.1.1, "Host Name Resolution" describes two ways to configure host name resolution in an Oracle Fusion Middleware Disaster Recovery topology.

Host Names for the Oracle Enterprise Content Management Production Site and Standby Site Hosts

Table 3-9 shows the IP addresses and physical host names that will be used for the Oracle Enterprise Content Management EDG deployment production site hosts. Figure 4-9 shows the configuration for the Oracle Enterprise Content Management EDG deployment at the production site.

Table 3-9 IP Addresses and Physical Host Names for Oracle Enterprise Content Management Production Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.1.2.111    WEBHOST1                          None
123.1.2.112    WEBHOST2                          None
123.1.2.113    ECMHOST1                          None
123.1.2.114    ECMHOST2                          None


Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

Table 3-10 shows the IP addresses, physical host names, and alias host names that will be used for the Oracle Enterprise Content Management EDG deployment standby site hosts. Figure 4-9 shows the configuration for the Oracle Enterprise Content Management EDG deployment at the standby site.

Note:

If you use separate DNS servers to resolve host names, then you can use the same physical host names for the production site hosts and standby site hosts, and you do not need to define the alias host names on the standby site hosts that are recommended in Table 3-10. See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" for more information about using separate DNS servers to resolve host names.

Table 3-10 IP Addresses, Physical Host Names, and Alias Host Names for Oracle Enterprise Content Management Standby Site Hosts

IP Address     Physical Host Name (Footnote 1)   Alias Host Name
123.2.2.111    STBYWEB1                          WEBHOST1
123.2.2.112    STBYWEB2                          WEBHOST2
123.2.2.113    STBYECM1                          ECMHOST1
123.2.2.114    STBYECM2                          ECMHOST2


Footnote 1 See Section 3.1.1.3, "Resolving Host Names Using Separate DNS Servers" and Section 3.1.1.4, "Resolving Host Names Using a Global DNS Server" for information on defining physical host names.

3.1.1.1 Host Name Resolution

Host name resolution is the process of resolving a host name to the proper IP address for communication. Host name resolution can be configured in one of the following ways: resolving host names locally (using each host's /etc/hosts file) or resolving host names using DNS servers (either separate DNS servers for each site or a global DNS server for both sites).

You must determine the method of host name resolution you will use for your Oracle Fusion Middleware Disaster Recovery topology when you are planning the deployment of the topology. Most site administrators use a combination of these resolution methods in a precedence order to manage host names.

The Oracle Fusion Middleware hosts and the shared storage system for each site must be able to communicate with each other.

Host Name Resolution Precedence

To determine the host name resolution method used by a particular host, search for the value of the hosts parameter in the /etc/nsswitch.conf file on the host.

As shown in Example 3-1, make the files entry the first entry for the hosts parameter if you want to resolve host names locally on the host. When files is the first entry for the hosts parameter, entries in the host's /etc/hosts file will be used first to resolve host names:

Example 3-1 Specifying the Use of Local Host Name Resolution

hosts:   files   dns   nis

As shown in Example 3-2, make the dns entry the first entry for the hosts parameter if you want to resolve host names using DNS on the host. When dns is the first entry for the hosts parameter, DNS server entries will be used first to resolve host names:

Example 3-2 Specifying the Use of DNS Host Name Resolution

hosts:   dns    files   nis

For simplicity and consistency, it is recommended that all the hosts within a site (production site or standby site) use the same host name resolution method (resolving host names locally or resolving host names using separate DNS servers or a global DNS server).
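The precedence rule above can be sketched as a small parser of the nsswitch.conf hosts line. This is an illustrative sketch, not a full nsswitch implementation:

```python
# Sketch: determine host name resolution order from an nsswitch.conf-style
# "hosts" line (illustrative parsing only, not a full nsswitch implementation).

def resolution_order(nsswitch_text):
    """Return the list of resolution sources for the 'hosts' parameter."""
    for line in nsswitch_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if line.startswith("hosts:"):
            return line.split(":", 1)[1].split()
    return []

local_first = "hosts:   files   dns   nis"     # Example 3-1
dns_first   = "hosts:   dns    files   nis"    # Example 3-2

print(resolution_order(local_first))  # ['files', 'dns', 'nis']
print(resolution_order(dns_first))    # ['dns', 'files', 'nis']
```

The first source in the returned list is the one consulted first, which is exactly the property Examples 3-1 and 3-2 rely on.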

The recommendations in the following sections are high-level recommendations that you can adapt to meet the host name resolution standards used by your enterprise.

3.1.1.2 Resolving Host Names Locally

Local host name resolution uses the host name to IP mapping defined in the /etc/hosts file of a host. When you use this method to resolve host names for your Disaster Recovery topology, the following guidelines apply:

  1. Ensure that the hosts parameter in the /etc/nsswitch.conf file on all the production site and standby site hosts looks like this:

    hosts:   files   dns   nis
    
  2. The /etc/hosts file entries on the hosts of the production site should have their physical host names mapped to their IP addresses. For the sake of simplicity and ease of maintenance, it is recommended to have the same entries on all the hosts of the production site. Example 3-3 shows the /etc/hosts file for the production site of a SOA Enterprise Deployment topology:

    Example 3-3 Making /etc/hosts File Entries for a Production Site Host

    127.0.0.1      localhost.localdomain    localhost
    123.1.2.111    WEBHOST1.MYCOMPANY.COM    WEBHOST1
    123.1.2.112    WEBHOST2.MYCOMPANY.COM    WEBHOST2
    123.1.2.113    SOAHOST1.MYCOMPANY.COM    SOAHOST1
    123.1.2.114    SOAHOST2.MYCOMPANY.COM    SOAHOST2
    
  3. The /etc/hosts file entries on the hosts of the standby site should have their physical host names mapped to their IP addresses, along with the physical host names of their corresponding peers on the production site defined as alias host names. For the sake of simplicity and ease of maintenance, it is recommended to have the same entries on all the hosts of the standby site. Example 3-4 shows the /etc/hosts file for the standby site of a SOA Enterprise Deployment topology:

    Example 3-4 Making /etc/hosts File Entries for a Standby Site Host

    127.0.0.1      localhost.localdomain    localhost
    123.2.2.111    STBYWEB1.MYCOMPANY.COM    WEBHOST1
    123.2.2.112    STBYWEB2.MYCOMPANY.COM    WEBHOST2
    123.2.2.113    STBYSOA1.MYCOMPANY.COM    SOAHOST1
    123.2.2.114    STBYSOA2.MYCOMPANY.COM    SOAHOST2
    
  4. After setting up host name resolution using /etc/hosts file entries, use the ping command to test host name resolution. For a system configured with static IP addressing and the /etc/hosts file entries shown in Example 3-3, a ping webhost1 command on the production site would return the correct IP address (123.1.2.111) and also indicate that the host name is fully qualified.

  5. Similarly, for a system configured with static IP addressing and the /etc/hosts file entries shown in Example 3-4, a ping webhost1 command on the standby site will return the correct IP address (123.2.2.111) and also show that the name WEBHOST1 is associated with that IP address.
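The local resolution behavior described in these steps can be sketched in a few lines. The parser below is an illustrative simplification of an /etc/hosts lookup, using the standby site entries from Example 3-4; on the standby site, the production name WEBHOST1 is an alias that resolves to the standby IP address:

```python
# Sketch: resolve names against /etc/hosts-style entries (illustrative only).
# Entries are the standby site /etc/hosts file from Example 3-4.

HOSTS_FILE = """
127.0.0.1      localhost.localdomain     localhost
123.2.2.111    STBYWEB1.MYCOMPANY.COM    WEBHOST1
123.2.2.112    STBYWEB2.MYCOMPANY.COM    WEBHOST2
123.2.2.113    STBYSOA1.MYCOMPANY.COM    SOAHOST1
123.2.2.114    STBYSOA2.MYCOMPANY.COM    SOAHOST2
"""

def lookup(name, hosts_text=HOSTS_FILE):
    """Return the IP for a host name or alias (case-insensitive)."""
    for line in hosts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and name.lower() in (f.lower() for f in fields[1:]):
            return fields[0]
    return None

print(lookup("WEBHOST1"))                 # 123.2.2.111 (production alias)
print(lookup("STBYWEB1.MYCOMPANY.COM"))   # 123.2.2.111 (physical name)
```

Both the physical standby name and the production alias resolve to the same standby IP address, which is what allows the standby middleware configuration to reference production host names unchanged.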

3.1.1.3 Resolving Host Names Using Separate DNS Servers

This manual uses the term "separate DNS servers" to refer to a Disaster Recovery topology where the production site and the standby site have their own DNS servers. When you use separate DNS servers to resolve host names for your Disaster Recovery topology, the following guidelines apply:

  1. Ensure that the hosts parameter in the /etc/nsswitch.conf file on all the production site and standby site hosts looks like this:

    hosts:   dns   files   nis
    
  2. The DNS servers on the production site and standby site must not be aware of each other and must contain entries for host names used within their own site.

  3. The DNS server entries on the production site should have the physical host names mapped to their IP addresses. Example 3-5 shows the DNS server entries for the production site of a SOA Enterprise Deployment topology:

    Example 3-5 DNS Entries for a Production Site Host in a Separate DNS Servers Configuration

    WEBHOST1.MYCOMPANY.COM    IN    A    123.1.2.111
    WEBHOST2.MYCOMPANY.COM    IN    A    123.1.2.112
    SOAHOST1.MYCOMPANY.COM    IN    A    123.1.2.113
    SOAHOST2.MYCOMPANY.COM    IN    A    123.1.2.114
    
  4. The DNS server entries on the standby site should have the physical host names of the production site mapped to their IP addresses. Example 3-6 shows the DNS server entries for the standby site of a SOA Enterprise Deployment topology:

    Example 3-6 DNS Entries for a Standby Site Host in a Separate DNS Servers Configuration

    WEBHOST1.MYCOMPANY.COM    IN    A    123.2.2.111
    WEBHOST2.MYCOMPANY.COM    IN    A    123.2.2.112
    SOAHOST1.MYCOMPANY.COM    IN    A    123.2.2.113
    SOAHOST2.MYCOMPANY.COM    IN    A    123.2.2.114
    
  5. Make sure there are no entries in the /etc/hosts file for any host at the production site or standby site.

  6. Test the host name resolution using the ping command. For a system configured with the production site DNS entries shown in Example 3-5, a ping webhost1 command on the production site would return the correct IP address (123.1.2.111) and also indicate that the host name is fully qualified.

  7. Similarly, for a system configured with the standby site DNS entries shown in Example 3-6, a ping webhost1 command on the standby site will return the correct IP address (123.2.2.111) and it will also indicate that the host name is fully qualified.
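As a toy illustration of the separate-DNS-servers model, the sketch below shows the same fully qualified host name resolving to a different IP address depending on which site's DNS server answers (A records taken from Example 3-5 and Example 3-6):

```python
# Sketch: with separate DNS servers, the same FQDN resolves to a different
# IP depending on which site's server is consulted (Examples 3-5 and 3-6).

PRODUCTION_DNS = {
    "WEBHOST1.MYCOMPANY.COM": "123.1.2.111",
    "SOAHOST1.MYCOMPANY.COM": "123.1.2.113",
}
STANDBY_DNS = {
    "WEBHOST1.MYCOMPANY.COM": "123.2.2.111",
    "SOAHOST1.MYCOMPANY.COM": "123.2.2.113",
}

def resolve(fqdn, site_dns):
    """Look up an A record in one site's DNS server (toy model)."""
    return site_dns.get(fqdn.upper())

# Peer hosts keep the same name but resolve within their own site.
print(resolve("webhost1.mycompany.com", PRODUCTION_DNS))  # 123.1.2.111
print(resolve("webhost1.mycompany.com", STANDBY_DNS))     # 123.2.2.111
```

Because each site's server only knows its own addresses, no alias host names are needed in this configuration.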

3.1.1.4 Resolving Host Names Using a Global DNS Server

This manual uses the term "global DNS server" to refer to a Disaster Recovery topology where a single DNS server is used for both the production site and the standby site. When you use a global DNS server to resolve host names for your Disaster Recovery topology, the following guidelines apply:

  1. When using a global DNS server, for the sake of simplicity, a combination of local host name resolution and DNS host name resolution is recommended.

  2. In this example, it is assumed that the production site uses DNS host name resolution and the standby site uses local host name resolution.

  3. The global DNS server should have the entries for both the production and standby site hosts. Example 3-7 shows the entries for a SOA Enterprise Deployment topology:

    Example 3-7 DNS Entries for Production Site and Standby Site Hosts When Using a Global DNS Server Configuration

    WEBHOST1.MYCOMPANY.COM    IN    A    123.1.2.111
    WEBHOST2.MYCOMPANY.COM    IN    A    123.1.2.112
    SOAHOST1.MYCOMPANY.COM    IN    A    123.1.2.113
    SOAHOST2.MYCOMPANY.COM    IN    A    123.1.2.114
    STBYWEB1.MYCOMPANY.COM    IN    A    123.2.2.111
    STBYWEB2.MYCOMPANY.COM    IN    A    123.2.2.112
    STBYSOA1.MYCOMPANY.COM    IN    A    123.2.2.113
    STBYSOA2.MYCOMPANY.COM    IN    A    123.2.2.114
    
  4. Ensure that the hosts parameter in the /etc/nsswitch.conf file on all the production site hosts looks like this:

    hosts:   dns   files   nis
    
  5. Ensure that the hosts parameter in the /etc/nsswitch.conf file on all the standby site hosts looks like this:

    hosts:   files   dns   nis
    
  6. The /etc/hosts file entries on the hosts of the standby site should have their physical host names mapped to their IP addresses, along with the physical host names of their corresponding peers on the production site defined as alias host names. For the sake of simplicity and ease of maintenance, it is recommended to have the same entries on all the hosts of the standby site. Example 3-8 shows the /etc/hosts file for the standby site of a SOA Enterprise Deployment topology:

    Example 3-8 Standby Site /etc/hosts File Entries When Using a Global DNS Server Configuration

    127.0.0.1      localhost.localdomain    localhost
    123.2.2.111    STBYWEB1.MYCOMPANY.COM    WEBHOST1
    123.2.2.112    STBYWEB2.MYCOMPANY.COM    WEBHOST2
    123.2.2.113    STBYSOA1.MYCOMPANY.COM    SOAHOST1
    123.2.2.114    STBYSOA2.MYCOMPANY.COM    SOAHOST2
    
  7. Test the host name resolution using the ping command. A ping webhost1 command on the production site would return the correct IP address (123.1.2.111) and also indicate that the host name is fully qualified.

  8. Similarly, a ping webhost1 command on the standby site would return the correct IP address (123.2.2.111) and also indicate that the host name is fully qualified.

3.1.1.5 Testing the Host Name Resolution

Validate that you have assigned host names properly by connecting to each host at the production site and using the ping command to ensure that the host can locate the other hosts at the production site.

Then, connect to each host at the standby site and use the ping command to ensure that the host can locate the other hosts at the standby site.
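A simple script can generate the verification commands for a site. The host list below is taken from the SOA Suite examples in this chapter, and the sketch only builds the ping commands rather than executing them:

```python
# Sketch: build the ping commands used to verify host name resolution at a
# site. Hosts are from the SOA Suite examples; commands are printed, not run.

SITE_HOSTS = ["WEBHOST1", "WEBHOST2", "SOAHOST1", "SOAHOST2"]

def ping_commands(hosts, count=3):
    """Return one 'ping -c <count> <host>' command string per host."""
    return ["ping -c %d %s" % (count, h.lower()) for h in hosts]

for cmd in ping_commands(SITE_HOSTS):
    print(cmd)   # e.g. ping -c 3 webhost1
```

Run the generated commands on each host at a site and confirm that every name resolves to the IP address expected for that site.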

3.1.2 Load Balancers and Virtual IP Considerations

Oracle Fusion Middleware components require a hardware load balancer when deployed in high availability topologies. It is recommended that the hardware load balancer have the following features:

  1. Ability to load-balance traffic to a pool of real servers through a virtual host name: Clients access services using the virtual host name (instead of using actual host names). The load balancer can then load balance requests to the servers in the pool.

  2. Port translation configuration.

  3. Monitoring of ports (HTTP and HTTPS).

  4. Virtual servers and port configuration: Ability to configure virtual server names and ports on your external load balancer. The virtual server names and ports must meet the following requirements:

    • The load balancer should allow configuration of multiple virtual servers. For each virtual server, the load balancer should allow configuration of traffic management on more than one port. For example, for Oracle Internet Directory clusters, the load balancer must be configured with a virtual server and ports for LDAP and LDAPS traffic.

    • The virtual server names must be associated with IP addresses and be part of your DNS. Clients must be able to access the load balancer through the virtual server names.

  5. Ability to detect node failures and immediately stop routing traffic to the failed node.

  6. Resource monitoring, port monitoring, and process failure detection: The load balancer must be able to detect service and node failures (through notification or some other means) and stop directing non-Oracle Net traffic to the failed node. If your load balancer can automatically detect failures, you should use this feature.

  7. Fault-tolerant mode: It is highly recommended that you configure the load balancer to be in fault-tolerant mode.

  8. Other: It is highly recommended that you configure the load balancer virtual server to return immediately to the calling client when the back-end services to which it forwards traffic are unavailable. This is preferred over the client disconnecting on its own after a timeout based on the TCP/IP settings on the client system.

  9. Sticky routing capability: Ability to maintain sticky connections to components based on cookies or URL.

  10. SSL acceleration: This feature is recommended, but not required.

  11. For the Identity Management configuration with Oracle Access Manager, configure the load balancer in the directory tier with higher timeout numbers (such as 59 minutes).

    Oracle Access Manager uses persistent LDAP connections for better performance and does not support using load balancers between the servers and directory servers as they tend to disrupt these persistent LDAP connections.

The virtual servers and associated ports must be configured on the load balancer for different types of network traffic and monitoring. These should be configured to the appropriate real hosts and ports for the services running. Also, the load balancer should be configured to monitor the real host and ports for availability so that the traffic to these is stopped as soon as possible when a service is down. This will ensure that incoming traffic on a given virtual host is not directed to an unavailable service in the other tiers.

It is recommended to use two load balancers when dealing with external and internal traffic. In such a topology, one load balancer is set up for external HTTP traffic and the other is set up for internal LDAP traffic. A deployment may choose to use a single load balancer device for a variety of reasons. While this is supported, consider the security implications of doing so and, if appropriate, open the relevant firewall ports to allow traffic across the various DMZs. In either case, it is highly recommended to deploy a given load balancer device in fault-tolerant mode. The virtual servers required for the various Oracle Fusion Middleware products are described in the following tables.

Table 3-11 Virtual Servers for Oracle SOA Suite

Components                Access     Virtual Server Name
Oracle SOA                External   soa.mycompany.com
Oracle SOA                Internal   soainternal.mycompany.com
Administration Consoles   Internal   admin.mycompany.com


Table 3-12 Virtual Servers for Oracle WebCenter

Components                Access     Virtual Server Name
Oracle WebCenter          External   wc.mycompany.com
Oracle WebCenter          Internal   wcinternal.mycompany.com
Oracle SOA                Internal   soainternal.mycompany.com (Footnote 1)
Administration Consoles   Internal   admin.mycompany.com


Footnote 1 Required when extending with SOA domain.

Table 3-13 Virtual Servers for Oracle Identity Management

Components                    Virtual Server Name
Oracle Internet Directory     oid.mycompany.com
Oracle Virtual Directory      ovd.mycompany.com
Oracle Identity Federation    oif.mycompany.com
Single Sign-On                sso.mycompany.com
Administration Consoles       admin.mycompany.com


Table 3-14 Virtual Servers for Oracle Portal, Forms, Reports, and Discoverer

Components                        Virtual Server Name
Oracle Portal                     portal.mycompany.com
Oracle Forms and Oracle Reports   forms.mycompany.com
Discoverer                        disco.mycompany.com
Administration Consoles           admin.mycompany.com


Table 3-15 Virtual Servers for Oracle Enterprise Content Management

Components                            Access     Virtual Server Name
Oracle Enterprise Content Management  External   ecm.mycompany.com
Oracle Enterprise Content Management  Internal   ecminternal.mycompany.com
Oracle SOA                            Internal   soainternal.mycompany.com (Footnote 1)
Administration Consoles               Internal   admin.mycompany.com


Footnote 1 Required when extending with SOA domain.
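As a toy illustration of the virtual-server model behind these tables, the sketch below routes requests for a virtual server name across a pool of real servers, skipping nodes marked down. The pool members and port 7777 are illustrative assumptions; a real hardware load balancer also provides the port monitoring, sticky routing, and fault tolerance described in Section 3.1.2:

```python
# Sketch: a virtual server that load-balances a virtual host name across a
# pool of real servers, skipping servers marked down. Toy round-robin only;
# real load balancers add monitoring, persistence, and failover. Pool hosts
# and port 7777 are illustrative assumptions, not values from this guide.
import itertools

class VirtualServer:
    def __init__(self, name, pool):
        self.name = name
        self.pool = pool                       # list of (host, port, up?)
        self._rr = itertools.cycle(range(len(pool)))

    def route(self):
        """Return the next healthy (host, port) in round-robin order."""
        for _ in range(len(self.pool)):
            host, port, up = self.pool[next(self._rr)]
            if up:                             # stop routing to failed nodes
                return (host, port)
        raise RuntimeError("no healthy servers in pool")

soa = VirtualServer("soa.mycompany.com",
                    [("WEBHOST1", 7777, True), ("WEBHOST2", 7777, True)])
print(soa.route())   # alternates between WEBHOST1 and WEBHOST2
print(soa.route())
```

Marking a pool member down (the third tuple field) models the requirement that the load balancer detect node failures and immediately stop routing traffic to the failed node.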

3.1.3 Wide Area DNS Operations

When a site switchover or failover is performed, client requests must be redirected transparently to the new site that is playing the production role. To direct client requests to the entry point of a production site, use DNS resolution. To accomplish this redirection, the wide area DNS that resolves requests to the production site has to be switched over to the standby site. The DNS switchover can be accomplished by either using a global load balancer or manually changing DNS names.

Note:

A hardware load balancer is assumed to be front-ending each site. Check for supported load balancers at:

http://support.oracle.com

The following topics are described in this section:

  • Using a Global Load Balancer

  • Manually Changing DNS Names

3.1.3.1 Using a Global Load Balancer

When a global load balancer is deployed in front of the production and standby sites, it provides fault detection services and performance-based routing redirection for the two sites. Additionally, the load balancer can provide authoritative DNS name server equivalent capabilities.

During normal operations, the global load balancer can be configured with the production site's load balancer name-to-IP mapping. When a DNS switchover is required, this mapping in the global load balancer is changed to map to the standby site's load balancer IP. This allows requests to be directed to the standby site, which now has the production role.

This method of DNS switchover works for both site switchover and failover. One advantage of using a global load balancer is that the time for a new name-to-IP mapping to take effect can be almost immediate. The downside is that an additional investment must be made for the global load balancer.

3.1.3.2 Manually Changing DNS Names

This method of DNS switchover involves the manual change of the name-to-IP mapping that is originally mapped to the IP address of the production site's load balancer. The mapping is changed to map to the IP address of the standby site's load balancer. Follow these instructions to perform the switchover:

  1. Make a note of the current Time to Live (TTL) value of the production site's load balancer mapping. This mapping is in the DNS cache, and it remains there until the TTL expires. For example, assume that the TTL is 3600 seconds.

  2. Modify the TTL value to a short interval (for example, 60 seconds).

  3. Wait one interval of the original TTL (the 3600 seconds from Step 1) so that entries cached with the old TTL expire.

  4. Ensure that the standby site is switched over to receive requests.

  5. Modify the DNS mapping to resolve to the standby site's load balancer, giving it the appropriate TTL value for normal operation (for example, 3600 seconds).

This method of DNS switchover works for both switchover and failover operations. The TTL value set in Step 2 bounds the period during which client requests may still be directed to the old site, so choose a value short enough that this window is acceptable. Modifying the TTL effectively changes the caching semantics of the address resolution from a long period to a short one; because of the shortened caching period, an increase in DNS requests can be observed.
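As an illustration only, the TTL and mapping changes in the steps above correspond to edits like the following in a BIND-style zone file. The record name, TTL values, and IP addresses here are hypothetical, and your DNS product may be configured differently:

```text
; Normal operation: long TTL, pointing at the production site's load balancer
wc.mycompany.com.    3600  IN  A  152.68.196.10

; Step 2: shorten the TTL so cached entries expire quickly
wc.mycompany.com.      60  IN  A  152.68.196.10

; Step 5: repoint to the standby site's load balancer, restore the normal TTL
wc.mycompany.com.    3600  IN  A  140.87.25.10
```

If your DNS uses zone files with secondary name servers, remember to increment the zone serial number on each change so that the change propagates.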

3.2 Storage Considerations

This section provides recommendations for designing storage for the Disaster Recovery solution for your enterprise deployment.

3.2.1 Oracle Fusion Middleware Artifacts

The Oracle Fusion Middleware components in a given environment are usually interdependent, so it is important to keep the components in the topology synchronized. This is an important consideration when designing volumes and consistency groups. Some of the artifacts are static while others are dynamic.

Static Artifacts

Static artifacts are files and directories that do not change frequently. These include:

  • MW_HOME: The Oracle Middleware home usually consists of an Oracle home and an Oracle WebLogic Server home.

  • Oracle Inventory: The oraInst.loc and oratab files, which are located in the /etc directory.

  • BEA Home List: On UNIX, this is located at user_home/bea/beahomelist.

Dynamic or Run-Time Artifacts

Dynamic or run-time artifacts are files that change frequently. Run-time artifacts include:

  • Domain Home: Domain directories of the Administration Server and the Managed Servers.

  • Oracle Instances: Oracle Instance home directories.

  • Application artifacts, such as .ear or .war files.

  • Database artifacts such as the MDS repository.

  • Database metadata repositories used by Oracle Fusion Middleware.

  • Persistent stores, such as JMS Providers and transaction logs.

3.2.2 Oracle Home and Oracle Inventory

Oracle Fusion Middleware allows you to create multiple Managed Servers from a single binary installation. This makes it possible to install the binaries in a single location on shared storage and reuse the installation for servers on different nodes. However, for maximum availability, Oracle recommends using redundant binary installations.

When an ORACLE_HOME or a WL_HOME is shared by multiple servers on different nodes, Oracle recommends keeping the Oracle Inventory and the Middleware home list on those nodes updated, so that installations and the application of patches remain consistent.

To update the oraInventory on a node and attach an installation on shared storage to it, use ORACLE_HOME/oui/bin/attachHome.sh.

To update the Middleware home list to add or remove a WL_HOME, edit the user_home/bea/beahomelist file. This is required for any nodes added beyond those shown in this topology.
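The two updates above can be sketched as follows. This is a minimal illustration, not a definitive procedure: the Middleware home path and WebLogic Server directory name are hypothetical, and the attachHome.sh invocation is only echoed here because it must be run against a real shared installation:

```shell
#!/bin/sh
# Hypothetical shared Middleware home as mounted on this node
MW_HOME=/u01/app/oracle/product/fmw
ORACLE_HOME=$MW_HOME/oracle_common
WL_HOME=$MW_HOME/wlserver_10.3

# 1. Register the shared Oracle home in this node's oraInventory.
#    (Echoed as a sketch; run the real script on an actual node.)
echo "$ORACLE_HOME/oui/bin/attachHome.sh"

# 2. Add the WL_HOME to this node's Middleware home list,
#    avoiding duplicate entries.
BEAHOMELIST=$HOME/bea/beahomelist
mkdir -p "$(dirname "$BEAHOMELIST")"
grep -qs "$WL_HOME" "$BEAHOMELIST" || echo "$WL_HOME" >> "$BEAHOMELIST"
cat "$BEAHOMELIST"
```

Repeat both steps on every node that shares the installation.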

3.2.3 Storage Replication

This section provides guidelines on creating volumes on the shared storage. Depending on the capabilities of the storage replication technology available with your preferred storage device, you may need to create mount points, directories, and symbolic links on each of the nodes within a tier.

If your storage device's storage replication technology guarantees consistent replication across multiple volumes:

  • Create one volume per server running on that tier. For example, on the application tier, you can create one volume for the WebLogic Administration Server and another volume for the Managed Servers.

  • Create one consistency group for each tier with the volumes for that tier as its members.

  • Note that if a volume is mounted by two systems simultaneously, a clustered file system may be required for this, depending on the storage subsystem. However, there is no known case of a single file or directory tree being concurrently accessed by Oracle processes on different systems. NFS is a clustered file system, so no additional clustered file system software is required if you are using NFS-attached storage.

If your storage device's storage replication technology does not guarantee consistent replication across multiple volumes:

  • Create a volume for each tier. For example, you can create one volume for the application tier, one for the web tier, and so on.

  • Create a separate directory for each node in that tier. For example, you can create a directory for SOAHOST1 under the application tier volume; create a directory for WEBHOST1 under the web tier volume, and so on.

  • Create a mount point directory on each node to the directory on the volume.

  • Create a symbolic link to the mount point directory. A symbolic link should be created so that the same directory structure can be used across the nodes in a tier.

  • Note that if a volume is mounted by two systems simultaneously, a clustered file system may be required for this, depending on the storage subsystem. However, there is no known case of a single file or directory tree being concurrently accessed by Oracle processes on different systems. NFS is a clustered file system, so no additional clustered file system software is required if you are using NFS-attached storage.
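The per-node directory, mount point, and symbolic link steps above can be sketched as follows. This is an illustration only: the volume mount point, node name, and link path are hypothetical (under /tmp here so the sketch is self-contained; in production the volume path is a real storage mount):

```shell
#!/bin/sh
# Hypothetical mount point of the replicated application tier volume
VOLUME=/tmp/mnt/apptier
# Per-node directory on the shared volume (one per node in the tier)
NODE_DIR=$VOLUME/SOAHOST1
# The path that every node in the tier uses, regardless of node name
LINK=/tmp/u01/app/oracle

# Create the node's directory on the tier volume
mkdir -p "$NODE_DIR"

# Create the symbolic link so the same directory structure
# can be used across all nodes in the tier
mkdir -p "$(dirname "$LINK")"
rm -f "$LINK"
ln -s "$NODE_DIR" "$LINK"

ls -ld "$LINK"
```

On SOAHOST2 the same link path would point at its own per-node directory, so software on every node sees an identical path.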

Note:

Before you set up the shared storage for your Disaster Recovery sites, read the high availability chapter in the Oracle Fusion Middleware Release Notes to learn of any known shared storage-based deployment issues in high availability environments.

The release notes for Oracle Fusion Middleware can be found at this URL:

http://www.oracle.com/technology/documentation/middleware.html

3.2.4 File-Based Persistent Store

The WebLogic Server application servers are usually clustered for high availability. For local site high availability of the Oracle SOA Suite topology, a file-based persistent store is used for Java Message Service (JMS) destinations and transaction logs (TLogs). This file-based persistent store must reside on shared storage that is accessible to all members of the cluster.

A SAN storage system should use a host-based clustered or shared file system technology, such as Oracle Cluster File System 2 (OCFS2). OCFS2 is a symmetric, shared-disk cluster file system that allows each node to read and write both metadata and data directly to the SAN.

Additional clustered file system software is not required when using NAS storage systems.

3.3 Database Considerations

This section provides the recommendations and considerations for setting up Oracle databases that will be used in the Oracle Fusion Middleware Disaster Recovery topology.

  1. Oracle recommends creating Real Application Cluster databases on both the production site and standby site as required by your topology.

  2. Oracle Data Guard is the recommended disaster protection technology for the databases running the metadata repositories.

  3. The Oracle Data Guard configuration should be chosen based on the data loss requirements of the database as well as network characteristics such as available bandwidth and latency relative to the rate of redo generation. Make sure these factors are determined correctly before setting up the Oracle Data Guard configuration.

    Please refer to Oracle Data Guard Concepts and Administration as well as related Maximum Availability Architecture collateral at the following URL for more information:

    http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm
    
  4. Ensure that your network is configured for low latency with sufficient bandwidth, since synchronous redo transmission can cause an impact on response time and throughput.

  5. The LOG_ARCHIVE_DEST_n parameter for the standby site databases should have the LGWR SYNC and AFFIRM archive attributes.

  6. The standby site database should be in managed recovery mode. This ensures that the standby site databases are in a constant state of media recovery, which enables shorter failover times.

  7. The tnsnames.ora file on the production site and the standby site must have entries for databases on both the production and standby sites.

  8. It is strongly recommended that you force Oracle Data Guard to perform a database synchronization whenever middle tier synchronization is performed. This is especially important for components that store configuration data in the metadata repositories.

  9. It is strongly recommended that you set up aliases for the database host names on both the production and standby sites. This enables seamless switchovers, switchbacks, and failovers.

3.3.1 Making TNSNAMES.ORA Entries for Databases

Because Oracle Data Guard is used to synchronize production and standby databases, the production database and standby database must be able to reference each other.

Oracle Data Guard uses tnsnames.ora file entries to direct requests to the production and standby databases, so entries for production and standby databases must be made to the tnsnames.ora file. See Oracle Data Guard Concepts and Administration in the Oracle Database documentation set for more information about using tnsnames.ora files with Oracle Data Guard.
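A minimal sketch of such entries follows, using the hypothetical host names from Section 3.3.3 and an assumed service name of orcl; the net service names ORCL_PROD and ORCL_STBY are illustrative, not prescribed. Entries like these must appear in the tnsnames.ora file at both sites:

```text
ORCL_PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = custdbhost1.us.oracle.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )

ORCL_STBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbycustdbhost1.us.oracle.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = orcl))
  )
```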

3.3.2 Manually Forcing Database Synchronization with Oracle Data Guard

For Oracle Fusion Middleware components that store middle tier configuration data in Oracle database repositories, use Oracle Data Guard to manually force a database synchronization whenever a middle tier synchronization is performed. Use the SQL statement ALTER SYSTEM ARCHIVE LOG ALL to switch the logs, which forces the synchronization of the production site and standby site databases.

Example 3-9 shows the SQL statement to use to force the synchronization of a production site database and standby site database.

Example 3-9 Manually Forcing an Oracle Data Guard Database Synchronization

ALTER SYSTEM ARCHIVE LOG ALL;
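After switching the logs, you can confirm on the standby database that the archived logs have arrived and been applied. One common check (a sketch, run as a privileged user on the standby) queries the V$ARCHIVED_LOG view:

```sql
-- On the standby database: list archived log sequences
-- and whether each has been applied
SELECT sequence#, applied
FROM   v$archived_log
ORDER  BY sequence#;
```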

3.3.3 Setting Up Database Host Name Aliases

Optionally, you can set up database host name aliases for the databases at your production site and standby site. The alias must be defined in DNS or in the /etc/hosts file on each node running a database instance.

In a Disaster Recovery environment, the site that actively accepts connections is the production site. At the completion of a successful failover or switchover operation, the standby site becomes the new production site.

This section includes an example of defining an alias for database hosts named custdbhost1 and stbycustdbhost1. Table 3-16 shows the database host names and the connect strings for the databases before the alias is defined.

Table 3-16 Database Host Names and Connect Strings

Site         Database Host Name              Database Connect String
Production   custdbhost1.us.oracle.com       custdbhost1.us.oracle.com:1521:orcl
Standby      stbycustdbhost1.us.oracle.com   stbycustdbhost1.us.oracle.com:1521:orcl


In this example, all database connect strings on the production site take the form "custdbhost1.us.oracle.com:1521:orcl." After a failover or switchover operation, this connect string must be changed to "stbycustdbhost1.us.oracle.com:1521:orcl." However, by creating an alias of "proddb1" for the database host name as shown in Table 3-17, you can avoid manually changing the connect strings, which enables seamless failovers and switchovers:

Table 3-17 Specifying an Alias for a Database Host

Site         Database Host Name              Alias                   Database Connect String
Production   custdbhost1.us.oracle.com       proddb1.us.oracle.com   proddb1.us.oracle.com:1521:orcl
Standby      stbycustdbhost1.us.oracle.com   proddb1.us.oracle.com   proddb1.us.oracle.com:1521:orcl


In this example, the production site database host name and the standby site database host name are both aliased to "proddb1.us.oracle.com", and the connect strings on both sites can take the form "proddb1.us.oracle.com:1521:orcl". On failover and switchover operations, the connect string does not need to change, enabling a seamless failover and switchover.

The format for specifying aliases in /etc/hosts file entries is:

<IP>    <ALIAS WITH DOMAIN> <ALIAS>    <HOST NAME WITH DOMAIN> <HOST NAME>

In this example, you create a database host name alias of proddb1 for host custdbhost1 at the production site and for host stbycustdbhost1 at the standby site. The hosts file entry should specify the fully qualified database host name alias with the <ALIAS WITH DOMAIN> parameter, the short database host name alias with the <ALIAS> parameter, the fully qualified host name with the <HOST NAME WITH DOMAIN> parameter, and the short host name with the <HOST NAME> parameter.

So, in the /etc/hosts files at the production site, make sure the entry for host custdbhost1 looks like this:

152.68.196.213 proddb1.us.oracle.com proddb1 custdbhost1.us.oracle.com custdbhost1

And, in the /etc/hosts files at the standby site, make sure the entry for host stbycustdbhost1 looks like this:

140.87.25.40   proddb1.us.oracle.com proddb1 stbycustdbhost1.us.oracle.com stbycustdbhost1
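As a quick sanity check of the entry format, the sketch below writes the documented production-site line to a scratch file and extracts the short alias and short host name fields with awk (the scratch path is hypothetical; on a real node you would inspect /etc/hosts itself):

```shell
#!/bin/sh
# Scratch copy of the documented /etc/hosts entry (production site)
HOSTS_FILE=/tmp/hosts.check
echo '152.68.196.213 proddb1.us.oracle.com proddb1 custdbhost1.us.oracle.com custdbhost1' > "$HOSTS_FILE"

# Fields per the documented format:
# $1=IP  $2=alias with domain  $3=alias  $4=host name with domain  $5=host name
awk '{ printf "alias=%s host=%s\n", $3, $5 }' "$HOSTS_FILE"
```

This prints the short alias and short host name, confirming that the fields are in the expected positions.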

3.4 Starting Points

Before setting up the standby site, the administrator must evaluate the starting point of the project. The starting point for designing an Oracle Fusion Middleware Disaster Recovery topology is usually one of the following:

  • An existing site

  • A completely new site

3.4.1 Starting with an Existing Site

When the administrator's starting point is an existing production site, the configuration data and the Oracle binaries for the production site already exist on the file system. Also, the host names, ports, and user accounts are already defined. When a production site exists, the administrator can choose to migrate the existing production site to shared storage.

3.4.1.1 Migrating an Existing Production Site to Shared Storage

The Oracle Fusion Middleware Disaster Recovery solution relies on shared storage to implement storage replication for disaster protection of the Oracle Fusion Middleware middle tier configuration. When a production site has already been created, it is likely that the Oracle home directories for the Oracle Fusion Middleware instances that comprise the site are not located on the shared storage. If this is the case, then these homes have to be migrated completely to the shared storage to implement the Oracle Fusion Middleware Disaster Recovery solution.

Follow these guidelines for migrating the production site from the local disk to shared storage:

  1. All backups performed must be offline backups. For more information, see "Types of Backups" and "Recommended Backup Strategy" in Oracle Fusion Middleware Administrator's Guide.

  2. The backups must be performed as the root user and the permissions must be preserved. See the "Overview of the Backup Strategies" section in Oracle Fusion Middleware Administrator's Guide.

  3. This is a one-time operation, so it is recommended to recover the entire domain.

  4. The directory structure on the shared storage must be set up as described in Section 4.1.1, "Directory Structure and Volume Design."

  5. For Oracle SOA Suite, see "Backup and Recovery Recommendations for Oracle SOA Suite" in Oracle Fusion Middleware Administrator's Guide.

  6. For Oracle WebCenter, see "Backup and Recovery Recommendations for Oracle WebCenter" in Oracle Fusion Middleware Administrator's Guide.

  7. For Oracle Identity Management, see "Backup and Recovery Recommendations for Oracle Identity Management" in Oracle Fusion Middleware Administrator's Guide.

  8. For Oracle WebLogic Server, see "Backup and Recovery Recommendations for Oracle JRF Installations" in Oracle Fusion Middleware Administrator's Guide.

  9. For the Web Tier, see "Backup and Recovery Recommendations for Web Tier Installations" in Oracle Fusion Middleware Administrator's Guide.

  10. For Oracle Portal, Oracle Forms, Oracle Reports, and Discoverer backup and recovery recommendations, see "Backup and Recovery Recommendations for Oracle Portal, Oracle Forms Services, and Oracle Reports Installations" in Oracle Fusion Middleware Administrator's Guide.

3.4.2 Starting with New Sites

This section presents the logic of implementing a new production site for an Oracle Fusion Middleware Disaster Recovery topology. It describes planning and setting up the production site by pre-planning host names, configuring the hosts to resolve the alias host names and physical host names, and ensuring that storage replication is set up to copy the configuration based on these names to the standby site. When you design the production site, you should also plan the standby site, which can be either a symmetric or an asymmetric standby site.

When you are designing a new production site (not using a pre-existing production site), you will use Oracle Universal Installer to install software on the production site, and parameters such as alias host names and software paths must be carefully designed to ensure that they are the same for both sites.

The flexibility you have when you create a new Oracle Fusion Middleware Disaster Recovery production site and standby site includes:

  1. You can design your Oracle Fusion Middleware Disaster Recovery solution so that each host at the production site and at the standby site has the desired alias host name and physical host name. Host name planning was discussed in Section 3.1.1, "Planning Host Names."

  2. When you design and create your production site from scratch, you can choose the Oracle home name and Oracle home directory for each Fusion Middleware installation.

    Designing and creating your site from scratch is easier than trying to modify an existing site to meet the design requirements described in this chapter.

  3. You can assign ports for the Fusion Middleware installations for the production site hosts that will not conflict with the ports that will be used at the standby site hosts.

    This is easier than having to check for and resolve port conflicts between an existing production site and standby site.

3.5 Topology Considerations

This section describes design considerations for symmetric and asymmetric topologies:

3.5.1 Design Considerations for a Symmetric Topology

A symmetric topology is an Oracle Fusion Middleware Disaster Recovery configuration that is completely identical across tiers on the production site and standby site. In a symmetric topology, the production site and standby site have the identical number of hosts, load balancers, instances, and applications. The same ports are used for both sites. The systems are configured identically and the applications access the same data. This manual describes how to set up a symmetric Oracle Fusion Middleware Disaster Recovery topology for an enterprise configuration.

3.5.2 Design Considerations for an Asymmetric Topology

An asymmetric topology is an Oracle Fusion Middleware Disaster Recovery configuration that differs across tiers on the production site and standby site. In an asymmetric topology, the standby site can use less hardware (for example, the production site could include four hosts with four Fusion Middleware instances, while the standby site includes two hosts with four Fusion Middleware instances). Or, in a different asymmetric topology, the standby site can use fewer Fusion Middleware instances (for example, the production site could include four Fusion Middleware instances while the standby site includes two Fusion Middleware instances). Another asymmetric topology might include a different configuration for a database (for example, using a Real Application Clusters database at the production site and a single-instance database at the standby site).