Oracle® Fusion Middleware Enterprise Deployment Guide for Oracle WebCenter Portal 11g Release 1 (11.1.1.6.0), Part Number E12037-06
This chapter describes how to prepare your file system for an Oracle WebCenter Portal enterprise deployment. It provides information about recommended directory structure and locations, and includes a procedure for configuring shared storage.
This chapter includes the following topics:
Section 4.1, "Overview of Preparing the File System for Enterprise Deployment"
Section 4.2, "Terminology for Directories and Directory Environment Variables"
Section 4.3, "About Recommended Locations for the Different Directories"
Section 4.4, "Configuring Shared Storage"
It is important to set up your file system in a way that makes the enterprise deployment easier to understand, configure, and manage. Oracle recommends setting up your file system according to the information in this chapter. The terminology defined in this chapter is used in diagrams and procedures throughout the guide.
Use this chapter as a reference to help understand the directory variables used in the installation and configuration procedures. Other directory layouts are possible and supported, but the model adopted in this guide was chosen for maximum availability: it provides the best isolation of components, keeps the configuration symmetric, and facilitates backup and disaster recovery. The rest of the document uses this directory structure and terminology.
This section describes the directory environment variables used throughout this guide for configuring the Oracle WebCenter Portal enterprise deployment. The following directory variables are used to describe the directories installed and configured in the guide:
ORACLE_BASE: This environment variable and related directory path refers to the base directory under which Oracle products are installed.
MW_HOME: This environment variable and related directory path refers to the location where Fusion Middleware (FMW) resides.
WL_HOME: This environment variable and related directory path contains installed files necessary to host a WebLogic Server.
ORACLE_HOME: This environment variable and related directory path refers to the location where either Oracle SOA Suite or Oracle WebCenter Portal is installed.
ORACLE_COMMON_HOME: This environment variable and related directory path refers to the Oracle home that contains the binary and library files required for the Oracle Enterprise Manager Fusion Middleware Control and Java Required Files (JRF).
DOMAIN Directory: This directory path refers to the location where the Oracle WebLogic Domain information (configuration artifacts) is stored. Different Oracle WebLogic Servers can use different domain directories even when in the same node as described below.
ORACLE_INSTANCE: An Oracle instance contains one or more system components, such as Oracle Web Cache, Oracle HTTP Server, or Oracle Internet Directory. An Oracle instance directory contains updateable files, such as configuration files, log files, and temporary files.
Tip:
You can simplify directory navigation by using environment variables as shortcuts to the locations in this section. For example, you could use an environment variable called $ORACLE_BASE in Linux to refer to /u01/app/oracle (that is, the recommended ORACLE_BASE location). In Windows, you would use %ORACLE_BASE% and use Windows-specific commands.
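For example, on Linux a login profile could define shortcut variables for the layout recommended in this chapter. This is only a convenience sketch: apart from ORACLE_BASE, the variable names below are shortcuts for this guide's paths, not settings required by the installer.

```shell
# Convenience variables matching this chapter's recommended layout.
# Only ORACLE_BASE is an Oracle-defined name; the rest are shortcuts.
export ORACLE_BASE=/u01/app/oracle
export MW_HOME=$ORACLE_BASE/product/fmw
export WL_HOME=$MW_HOME/wlserver_10.3
export ORACLE_HOME=$MW_HOME/wc              # WebCenter Portal Oracle home
export ORACLE_COMMON_HOME=$MW_HOME/oracle_common

# Navigation then becomes, for example:
echo "WebLogic Server home: $WL_HOME"
```

On Windows, the equivalent would use `set` and `%ORACLE_BASE%` syntax.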
With Oracle Fusion Middleware 11g you can create multiple SOA or WebCenter Portal servers from a single binary installation. This allows the binaries to be installed in a single location on shared storage and reused by the servers on different nodes. However, for maximum availability, Oracle recommends using redundant binary installations. In the Enterprise Deployment model, two MW_HOMEs (each of which has a WL_HOME and an ORACLE_HOME for each product suite) are installed on shared storage. Additional servers of the same type (when scaling out or up) can use either of these two locations without requiring more installations. Ideally, use two different volumes (referred to as VOL1 and VOL2 below) for the redundant binary locations, thus isolating failures in each volume as much as possible. For additional protection, Oracle recommends that these volumes be disk mirrored. If multiple volumes are not available, Oracle recommends using mount points to simulate the same mount location in a different directory in the shared storage. Although this does not guarantee the protection that multiple volumes provide, it does protect against user deletions and individual file corruption.
When an ORACLE_HOME or a WL_HOME is shared by multiple servers in different nodes, Oracle recommends keeping the Oracle Inventory and Middleware home list in those nodes updated for consistency in the installations and application of patches. To update the oraInventory in a node and "attach" an installation in shared storage to it, use ORACLE_HOME/oui/bin/attachHome.sh. To add or remove a WL_HOME from the Middleware home list, edit the <user_home>/bea/beahomelist file. These updates are required for any nodes installed in addition to the two used in this Enterprise Deployment. An example of the oraInventory and beahomelist updates is provided in the scale-out steps included in this guide.
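The steps above can be sketched as a short script for an additional node. This is a hedged example: the paths follow this guide's recommended layout, and the attachHome.sh invocation is guarded because the script exists only where the shared installation is actually mounted.

```shell
# Sketch: register a shared-storage MW_HOME on an additional node.
# Paths follow this guide's recommended layout; adjust to your environment.
ORACLE_BASE=/u01/app/oracle
MW_HOME=$ORACLE_BASE/product/fmw

# Update this node's oraInventory with the shared Oracle home
# (the script is present only where the shared installation is mounted):
if [ -x "$MW_HOME/wc/oui/bin/attachHome.sh" ]; then
    "$MW_HOME/wc/oui/bin/attachHome.sh"
fi

# Add the MW_HOME to the Middleware home list if it is not already listed
# (the beahomelist file holds one MW_HOME path per line):
mkdir -p "$HOME/bea"
grep -qx "$MW_HOME" "$HOME/bea/beahomelist" 2>/dev/null || \
    echo "$MW_HOME" >> "$HOME/bea/beahomelist"
```

Running the script twice is safe: the grep guard prevents duplicate beahomelist entries.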
Oracle also recommends separating the domain directory used by the Administration Server from the domain directory used by managed servers. This allows a symmetric configuration for the domain directories used by managed servers, and isolates the failover of the Administration Server. The domain directory for the Administration Server must reside in shared storage to allow failover to another node with the same configuration. The managed servers' domain directories can reside on local or shared storage.
You can use a shared domain directory for all managed servers in different nodes or use one domain directory per node. Sharing domain directories for managed servers facilitates the scale-out procedures. In this case, the deployment should conform to the requirements (if any) of the storage system to facilitate multiple machines mounting the same shared volume. The configuration steps provided in this Enterprise Deployment Topology assume that a local (per node) domain directory is used for each managed server.
All procedures that apply to multiple local domains apply to a single shared domain. Hence, this enterprise deployment guide uses a model where one domain directory is used per node. The directory can be local or reside in shared storage.
JMS file stores and JTA transaction logs need to be placed on a shared storage in order to ensure that they are available from multiple boxes for recovery in the case of a server failure or migration.
Based on these assumptions, the following sections describe the recommended directories. Wherever a shared storage location is directly specified, shared storage is required for that directory. Where shared storage is optional, the mount specification is qualified with "if using a shared disk." The shared storage locations are examples and can be changed as long as the provided mount points are used. However, Oracle recommends this structure in the shared storage device for consistency and simplicity.
ORACLE_BASE:
/u01/app/oracle
Domain Directory for Administration Server Domain Directory:
ORACLE_BASE/admin/domain_name/aserver/domain_name (the last "domain_name" is added by the Configuration Wizard)
Mount point on machine: ORACLE_BASE/admin/domain_name/aserver
Shared storage location: ORACLE_BASE/admin/domain_name/aserver
Mounted from: Only the node where the Administration Server is running needs to mount this directory. When the Administration Server is relocated (failed over) to a different node, that node mounts the same shared storage location on the same mount point. The remaining nodes in the topology do not need to mount this location.
Domain Directory for Managed Server Domain Directory:
ORACLE_BASE/admin/domain_name/mserver/domain_name
If you are using a shared disk, the mount point on the machine is ORACLE_BASE/admin/domain_name/mserver mounted to ORACLE_BASE/admin/domain_name/Noden/mserver/ (each node uses a different domain directory for managed servers).
Note:
This configuration depends on the shared storage in use. The example above is specific to NAS; other storage types may provide this redundancy with different types of mappings.
Location for JMS file-based stores and Tlogs (SOA only):
ORACLE_BASE/admin/domain_name/soa_cluster_name/jms
ORACLE_BASE/admin/domain_name/soa_cluster_name/tlogs
Mount point: ORACLE_BASE/admin/domain_name/soa_cluster_name/
Shared storage location: ORACLE_BASE/admin/domain_name/soa_cluster_name/
Mounted from: All nodes running SOA must mount this shared storage location so that transaction logs and JMS stores are available when server migration to another node takes place.
Location for Application Directory for the Administration Server
ORACLE_BASE/admin/domain_name/aserver/applications
Mount point: ORACLE_BASE/admin/domain_name/aserver/
Shared storage location: ORACLE_BASE/admin/domain_name/aserver
Mounted from: Only the node where the Administration Server is running must mount this directory. When the Administration Server is relocated (failed over) to a different node, that node mounts the same shared storage location on the same mount point. The remaining nodes in the topology do not need to mount this location.
Location for Application Directory for Managed Server
ORACLE_BASE/admin/domain_name/mserver/applications
Note:
This directory is local in the context of a SOA enterprise deployment.
MW_HOME (application tier)
ORACLE_BASE/product/fmw
Mount point: ORACLE_BASE/product/fmw
Shared storage location: ORACLE_BASE/product/fmw (VOL1 and VOL2)
Note:
When there is just one volume available in the shared storage, you can provide redundancy by using different directories to protect from accidental file deletions and for patching purposes. Two MW_HOMEs would then be available: one at ORACLE_BASE/product/fmw1 and another at ORACLE_BASE/product/fmw2. These MW_HOMEs are mounted on the same mount point in all nodes.
Mounted from: Nodes alternatively mount VOL1 or VOL2 so that at least half of the nodes use one installation, and half use the other.
In a WebCenter Portal enterprise deployment topology, SOAHOST1 and WCPHOST1 mount VOL1, and SOAHOST2 and WCPHOST2 mount VOL2. When only one volume is available, nodes mount the two suggested directories in shared storage alternately. For example, SOAHOST1 would use ORACLE_BASE/product/fmw1 as a shared storage location, and SOAHOST2 would use ORACLE_BASE/product/fmw2 as a shared storage location.
MW_HOME (web tier):
ORACLE_BASE/product/fmw/web
Mount point: ORACLE_BASE/product/fmw
Shared storage location: ORACLE_BASE/product/fmw (VOL1 and VOL2)
Note:
Web tier installation is typically performed on local storage to the WEBHOST nodes. When using shared storage, consider the appropriate security restrictions for access to the storage device across tiers.
This enterprise deployment guide assumes that the Oracle web tier will be installed onto local disk. You may install the Oracle Web Tier binaries (and the ORACLE_INSTANCE) onto shared disk. If so, the shared disk MUST be separate from the shared disk used for the application tier.
Mounted from: For Shared Storage installations, nodes alternatively mount VOL1 or VOL2 so that at least half of the nodes use one installation, and half use the other.
In a WebCenter Portal enterprise deployment topology, WEBHOST1 mounts VOL1 and WEBHOST2 mounts VOL2. When only one volume is available, nodes mount the two suggested directories in shared storage alternately. For example, WEBHOST1 would use ORACLE_BASE/product/fmw1 as a shared storage location, and WEBHOST2 would use ORACLE_BASE/product/fmw2 as a shared storage location.
WL_HOME:
MW_HOME/wlserver_10.3
ORACLE_HOME:
MW_HOME/wc (Oracle home for WebCenter Portal)
MW_HOME/soa (Oracle home for SOA Suite)
MW_HOME/wcc (Oracle home for WebCenter Content)
ORACLE_COMMON_HOME:
MW_HOME/oracle_common
ORACLE_INSTANCE (OHS instance):
ORACLE_BASE/admin/instance_name
If you are using a shared disk, the mount point on the machine is:
ORACLE_BASE/admin/instance_name
Mounted to:
ORACLE_BASE/admin/instance_name (VOL1)
Note:
VOL1 is optional; you could also use VOL2.
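The layout enumerated above can be pre-created with a short script. This is a sketch under stated assumptions: "wcpedg_domain" and "soa_cluster" are hypothetical example names, and the script must run as a user with rights to create directories under /u01.

```shell
# Sketch: pre-create the recommended shared storage layout.
# "wcpedg_domain" and "soa_cluster" are example names, not mandated by this guide.
# Run as a user with rights to create directories under /u01.
ORACLE_BASE=/u01/app/oracle
DOMAIN=wcpedg_domain

mkdir -p "$ORACLE_BASE/product/fmw"                          # MW_HOME (VOL1/VOL2)
mkdir -p "$ORACLE_BASE/admin/$DOMAIN/aserver/applications"   # Admin Server domain and applications
mkdir -p "$ORACLE_BASE/admin/$DOMAIN/mserver"                # managed server domain (per node)
mkdir -p "$ORACLE_BASE/admin/$DOMAIN/soa_cluster/jms"        # JMS file stores
mkdir -p "$ORACLE_BASE/admin/$DOMAIN/soa_cluster/tlogs"      # JTA transaction logs
```

The directories created on shared storage are then exposed to each node through the mount points described above.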
Figure 4-1 shows this directory structure in a diagram.
The directory structure in Figure 4-1 does not show other required internal directories, such as oracle_common and jrockit.
Table 4-1 explains what the various color-coded elements in the diagram mean.
Table 4-1 Directory Structure Elements
Element | Explanation
---|---
Shared-disk directories | The Administration Server domain directories, applications, deployment plans, file adapter control directory, JMS and TX logs, and the entire MW_HOME are on a shared disk.
Local or shared-disk directories | The managed server domain directories can be on a local disk or a shared disk. Further, if you want to share the managed server domain directories on multiple nodes, then you must mount the same shared disk location across the nodes.
Fixed names | Fixed name.
Installation-dependent names | Installation-dependent name.
Figure 4-2 shows an example configuration for shared storage with multiple volumes for WebCenter Portal. The example shows SOAHOST1 and SOAHOST2. In addition, managed server directories on WCPHOST1 and WCPHOST2 appear on VOL1 and VOL2 as shown.
Table 4-2 summarizes the directory structure for the domain. In the table:
WLS_WCP refers to all the WebCenter Portal managed servers: WC_Spaces, WC_Portlet, WC_Utilities, and WC_Collaboration.
WLS_WCC refers to the WebCenter Content managed server WLS_WCC, and includes WLS_IBR.
Table 4-2 Content of Shared Storage
Server | Type of Data | Volume in Shared Storage | Directory | Files
---|---|---|---|---
WLS_SOA1 | Tx Logs | VOL1 | ORACLE_BASE/admin/domain_name/soa_cluster_name/tlogs | The transaction directory is common (decided by WebLogic Server), but the files are separate.
WLS_SOA2 | Tx Logs | VOL1 | ORACLE_BASE/admin/domain_name/soa_cluster_name/tlogs | The transaction directory is common (decided by WebLogic Server), but the files are separate.
WLS_SOA1 | JMS Stores | VOL1 | ORACLE_BASE/admin/domain_name/soa_cluster_name/jms | The transaction directory is common (decided by WebLogic Server), but the files are separate; for example: SOAJMSStore1, UMSJMSStore1, and so on.
WLS_SOA2 | JMS Stores | VOL1 | ORACLE_BASE/admin/domain_name/soa_cluster_name/jms | The transaction directory is common (decided by WebLogic Server), but the files are separate; for example: SOAJMSStore2, UMSJMSStore2, and so on.
WLS_SOA1 | WLS Install | VOL1 | MW_HOME | Individual in each volume, but both servers see the same directory structure.
WLS_SOA2 | WLS Install | VOL2 | MW_HOME | Individual in each volume, but both servers see the same directory structure.
WLS_WCP1 | WLS Install | VOL1 | MW_HOME | Individual in each volume, but both servers see the same directory structure.
WLS_WCP2 | WLS Install | VOL2 | MW_HOME | Individual in each volume, but both servers see the same directory structure.
WLS_SOA1 | SOA Install | VOL1 | MW_HOME/soa | Individual in each volume, but both servers see the same directory structure.
WLS_SOA2 | SOA Install | VOL2 | MW_HOME/soa | Individual in each volume, but both servers see the same directory structure.
WLS_WCP1 | WebCenter Portal Install | VOL1 | MW_HOME/wc | Individual in each volume, but both servers see the same directory structure.
WLS_WCP2 | WebCenter Portal Install | VOL2 | MW_HOME/wc | Individual in each volume, but both servers see the same directory structure.
WLS_WCC1 | WebCenter Content Install | VOL1 | MW_HOME/wcc | Individual in each volume, but both servers see the same directory structure.
WLS_WCC2 | WebCenter Content Install | VOL2 | MW_HOME/wcc | Individual in each volume, but both servers see the same directory structure.
WLS_SOA1 | Domain Config | VOL1 | ORACLE_BASE/admin/domain_name/aserver/domain_name | Used only by the node where the Administration Server is running.
WLS_SOA1 | Domain Config | VOL1 | ORACLE_BASE/admin/domain_name/mserver/domain_name | Individual in each volume, but both servers see the same directory structure.
WLS_SOA2 | Domain Config | VOL2 | ORACLE_BASE/admin/domain_name/mserver/domain_name | Individual in each volume, but both servers see the same directory structure.
WLS_WCP1 | Domain Config | VOL1 | ORACLE_BASE/admin/domain_name/mserver/domain_name | Individual in each volume, but both servers see the same directory structure.
WLS_WCP2 | Domain Config | VOL2 | ORACLE_BASE/admin/domain_name/mserver/domain_name | Individual in each volume, but both servers see the same directory structure.
WLS_WCC1 | Web and Vault Files | VOL3 | ORACLE_BASE/admin/domain_name/wcc_cluster_name/vault | Directory for vault files on a separate volume with locking disabled.
WLS_WCC1 | Web and Vault Files | VOL3 | ORACLE_BASE/admin/domain_name/wcc_cluster_name/weblayout | Directory for weblayout files on a separate volume with locking disabled.
WLS_WCC2 | Web and Vault Files | VOL3 | ORACLE_BASE/admin/domain_name/wcc_cluster_name/vault | Directory for vault files on a separate volume with locking disabled.
WLS_WCC2 | Web and Vault Files | VOL3 | ORACLE_BASE/admin/domain_name/wcc_cluster_name/weblayout | Directory for weblayout files on a separate volume with locking disabled.
WLS_IBR1 | Inbound Refinery Files | VOL3 | ORACLE_BASE/admin/domain_name/ibr_cluster_name/ibrn | Directory for all inbound refinery files on a separate volume with locking disabled.
Note:
VOL3 is mounted as an NFS nolock volume. For details, see Section 4.4, "Configuring Shared Storage".
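If the nolock mount should persist across reboots, an /etc/fstab entry along these lines could be used on the nodes that mount VOL3. This is a hedged sketch: the filer name and paths are this guide's examples (with ORACLE_BASE kept as a placeholder), and the exact options should be confirmed with your storage vendor.

```
nasfiler:/vol/vol3/ORACLE_BASE/admin/wcdomain/wcc_cluster  ORACLE_BASE/admin/wcdomain/wcc_cluster  nfs  rw,bg,hard,vers=3,nolock  0 0
```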
Use the following commands to create and mount shared storage locations so that SOAHOST1, SOAHOST2, WCPHOST1, and WCPHOST2 can see the same location for binary installation in two separate volumes.
Note:
The user ID used to create a shared storage file system owns and has read, write, and execute privileges for those files. Other users in the operating system group can read and execute the files, but they do not have write privileges. For more information about installation and configuration privileges, see the "Understanding Installation and Configuration Privileges and Users" section in the Oracle Fusion Middleware Installation Planning Guide.
In these examples, nasfiler is the shared storage filer.
From SOAHOST1 and WCPHOST1:
mount nasfiler:/vol/vol1/ORACLE_BASE/product/fmw ORACLE_BASE/product/fmw -t nfs
From SOAHOST2 and WCPHOST2:
mount nasfiler:/vol/vol2/ORACLE_BASE/product/fmw ORACLE_BASE/product/fmw -t nfs
If only one volume is available, users can provide redundancy for the binaries by using two different directories in the shared storage and mounting them to the same directory in the SOA Servers:
From SOAHOST1:
mount nasfiler:/vol/vol1/ORACLE_BASE/product/fmw1 ORACLE_BASE/product/fmw -t nfs
From SOAHOST2:
mount nasfiler:/vol/vol1/ORACLE_BASE/product/fmw2 ORACLE_BASE/product/fmw -t nfs
The following commands show how to share the SOA TX logs location across different nodes:
From SOAHOST1:
mount nasfiler:/vol/vol1/ORACLE_BASE/stores/soadomain/soa_cluster/tlogs ORACLE_BASE/stores/soadomain/soa_cluster/tlogs -t nfs
From SOAHOST2:
mount nasfiler:/vol/vol1/ORACLE_BASE/stores/soadomain/soa_cluster/tlogs ORACLE_BASE/stores/soadomain/soa_cluster/tlogs -t nfs
The following commands show how to share the WebCenter Content and Inbound Refinery files across different nodes (mount points follow the same pattern as the earlier commands). From WCPHOST1:
mount nasfiler:/vol/vol3/ORACLE_BASE/admin/wcdomain/wcc_cluster/vault ORACLE_BASE/admin/wcdomain/wcc_cluster/vault -t nfs -o rw,bg,hard,vers=3,nolock
mount nasfiler:/vol/vol3/ORACLE_BASE/admin/wcdomain/ibr_cluster/ ORACLE_BASE/admin/wcdomain/ibr_cluster/ -t nfs -o rw,bg,hard,vers=3,nolock
Run the same commands from WCPHOST2.
Validating the Shared Storage Configuration
Ensure that you can read and write files to the newly mounted directories by creating a test file in the shared storage location you just configured.
For example:
$ cd <newly_mounted_directory>
$ touch testfile
Verify that the owner and permissions are correct:
$ ls -l testfile
Then remove the file:
$ rm testfile
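The checks above can be combined into a small helper that validates one mount point at a time. The directory in the example invocation follows this guide's layout and is illustrative; substitute each of your actual mount points.

```shell
# Sketch: verify that a mounted directory is writable, then clean up.
check_rw() {
    dir=$1
    f="$dir/testfile.$$"
    if touch "$f" 2>/dev/null; then
        ls -l "$f"            # confirm owner and permissions
        rm -f "$f"
        echo "OK: $dir is writable"
    else
        echo "FAIL: cannot write to $dir" >&2
        return 1
    fi
}

# Example invocation (path per this guide's recommended layout):
check_rw /u01/app/oracle/admin 2>/dev/null || true
```

Run the helper on every mount point configured in this section before proceeding with the installation.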
Note:
The shared storage can be a NAS or SAN device. The following example shows a mount command for a NAS device, run from SOAHOST1. The options may differ depending on the specific storage device.
mount nasfiler:/vol/vol1/fmw11shared ORACLE_BASE/wls -t nfs -o rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768
Contact your storage vendor and machine administrator for the correct options for your environment.