
Oracle® Database

Readme

10g Release 1 (10.1)

Part No. B12304-01

January 2004

Purpose of this Readme

This Readme file is relevant only to the delivered Oracle Database 10g Release 1 (10.1) product and its integral parts, such as SQL, PL/SQL, the Oracle Call Interface (OCI), SQL*Loader, Import/Export utilities, and so on.

This Readme documents differences between the server and its integral parts and their documented functionality, as well as known problems and workarounds.

For additions and corrections to the server documentation set, please refer to the Oracle® Database Documentation Addendum, which is available on the product CD. A complete list of open known bugs is also available on the product CD.

Operating system releases, such as UNIX, Windows, and OpenVMS, often provide readme documents specific to that operating system. Additional readme files may also exist. This Readme file is provided in lieu of system bulletins or similar publications.

Cover Letter and Licensing

Please read the cover letter included with your Oracle Database 10g Release 1 (10.1) package.

Documentation

The full list of books provided with this release is available at the following Web site:

http://otn.oracle.com/documentation


Contents

Compatibility, Upgrading, and Downgrading

Default Behavior Changes

Automatic Storage Management

Block Change Tracking

Companion CD

Configuration Assistants

Database Security

Globalization Support

Java and Web Services

Media Management Software

Oracle Advanced Security

Oracle Call Interface

Oracle Change Data Capture

Oracle Data Guard

Oracle Data Mining

Oracle HTML DB

Oracle interMedia

Oracle Internet Directory

Oracle Label Security

Oracle-Managed Files

Oracle Net Services

Oracle OLAP

Oracle Real Application Clusters

Oracle Sample Schemas

Oracle Spatial

Oracle Streams

Oracle Text

Oracle Ultra Search

Oracle XML Developer's Kit

PL/SQL

Pro*C

Pro*COBOL

Pro*FORTRAN

Replication

SQL

SQL*Module for Ada

SQL*Plus

Summary Management

Table Compression

Types

Utilities

Documentation Addendum

Open Bugs

Documentation Accessibility

Legal Notices

1 Compatibility, Upgrading, and Downgrading

Please note the following items for compatibility, upgrading, and downgrading:

1.1 Standard Edition Starter Database Upgrade

When the Standard Edition starter database is upgraded, the following components cannot be upgraded by the SE server because they require options that are not installed in the Standard Edition:

  • Oracle Data Mining

  • Spatial

  • OLAP Catalog

  • OLAP Analytic Workspace

  • Oracle OLAP API

After the upgrade, these components will have a STATUS value of 'OPTION OFF' in the DBA_REGISTRY view, and there will be some invalid objects in the associated component schemas. The Database Upgrade Assistant will show unsuccessful upgrades for these components.
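
For example, the following query (a minimal sketch) shows the upgrade status of each component recorded in the DBA_REGISTRY view; the components affected by the missing options show a STATUS of 'OPTION OFF':

SQL> SELECT comp_id, comp_name, status FROM dba_registry ORDER BY comp_id;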

1.2 Tablespace Sizing

The new Oracle Database 10g Pre-Upgrade Information Utility (utlu101i.sql) estimates the additional space required in the SYSTEM tablespace and in any tablespaces associated with the components that are in the database. However, there are some tablespace issues that are not addressed:

  • Temporary Tablespace: At least 50 MB of temporary tablespace is required to run the upgrade.

  • Tablespace with Queue Tables: Any user tablespaces with queue tables require some additional space for each queue.

  • Rollback Segments: One large (70 MB) public rollback segment is required for the upgrade; other smaller rollback segments should be OFFLINE.

The Database Upgrade Assistant automatically handles the rollback segment issue but does not handle the temporary tablespace issue or the queue table issue.

To avoid potential space problems during the upgrade, you can set one data file for each affected tablespace to AUTOEXTEND ON MAXSIZE UNLIMITED for the duration of the upgrade.
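
For example, the following statement is a sketch only; the datafile name is hypothetical and should be replaced with a datafile from the affected tablespace:

SQL> ALTER DATABASE DATAFILE '/u01/oradata/db101/system01.dbf'
     AUTOEXTEND ON MAXSIZE UNLIMITED;

After the upgrade completes, you can restore the original setting with the AUTOEXTEND OFF clause.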

1.3 Downgrading to Oracle9i Database Release 2

Customers with Oracle XML DB or OLAP Catalog can only downgrade to Oracle9i Release 9.2.0.5, or more recent patch sets.

1.4 JServer JAVA Virtual Machine Upgrade

The JServer NCOMP files, JAccelerator, are now on the Companion CD. If JServer is in the database that is being upgraded and the Companion CD is not installed, the following error will occur during a manual upgrade:

ORA-29558: JAccelerator (NCOMP) not installed.

JAccelerator can be installed either before or after the upgrade; installing it prior to the upgrade avoids the error.

This error will not be displayed if you are upgrading using the Database Upgrade Assistant (DBUA). Instead, DBUA shows a warning message at the end of the upgrade in the UpgradeResults page, stating which components should be installed from the Companion CD.

1.5 Converting Databases to 64-bit Oracle Database Software

If you are installing 64-bit Oracle Database 10g software but were previously using a 32-bit Oracle Database installation, your databases will be converted to 64-bit automatically during the upgrade.

1.6 Compatibility with Oracle8i Database

There is no support for Oracle Database 10g client or server connections to Oracle8i Database release 8.0.6 servers. Similarly, there is no support for connecting Oracle8i Database servers to Oracle Database 10g servers.

2 Default Behavior Changes

This section describes some of the differences in behavior between Oracle Database 10g and previous releases. The majority of the information about upgrading and downgrading is already included in the Oracle Database Upgrade Guide.


QUERY_REWRITE_ENABLED Parameter

The default value of the initialization parameter QUERY_REWRITE_ENABLED has changed. See Oracle Database Reference for details.


PARALLEL_ADAPTIVE_MULTI_USER Parameter

The default value of the initialization parameter PARALLEL_ADAPTIVE_MULTI_USER has changed. See Oracle Database Reference for details.


LOG_ARCHIVE_DEST_n Parameter

The LOG_ARCHIVE_DEST_n parameter can now specify standby destinations that are running Oracle Standard Edition, but only when specifying local destinations with the LOCATION attribute. See Section 2.3.2 "Oracle Software Requirements" in Oracle Data Guard Concepts and Administration for details.


Log Transport Services Password

Log transport services now require that all databases in an Oracle Data Guard configuration use a password file. The password for the SYS user must be identical in the password file of every system that is in the same Oracle Data Guard configuration. For more details, refer to Oracle Data Guard Concepts and Administration.


SHARED_SERVERS Parameter

When the initialization parameter SHARED_SERVERS is dynamically changed to 0, no new clients can connect in shared mode, but existing shared server connections can continue to operate. Prior to Oracle Database 10g, existing shared server connections would hang in this situation.
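
For example, the following statement (a minimal sketch) dynamically disables new shared server connections while leaving existing ones intact:

SQL> ALTER SYSTEM SET SHARED_SERVERS = 0;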


Offset for CLOB and NCLOB APIs

Starting with this release, APIs that write to a CLOB or NCLOB will cause error ORA-22831 when the offset specified for the beginning of the write is not on a character boundary in the existing LOB data.

LOB APIs use codepoint semantics for the amount and offset parameters when the database default or national character set is Unicode. For example, if the starting offset is in the middle of a surrogate pair, error ORA-22831 occurs and the data is not written. This avoids corrupting the character in the target LOB.

To configure the database so that it does not throw ORA-22831, you can set event 10973 to any level. When this event is set, data is written to the target LOB regardless of whether the offset is on a character boundary. Note that when supplemental logging is enabled, setting event 10973 does not disable error ORA-22831.
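
As a sketch only, the event can be set for the instance with the standard event syntax shown below; the level value 1 is arbitrary because, as noted above, any level disables the error:

SQL> ALTER SYSTEM SET EVENTS '10973 trace name context forever, level 1';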


SHARED_POOL_SIZE Parameter

The amount of shared pool memory allocated by previous Oracle Database releases was equal to the sum of the value of the SHARED_POOL_SIZE initialization parameter and the internal SGA overhead computed during instance startup. This overhead was based on the values of several other initialization parameters. As an example, if the SHARED_POOL_SIZE parameter is 64m and the internal SGA overhead is 12m, the real size of shared pool in the SGA would be 76m, although the value of the SHARED_POOL_SIZE parameter would still be displayed as 64m.

Starting with this release, the size of the internal SGA overhead is included in the value of the SHARED_POOL_SIZE parameter; the shared pool memory allocated at startup is exactly the value of SHARED_POOL_SIZE. Therefore, this parameter must be set such that it includes both the internal SGA overhead and the desired effective value of the shared pool size. Assuming that the internal SGA overhead remains unchanged, the effective available value of shared pool after startup would be 12m less than the value of the SHARED_POOL_SIZE parameter, or 52m. To maintain 64m for the effective value of shared pool memory, set the parameter to 76m.

Migration utilities for this release recommend new values for SHARED_POOL_SIZE based on the value of the internal SGA overhead in the pre-upgrade environment, which you can determine by running the following query before upgrading to Oracle Database 10g:

SQL> select sum(bytes) from v$sgastat where pool = 'shared pool';

In Oracle Database 10g, the exact value of internal SGA overhead, or Startup overhead in Shared Pool, is listed in the new v$sgainfo view.
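
For example, the following query (a minimal sketch) returns the internal SGA overhead reported by the new view:

SQL> SELECT name, bytes FROM v$sgainfo
     WHERE name = 'Startup overhead in Shared Pool';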

In the manual SGA mode, values of SHARED_POOL_SIZE that are too small to accommodate the internal SGA overhead result in an ORA-00371 error during startup. This generated error message includes a suggested value for the SHARED_POOL_SIZE parameter.

If you are using automatic shared memory management, the size of the shared pool is tuned automatically, and the ORA-00371 error is never generated.

3 Automatic Storage Management

Please note the following changes in Automatic Storage Management (ASM):

3.1 Spare Capacity Recommendation for ASM Normal and High Redundancy Disk Groups

ASM does not require hot spare drives to protect against disk failures, but it does require spare capacity. You must have sufficient spare capacity in your disk group to handle the largest failure you are willing to tolerate. After a disk fails, the reconstruction process depletes the spare capacity and, as a result, you may not have enough space to successfully create new files. The following guidelines help ensure that you have sufficient space to create files even if you suffer a disk failure.

  • In a normal redundancy disk group, you should have enough free space in your disk group to tolerate the loss of disks in one failure group.

  • In a high redundancy disk group, you should have enough free space to tolerate the loss of disks in two failure groups.

3.2 ASM Deinstall Related Precautions

ASM uses Cluster Synchronization Services (CSS) for inter-instance synchronization. In an Oracle Real Application Clusters (RAC) environment, CSS runs out of the CRS_HOME on each node. In a single node environment, start CSS by running the root.sh script on the first installed $ORACLE_HOME. ASM requires CSS in a single node environment for synchronization between the ASM and database instances.

If deinstalling an $ORACLE_HOME from a non-RAC machine that is running ASM, first check to see if the CSS daemon (CSSD) is running on the $ORACLE_HOME that you intend to deinstall. If it is and you wish to continue using ASM on this machine, migrate the CSS service to the $ORACLE_HOME from which you run your ASM instance by executing this script:

$ORACLE_HOME/bin/localconfig reset new_$ORACLE_HOME

Detailed instructions on how to migrate CSSD to a new $ORACLE_HOME are provided in the Oracle Database Installation Guide specific to your platform and the Oracle Real Application Clusters Installation and Configuration Guide Chapter 12 "Understanding the Real Application Clusters Installed Configuration".

If more than one database instance on a node will use ASM storage, you should have a separate $ORACLE_HOME for the ASM instance. Doing this will reduce the likelihood of accidentally deinstalling the ASM instance when deinstalling a database's $ORACLE_HOME. If a single database uses ASM on a machine, then both the database and ASM instances can run from the same $ORACLE_HOME.

4 Block Change Tracking

Please note the following change in block change tracking:

4.1 Block Change Tracking and Physical Standby Database

Block change tracking can be enabled at a physical standby database. However, changed blocks are not recorded while the physical standby database is receiving and applying redo from the primary database, so incremental backups taken during this time are not faster.

When the physical standby becomes a primary database, changes are once again recorded. Following the subsequent Level 0 backup, incremental backups take advantage of change tracking.

5 Companion CD

Although the installer for the Oracle Companion CD gives the user the option of installing Oracle Workflow, Oracle Workflow will be made available on a separate CD, or as a download from Oracle Technology Network, at:

http://otn.oracle.com/software/index.html

6 Configuration Assistants

Cloning a database using Database Configuration Assistant (DBCA) is only supported for a database that has tablespaces with blocks of equal size.

Enterprise Manager DB Cloning functionality does not have any restrictions on block size; it should be used as the primary tool for cloning a database.

7 Database Security

Please note the following changes in Database Security.

7.1 Changes in Default Security Settings

  • Oracle strongly recommends that customers discontinue using the CONNECT and RESOURCE roles, as they will be deprecated in future Oracle Database releases.

  • Granting password-protected roles or application roles to another role will not be allowed in future Oracle Database releases.

  • Use of PL/SQL packages with the UTL_ prefix should not depend on EXECUTE privileges granted to PUBLIC, as these privileges will be revoked in future Oracle Database releases.

  • Failed login attempts will be limited by default in future Oracle Database releases.

  • Java class oracle.security.rdbms.server.AppCtx will be deprecated in future Oracle Database releases.

  • Certificate based proxy authentication using OCI_ATTR_CERTIFICATE will not be supported in future Oracle Database releases. Use OCI_ATTR_DISTINGUISHED_NAME or OCI_ATTR_USERNAME attribute instead.

7.2 Enterprise User Security


Enterprise Security Manager

In Oracle Database 10g, Enterprise Security Manager only supports login using distinguished name (DN). Login using nickname only works in some circumstances.

Only members of the Oracle Context Administrators group, OracleContextAdmins, can use Enterprise Security Manager to change the default database-to-directory authentication method. Note that attempts to change this setting in Enterprise Security Manager cause it to overwrite the realm user search base and nickname attribute with the tool's cached values. If these realm attributes have recently been changed with a tool based on Oracle Internet Directory Delegated Administration Services (DAS), you must use another DAS-based tool to confirm these settings. DAS-based tools include Enterprise Security Manager Console and Oracle Internet Directory Self-Service Console.

To manage enterprise domains and enterprise roles in the directory, use only the versions of Enterprise Security Manager that are current with your Oracle Database release. For example, use only Enterprise Security Manager 10g to manage enterprise domains and roles for Oracle Database 10g. Alternatively, if you are using an Oracle9i Database, use only the Oracle9i Enterprise Security Manager to administer enterprise roles and domains for that database.


Oracle Internet Directory Changes and Issues that Affect Enterprise User Security
Default Nickname Attribute Change

In this release, the default nickname attribute changed from cn to uid, and it is set in each identity management realm.

orcladmin User Identity Changes

In a previous Oracle Internet Directory release, the orcladmin user represented a "virtual" user who possessed root privileges in the directory. In this release, each identity management realm includes an orcladmin user who is the root user of that realm only. These realm-specific orcladmin users are represented by the directory entries cn=orcladmin,cn=Users,<realm_DN>. Note that when you are logged in to Enterprise User Security administration tools as a realm-specific orcladmin user, then you can only manage directory objects for that realm. To manage objects in another realm, you must log in to administration tools as the orcladmin user for that realm.


Databases with Different Authentication Methods to Oracle Internet Directory Cannot Share the Same ldap.ora Files

The second port setting in the ldap.ora file is the SSL port, which must be configured for one of the following authentication options:

  • SSL with no authentication

  • SSL with mutual password-based authentication

  • SSL authentication between the database and the directory.

Databases with different methods of authenticating to Oracle Internet Directory cannot share the same ldap.ora file.

If these databases use the same $ORACLE_HOME, one of them should have a separate TNS_ADMIN directory with its own ldap.ora file (Bug 3327626).


Sharing Wallets and sqlnet.ora Files among Multiple Databases

Multiple nonreplicated databases cannot share wallets, and when sqlnet.ora files contain wallet location information, databases cannot share sqlnet.ora files either, unless you are using the following configuration: password-authenticated or Kerberos-authenticated Enterprise User Security with the default database-to-directory connection configuration that uses passwords. This configuration keeps database wallets in the default location, where Database Configuration Assistant creates them. In this situation, no wallet location information is stored in the sqlnet.ora file, and the wallet can be shared among multiple databases.

Note that when using SSL for enterprise user authentication, the wallet location must be specified in the sqlnet.ora file, so sqlnet.ora files cannot be shared by multiple databases for SSL-authenticated enterprise users.


Use the Database Configuration Assistant Regenerate Password Button to Resolve ORA-28043 Error

If you receive the error ORA-28043: Invalid bind credentials for DB/OID connection, then the database's directory password is no longer synchronized with the directory. Use the Regenerate Password button in Database Configuration Assistant to generate a new directory password for the database, synchronize it with the directory, and store it in the database wallet. See "To change the database's directory password" in Chapter 12 of the Oracle Advanced Security Administrator's Guide (Bug 3331096).


Use Only Administrative Tools from the Current Release to Create Enterprise Users

Do not use Enterprise Security Manager tools from previous Oracle Database releases, such as Oracle9i Database or earlier, to create enterprise users in the Oracle Database 10g identity management realm. Use only DAS-based tools that ship with Oracle Application Server 10g to create enterprise users in identity management realms. DAS-based tools include Enterprise Security Manager Console and Oracle Internet Directory Self-Service Console.


Disable Bind Plug-ins When Using Oracle Internet Directory in SSL with Mutual Authentication Mode

If Oracle Internet Directory is set up with external bind plug-ins, such as those used by the Directory Integration Services to synchronize with Microsoft Active Directory, then SSL connections between the database and Oracle Internet Directory fail. Disable all bind plug-ins if you need to use Oracle Internet Directory in SSL mode with mutual authentication (Bug 3292587).


Upgrading from Oracle9i

If you are upgrading from an Oracle9i Database (Release 9.1 or Release 9.2) to an Oracle Database 10g Release 1 (10.1), or if you are upgrading from Oracle Internet Directory Release 9.2 to Release 9.0.4, then use the following steps. Note that an Oracle9i Database Release 2 (9.2) will work with an Oracle Internet Directory Release 9.2 or Release 9.0.4, but that an Oracle Database 10g Release 1 (10.1) only works with Oracle Internet Directory Release 9.0.4.

Upgrading Oracle Internet Directory from Release 9.2 to Release 9.0.4

  1. Upgrade Oracle Internet Directory from Release 9.2 to Release 9.0.4 by using Oracle Internet Directory Configuration Assistant. Note that this is required if you want to register Oracle Database 10g instances in this directory.

    If you are not planning to use the Oracle Context for Oracle Database 10g, you can skip the following step.

  2. If they are not root contexts, you should upgrade all Oracle Contexts used for Enterprise User Security to Identity Management Realms. Use the Oracle Internet Directory Configuration Assistant command-line utility as follows:

    oidca mode=CTXTOIMR
    
    

    This step is required if you want to register an Oracle Database 10g database in this realm.

    You cannot use the root Oracle Context for Oracle Database 10g databases because it is not an Identity Management Realm. See Oracle Advanced Security Administrator's Guide for further information.

  3. Use Oracle Internet Directory tools, such as ldapmodify and bulkmodify, to add the orcluserV2 objectclass to existing user entries. This objectclass is required for users to change their database passwords, and for Kerberos authentication to the database.

  4. In a realm that contains both an Oracle9i Database (Release 9.1 or Release 9.2) and an Oracle Database 10g Release 1 (10.1), use a DAS-based tool from Oracle Internet Directory Release 9.0.4 (either Oracle Internet Directory Self-Service Console or Enterprise Security Manager Console) to create and manage users, including their passwords. Do not use Enterprise Security Manager or Enterprise Login Assistant from Oracle9i Database installations.

Upgrading Oracle Databases from Release 9.2 to Release 10.1

For each Oracle9i Database instance that you upgrade to Oracle Database 10g, perform the following steps:

  1. Use Oracle Wallet Manager to disable automatic login for the database wallet.

  2. Copy the database distinguished name (DN) from the initialization parameter rdbms_server_dn to a file in a secure location.

  3. Upgrade the database to Oracle Database 10g.

  4. Depending on where your database admin directory is stored, move the database wallet either to $ORACLE_HOME/admin/olddbuniquename/wallet or $ORACLE_BASE/admin/olddbuniquename/wallet. Note that $ORACLE_HOME is for the new Oracle Database 10g. You may have to create the wallet directory.

  5. Copy the old $ORACLE_HOME/network/admin/ldap.ora file to the new $ORACLE_HOME/ldap/admin/ldap.ora file. Alternatively, you can use Oracle Net Configuration Assistant to create a new ldap.ora file.

  6. Use the command-line utility, mkstore, to put the database DN (from the file in the previously created secure directory location) into the wallet by using the following syntax; you will be prompted for the wallet password.

    mkstore -wrl database_wallet_location -createEntry
       ORACLE.SECURITY.DN database_DN
    
    

    If you make a mistake, use the -modifyEntry option to correct it.

  7. Use Database Configuration Assistant to generate the database-to-directory password in the database wallet. Choose the Modify Database option.

  8. Use Oracle Wallet Manager to re-enable automatic login for the database wallet.

  9. Use Oracle Net Manager to set the new wallet location in the sqlnet.ora file to the directory previously specified in step 4.

The default for the nickname attribute, such as cn, remains unchanged; the upgrade process does not change the default nickname attribute setting.

After upgrading from Oracle Internet Directory Release 9.2 to Release 9.0.4, if you are unable to log into an Oracle Database 10g, you must use the DAS-based Oracle Internet Directory Self-Service Console to reset your password.

For more information about upgrading to an Identity Management Realm Release 9.0.4, see the Oracle Internet Directory section.


Do Not Use the Root Oracle Context for Enterprise User Security

Do not use the root Oracle Context in Oracle Internet Directory Release 9.0.4 for Enterprise User Security. This also applies for Oracle9i Database and earlier releases (Bug 3192487).


Database Registration and Unregistration Issues

To successfully unregister an Oracle Database 10g from Oracle Internet Directory by using Database Configuration Assistant, a user should be a member of one of the following directory administrative groups or combinations of groups:

  • A member of the Oracle Context Admin group

  • A member of both the Database Admin group (for the database you are unregistering) and the Database Security Admin group

  • A member of both the Database Admin group (for the database you are unregistering) and the Domain Admin group (for the enterprise domain that contains the database)

When using Database Configuration Assistant to register or unregister a database in the directory, restart the database to complete the process (Bug 3338107).

Database Configuration Assistant does not properly set the ldap_directory_access initialization parameter during database registration if the user performing database registration is a member of the OracleDBCreators group only (Bugs 3369834 and 3373789). To resolve this, use one of the following workarounds:

  • Use the ALTER SYSTEM command to set the ldap_directory_access parameter appropriately (see the example following this list).

  • Add the user performing the database registration to the Oracle Database Security Administrators, OracleDBSecurityAdmins, identity management realm administrators group in the directory.
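
For the first workaround, a statement such as the following can be used; this is a sketch only, and the value PASSWORD assumes that the database authenticates to the directory with a password rather than SSL:

SQL> ALTER SYSTEM SET LDAP_DIRECTORY_ACCESS = 'PASSWORD';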


Issues related to Using Oracle Database Wallets

You cannot use earlier versions of Oracle Wallet Manager to manage Oracle Database 10g wallets that contain the database's password-based credentials for authentication to Oracle Internet Directory. These credentials are placed in the wallet when an Oracle Database 10g is registered in Oracle Internet Directory.

The database wallet that Database Configuration Assistant automatically generates during database registration can only be used with an Oracle Database 10g (10.1) instance. You cannot use this database wallet for earlier versions of the database, nor can you use it for Oracle Internet Directory Release 9.0.4 or earlier.


DAS-based Tools Do Not Allow Changes in Case for Kerberos Principal Name Directory Attributes

To use DAS-based tools to change the case, lower or upper, of Kerberos Principal Names, change the name to something entirely different, and then change it again back to the actual name with the correct case. For example, if you want to change USER1@DOMAIN.COM to user1@DOMAIN.COM, you must first change the Kerberos Principal Name attribute to temp@DOMAIN.COM, and then change it to user1@DOMAIN.COM (Bug 3356955).

8 Globalization Support

Please note the following items when working with Globalization Support.

8.1 Updates to the Time Zone Files

When Oracle Database 10g starts, it loads the large time zone file, timezlrg.dat, by default, whereas the Oracle9i Database release loaded the small time zone file, timezone.dat. To override this behavior and continue loading the small time zone file, set the environment variable ORA_TZFILE to the absolute location of the small file, $ORACLE_HOME/oracore/zoneinfo/timezone.dat.

In Oracle Database 10g, the contents of the files timezone.dat and timezlrg.dat are updated to version 2 to reflect the transition rule changes for some time zone regions. Refer to $ORACLE_HOME/oracore/zoneinfo/readme.txt for detailed information about time zone file updates.

The transition rule changes of some time zones might affect the column data of the TIMESTAMP WITH TIME ZONE datatype. For example, if users enter TIMESTAMP '2003-02-17 09:00:00 America/Sao_Paulo', Oracle Database converts the data to UTC based on the transition rules in the time zone file and stores it on disk. So, 2003-02-17 11:00:00, along with the time zone ID for America/Sao_Paulo, is stored because the offset for this particular time is -02:00. The transition rules are now modified, and the offset for this particular time is changed to -03:00. When users retrieve the data, they get 2003-02-17 08:00:00 America/Sao_Paulo. There is a one hour difference compared to the original value.

To find all columns of TIMESTAMP WITH TIME ZONE datatype in the database, run the script $ORACLE_HOME/rdbms/admin/utltzuv2.sql before you update your database's time zone file to the new version. The result is stored in the table sys.sys_tzuv2_temptab. The table has five columns, table_owner, table_name, column_name, rowcount and nested_tab.
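
For example, after running the script, you can review the affected columns with a query such as the following (a minimal sketch):

SQL> SELECT table_owner, table_name, column_name, rowcount, nested_tab
     FROM sys.sys_tzuv2_temptab;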

If your database has column data that will be affected by the time zone file update, back up this data in another table or use the export utility to export this data for the backup. You can then upgrade to the new version. After the upgrade, update the data to make sure it is stored according to the new rules. Refer to the comments in utltzuv2.sql for more information.

Although the transition rule changes of some time zones might affect the column data of the TIMESTAMP WITH LOCAL TIME ZONE datatype, the data is already normalized to the database time zone and therefore cannot be updated.

For time zone regions in Brazil and Israel, the transition rules might change very frequently, sometimes as often as once every year. Oracle recommends that you use the time zone offset instead of the time zone region name to avoid future upgrade problems.

Customers using time zone regions that have been updated in version 2 of the time zone files are required to update all Oracle9i Database clients and databases that will communicate with an Oracle Database 10g server. This ensures that all environments will have the same version of the time zone file, version 2. This is not a requirement for other customers, but Oracle still recommends that you do so. Users who need to update their time zone files to version 2 can find the following information on OracleMetaLink, http://metalink.oracle.com:

  • readme.txt contains the list of time zone regions that have changed from version 1 to version 2.

  • Actual time zone files for version 2 for the Oracle9i Database release.

  • The utltzuv2.sql script, which must be run on the server side to find out whether the database has columns of type TIMESTAMP WITH TIME ZONE containing data for time zones that have changed from version 1 to version 2.

Oracle Database 10g clients that communicate with Oracle Database 10g servers automatically get version 2 of the time zone file, so there is no need to download the new time zone file.

8.2 Updates to the Oracle Language and Territory Definition Files

Changes have been made to the content in some of the language and territory definition files in Oracle Database 10g. These updates are necessary to correct legacy definitions that no longer meet the local conventions in some of the Oracle supported languages and territories. These changes include modifications to the currency symbols, month names, group separators, and so on. One example is the local currency symbol for Brazil, which has been updated from Cr$ to R$ in Oracle Database 10g.

To maintain backward compatibility, Oracle ships a set of Oracle9i Database locale definition files that can be used with Oracle Database 10g. For more information, please see the following file:

$ORACLE_HOME/nls/data/old/README.txt

8.3 Locale Variants

In previous database releases, Oracle defined language and territory definitions separately. This resulted in the definition of a territory being independent of the language setting. In Oracle Database 10g, some territories can have different date, time, number, and monetary formats, depending on the language setting of the session. One example is the number format (decimal and thousand separators) for Canada. When using English, a comma is a thousand separator; when using French, a comma is a decimal separator.
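
The following SQL*Plus sketch illustrates the Canadian example. It assumes the CANADIAN FRENCH and ENGLISH language names and that the session derives its numeric characters from the territory and language combination; the exact separators produced depend on the Oracle Database 10g locale definitions:

SQL> ALTER SESSION SET NLS_LANGUAGE = 'CANADIAN FRENCH';
SQL> ALTER SESSION SET NLS_TERRITORY = 'CANADA';
SQL> -- with French, the comma acts as the decimal separator
SQL> SELECT TO_CHAR(12345.67, '99G999D99') FROM dual;
SQL> ALTER SESSION SET NLS_LANGUAGE = 'ENGLISH';
SQL> -- with English, the comma acts as the thousand separator
SQL> SELECT TO_CHAR(12345.67, '99G999D99') FROM dual;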

8.4 Using Oracle9i Database Language and Territory Definition Files with Oracle Database 10g

In addition to the approach described in the $ORACLE_HOME/nls/data/old/README.txt file, which involves the copying of all definition files from $ORACLE_HOME/nls/data/old directory to the $ORACLE_HOME/nls/data directory, you can achieve the same result by setting the environment variable ORA_NLS10 to the $ORACLE_HOME/nls/data/old directory. All Oracle Database 10g clients need to be updated using one of these solutions.

9 Java and Web Services

Please note the following items when working with Java.

9.1 JavaVM

The JavaVM readme file is located at:

$ORACLE_HOME/javavm/doc/readme.txt

9.2 JDBC

For Instant Client operation of the JDBC Driver, the following files must be copied from the $ORACLE_HOME/jdbc/lib directory:

  • classes12.jar if JDK 1.2 or 1.3 will be used

  • orai18n.jar for Globalization and NLS support

  • ocrs12.jar for Oracle JDBC rowset implementation

The JDBC readme file is located at:

$ORACLE_HOME/jdbc/Readme.txt

9.3 JPublisher

The JPublisher software, including Database Web services, is provided on the Companion CD.

The JPublisher readme is at the following locations:

$ORACLE_HOME/sqlj/READMEJPub.txt
$ORACLE_HOME/sqlj/READMEJPub.html

9.4 Web Services

As an alternative to Oracle Net, Oracle Database Web services provides non-connected access to the database through standard Web services mechanisms such as XML, SOAP, and WSDL, and can turn the database into a Web services provider. Similarly, the database itself can act as a Web service consumer and invoke external Web services. Important features of Web services include:

  • A JAX-RPC based SOAP Client library supports invocation of external Web services from within the database, and applies the power of SQL to the results.

  • Web Services Call-In: Deploying a JPublisher-generated Java class against Oracle Application Server 10g enables you to invoke database operations like Java and PL/SQL procedures and packages, SQL queries, and DML operations.

  • Web Services Call-Out: Deploying a JPublisher-generated Web services client from a WSDL and its PL/SQL wrapper supports invocation of external Web services from within the database.

10 Media Management Software

Oracle Database 10g bundles Legato Single Server Version (LSSV) software to provide tape backups of your Oracle Database. It is fully integrated with Recovery Manager (RMAN) to back up your database on a single host. Legato NetWorker documentation can be obtained directly from Legato and can be found at the following Web site:

http://www.legato.com/lssv/

This site also contains any product updates for this NetWorker version.

If you have previously installed and used Legato Storage Manager (LSM) on your Oracle Database server, you can uninstall it and install this new version of Legato NetWorker. Any backups made by LSM can still be used by the new Legato NetWorker software.

11 Oracle Advanced Security

Please note the following items when working with Oracle Advanced Security. The Enterprise User Security feature is discussed in the Database Security section.

11.1 Data Encryption and Integrity

In this release, the features of Multiplexing and Connection Pooling do not work with SSL transport. See Oracle Database JDBC Developer's Guide and Reference for details of encryption support available in JDBC.


Encryption Algorithms in Java

In this release, the Oracle thin JDBC driver supports Triple-DES with 112- and 168-bit keys. To configure encryption that uses Triple-DES, specify either 3DES112 or 3DES168 for the ORACLE.NET.ENCRYPTION_TYPES_CLIENT parameter in the properties file on the server, and set the connection property, oracle.net.encryption_types_client, to the same value in the Java client.

Thick JDBC (OCI Driver) Supported Algorithms include RC4_256, RC4_128, RC4_56, RC4_40, 3DES112, 3DES168, AES256, AES192, and AES128.

Thin JDBC Driver Supported Algorithms include RC4_256, RC4_128, RC4_56, RC4_40, 3DES112, and 3DES168.

For more information about configuring these encryption algorithms for JDBC, refer to Chapter 4 in the Oracle Advanced Security Administrator's Guide and Chapter 23 in the Oracle Database JDBC Developer's Guide and Reference.

11.2 External Authentication and Single Sign-on

The Cybersafe adapter is desupported beginning with this release. You should use Oracle's Kerberos adapter in its place. Kerberos authentication with the Cybersafe KDC (Trust Broker) continues to be supported when using the Kerberos adapter.


Changes to the Startup Command

In addition to setting the REMOTE_OS_AUTHENT initialization parameter to FALSE, you should issue the startup command with a PFILE option. This ensures that the parameters from your initSID.ora are used.
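
For example, from SQL*Plus (a sketch only; the path and SID shown are hypothetical):

SQL> STARTUP PFILE=/u01/app/oracle/product/10.1.0/db_1/dbs/initORCL.ora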

11.3 Secure Sockets Layer

There is a known bug in which an OCI client requires a wallet even when using a cipher suite with DH_ANON, which does not authenticate the client.

11.4 Oracle Wallet Manager

Oracle Wallet Manager Online Help becomes unresponsive when modal dialog boxes appear, such as the one for entering certificate request information. The Online Help becomes responsive once the modal dialog box is closed.

To use Oracle Wallet Manager with PKCS #11 integration on the 64-bit Solaris Operating System, enter the following at the command line:

owm -pkcs11

11.5 JAVASSL and JSSE

Use the jsse.jar file provided by your platform vendors for Java SSL requests. Oracle JavaSSL is desupported starting with this release.

11.6 Entrust Support

This release supports Entrust on HP-UX 64-bit, Solaris Operating System 64-bit, and HP Tru64 UNIX platforms.

12 Oracle Call Interface

Please note the following items when working with Oracle Call Interface.

12.1 Header Files

With the current release, the OCI/OCCI header files that are required for OCI and OCCI client application development on UNIX platforms reside in the $ORACLE_HOME/rdbms/public directory. The demo_rdbms.mk file remains in the $ORACLE_HOME/rdbms/demo directory and continues to serve as an example makefile.

Unless you significantly modified the demo_rdbms.mk file, you are not affected. This is because the demo_rdbms.mk file already includes the $ORACLE_HOME/rdbms/public directory. Ensure that your highly customized makefiles have the $ORACLE_HOME/rdbms/public directory in the INCLUDE path.

All demonstration programs and header files continue to reside in the $ORACLE_HOME/rdbms/demo directory. As with all demonstrations, these files are only installed from the Companion CD.

The OCI/OCCI header files required for development, located in $ORACLE_HOME/rdbms/public, are available both with the Oracle Database 10g Server installation, and with the Oracle Database 10g Client Administration and Custom installations.

12.2 XA Header Files

The xa.h header file resides at the same location as all other OCI/OCCI header files:

  • For UNIX, the new path is $ORACLE_HOME/rdbms/public.

  • For Windows, the new path is $ORACLE_HOME/oci/include.

Users of the demo_rdbms.mk file on UNIX are not affected because $ORACLE_HOME/rdbms/public is already in the makefile.

12.3 Instant Client Installation Over an Existing Oracle Database Server Installation Fails

Instant Client can be installed from the Oracle Client CD through either the Client Admin or the Instant Client option. In both cases, installation over a pre-existing Oracle Database server installation fails. You should install Instant Client into a clean directory, or onto a different machine.

12.4 OCIBreak()

OCIBreak() aborts a running OCI call on certain connections. It can be called by a user thread in multi-threaded applications, or by a user signal handler on UNIX systems. OCIBreak() is the only OCI call allowed in a user signal handler.

13 Oracle Change Data Capture

Please note the following items when working with Oracle Change Data Capture.

13.1 Database Configuration Assistant Considerations

If using the Database Configuration Assistant to create a database that uses one of the predefined templates, you can choose any of the following templates:

  • General Purpose

  • Data Warehouse

  • Transaction Processing

  • New Database

The General Purpose, Data Warehouse, and Transaction Processing database templates support the Oracle Change Data Capture feature.

If you choose the New Database option to build a custom database, you must select the database feature Oracle JVM from the Additional database configurations dialog box. Oracle JVM is already selected by default; do not deselect it. Oracle Change Data Capture requires the Oracle JVM feature.

13.2 Using Source Tables from Oracle9i Database

It is possible to use Oracle Streams or Advanced Replication and Change Data Capture to capture changes from a source table in versions prior to Oracle Database 10g.

  • First, replicate your Oracle9i Database Release 2 source table to an Oracle Database 10g database using Streams. Alternatively, you can use Advanced Replication to replicate your source table from Oracle Database versions prior to Oracle9i Database Release 2 to an Oracle Database 10g database.

  • Next, capture the changes from the replicated table in the Oracle Database 10g using Oracle Change Data Capture in any of its modes: synchronous, HotLog, or AutoLog.

Be aware that the control columns of such a change table contain values from the replicated table, not from the original Oracle9i Database source table. For example, a given CSCN$ value reflects the commit SCN from the replicated source table rather than the original commit SCN from the Oracle9i Database instance. This is also the case for the ROW_ID$ and USERNAME$ control columns.

13.3 Removing Multiple DDLs from an Asynchronous Change Set that Stops on DDL

When an asynchronous Change Data Capture change set stops on a DDL, the DDL must be removed by calling DBMS_CDC_PUBLISH.ALTER_CHANGE_SET() with both the recover_after_error and the remove_ddl parameters set to Y, and then Change Data Capture must be reenabled for the change set.

If there are multiple consecutive DDLs, the change set stops for each one separately. In a case where there are two consecutive DDLs, the change set stops for the first DDL. You must remove the first DDL and reenable capture for the change set. The change set stops again for the second DDL. At this point, you must remove the second DDL and reenable capture for the change set.
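
As a minimal sketch, the removal and re-enable steps might look like the following. The change set name CHICAGO_DAILY is hypothetical, and the change_set_name and enable_capture parameter names are assumptions; check the PL/SQL Packages and Types Reference for the exact DBMS_CDC_PUBLISH.ALTER_CHANGE_SET signature:

BEGIN
  -- remove the DDL that stopped the change set and recover it
  DBMS_CDC_PUBLISH.ALTER_CHANGE_SET(
    change_set_name     => 'CHICAGO_DAILY',
    recover_after_error => 'Y',
    remove_ddl          => 'Y');
  -- reenable capture for the change set
  DBMS_CDC_PUBLISH.ALTER_CHANGE_SET(
    change_set_name => 'CHICAGO_DAILY',
    enable_capture  => 'Y');
END;
/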

13.4 Drop User Cascade and Oracle Change Data Capture Objects

If an Oracle Change Data Capture publisher is dropped with a DROP USER CASCADE, then all Change Data Capture objects owned by that publisher are dropped, unless they are containers for Oracle Change Data Capture objects owned by other publishers.

For example, if publisher CDCPUB1 owns the change set CDCPUB1_SET that contains the change table CDCPUB2.SALES_CT, a DROP USER CASCADE on CDCPUB1 does not drop CDCPUB1_SET. CDCPUB1_SET can be dropped by any publisher using the DBMS_CDC_PUBLISH.DROP_CHANGE_SET() interface when all of its change tables have been dropped.

13.5 Recreating AutoLog Change Data Capture Objects After Data Pump Import

AutoLog Change Data Capture objects are not supported for Data Pump import and export. AutoLog change sources, change sets and change tables, as well as subscriptions to AutoLog change sets, are not exported. However, these objects use other objects that are exported: the table underlying a change table, subscriber views, a sequence used by the change set, and a Streams apply process, queue, and queue table.

In order to recreate an AutoLog Change Data Capture configuration after a Data Pump import, you must clean up these underlying objects before using Change Data Capture interfaces to recreate these objects. The following table summarizes each underlying object and the method used to clean it up after a Data Pump import.

Method                              Description
DROP TABLE                          Drops the table underlying a change table.
DROP VIEW                           Drops the subscriber view.
DROP SEQUENCE                       Drops the sequence used by a change set.
dbms_apply_adm.drop_apply()         Drops the Streams apply process.
dbms_aqadm.drop_queue()             Drops the Streams queue.
dbms_aqadm.drop_queue_table()       Drops the Streams queue table.

The name of the sequence used by a change set can be obtained by querying ALL_SEQUENCES for a sequence name that begins with CDC$ and contains at least the initial characters of the change set name.

The names of the Streams objects can be obtained by querying the DBA_APPLY, DBA_QUEUES, and DBA_QUEUE_TABLES views for names that begin with CDC$ and contain at least the initial characters of the change set name.
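
For example, the following queries are a minimal sketch that locates these underlying objects for a hypothetical change set named CHICAGO_DAILY:

SQL> SELECT sequence_name FROM all_sequences
     WHERE sequence_name LIKE 'CDC$%CHICAGO%';

SQL> SELECT apply_name FROM dba_apply
     WHERE apply_name LIKE 'CDC$%CHICAGO%';

SQL> SELECT name FROM dba_queues
     WHERE name LIKE 'CDC$%CHICAGO%';

SQL> SELECT queue_table FROM dba_queue_tables
     WHERE queue_table LIKE 'CDC$%CHICAGO%';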

13.6 Synchronous Oracle Change Data Capture Limitation on Source Tables Restored from Recycle Bin

If the source table for a synchronous Oracle Change Data Capture change table is dropped and then restored from the recycle bin, changes are no longer captured in that change table. You must create a new synchronous change table to capture future changes to the restored source table.

13.7 SUBSCRIPTION_NAME and SUBSCRIBER_VIEW Parameters in DBMS_CDC_SUBSCRIBE

If using the DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION() interface, you must now provide an explicit value for the subscription_name parameter. Similarly, subscribers using either form of the DBMS_CDC_SUBSCRIBE.SUBSCRIBE() interface must now provide an explicit value for the subscriber_view parameter.
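
A minimal sketch of a subscription created with an explicit name follows. The change set, description, and subscription names are hypothetical, and the change_set_name and description parameter names are assumptions; only the subscription_name parameter is confirmed above:

BEGIN
  DBMS_CDC_SUBSCRIBE.CREATE_SUBSCRIPTION(
    change_set_name   => 'CHICAGO_DAILY',
    description       => 'Change data for the sales tables',
    subscription_name => 'SALES_SUB');
END;
/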

13.8 {ALL,DBA,USER}_SOURCE_TAB_COLUMNS Views Replaced by {ALL,DBA,USER}_PUBLISHED_COLUMNS Views

The ALL_SOURCE_TAB_COLUMNS, DBA_SOURCE_TAB_COLUMNS and USER_SOURCE_TAB_COLUMNS data dictionary views have been replaced with the ALL_PUBLISHED_COLUMNS, DBA_PUBLISHED_COLUMNS and USER_PUBLISHED_COLUMNS data dictionary views.

13.9 Incorrect References to DBA_CAPTURE.SAFE_PURGE_SCN

In the Oracle Data Warehousing Guide, section "Asynchronous Change Data Capture and Redo Log Files", there are some incorrect references to the data dictionary view column DBA_CAPTURE.SAFE_PURGE_SCN. These references should instead be to the column DBA_CAPTURE.REQUIRED_CHECKPOINT_SCN.

14 Oracle Data Guard

Please note the following items when working with Oracle Data Guard.

14.1 Upgrade Behavior

Oracle Data Guard now requires that you set the DB_UNIQUE_NAME initialization parameter to a unique value for every database in an Oracle Data Guard configuration that uses the same DB_NAME. The DB_UNIQUE_NAME initialization parameter has replaced the LOCK_NAME_SPACE initialization parameter because the value of DB_UNIQUE_NAME does not change even when the primary and standby databases reverse roles. For more details, refer to Oracle Data Guard Concepts and Administration.
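
For example, the initialization parameters of a primary and standby database pair might be set as follows (a sketch only; the names shown are hypothetical):

# primary database
DB_NAME=sales
DB_UNIQUE_NAME=sales_chicago

# standby database
DB_NAME=sales
DB_UNIQUE_NAME=sales_boston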

14.2 DDL Statements that Use DBLINKS

On logical standby databases, avoid using SQL statements such as CREATE TABLE tablename AS SELECT * FROM bar@dblink as they may fail.

When a statement is executed on the logical standby database, it will access the database link at that time. It is not possible to know if the information on the logical standby database is the same as it was at the time the statement was executed on the primary database. For example, additional columns may have been added or dropped; this can make it impossible to apply the rows that follow. Assuming that the network was set up so that the initial creation succeeded, you may see the following error: ORA-26689: column datatype mismatch in LCR for a table containing nested table columns. Also, the ORA-02019: connection description for remote database not found error may be returned if the database link or the TNS service was undefined on the logical standby database.

When this happens, use the DBMS_LOGSTDBY.INSTANTIATE_TABLE procedure for the table being created, and then restart SQL APPLY operations.
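
A minimal sketch of the recovery sequence follows. The schema, table, and database link names are hypothetical, and the named parameters of DBMS_LOGSTDBY.INSTANTIATE_TABLE are an assumption; check the PL/SQL Packages and Types Reference for the exact signature:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE( -
       schema_name => 'SCOTT', -
       table_name  => 'NEW_TABLE', -
       dblink      => 'primary_db_link');
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;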

14.3 Logical Standby Databases on the Same Node as the Primary Database

If a logical standby database is located on the same computer system as the primary database, it is likely that both Oracle Database instances have access to the same directory structure. Some Oracle Database commands reuse datafiles. If such a command is applied on the primary database, it may also be applied on the logical standby database. If that happens while the primary database is shut down, it is possible for the logical standby database to claim the file as part of its database and possibly cause damage to the primary database.

For this reason, Oracle recommends using the following setting when running the primary and logical standby databases on the same computer system:

EXECUTE DBMS_LOGSTDBY.SKIP('ALTER TABLESPACE');

14.4 Logical Standby Stops with Error ORA-00955

When this error occurs, check the alert log for the following output:

LOGSTDBY stmt: Create table anyddl.anyobj... 
LOGSTDBY status: ORA-16542: unrecognized operation 
LOGSTDBY status: ORA-16222: automatic Logical Standby retry of last action 
LOGSTDBY status: ORA-16111: log mining and apply setting up 
LOGSTDBY stmt: Create table anyddl.anyobj... 
LOGSTDBY status: ORA-00955: name is already used by an existing object 

The initial attempt to apply the DDL actually completed, but failed to be recorded. The ORA-16542 error identifies this problem.

You can restart the logical standby database using the following command:

ALTER DATABASE START LOGICAL STANDBY APPLY SKIP FAILED TRANSACTION;

14.5 ALTER DATABASE GUARD Issues on Oracle Real Application Clusters

For logical standby databases running on an Oracle Real Application Clusters system, you must issue the ALTER DATABASE GUARD statement on each active instance for it to be effective on all instances in the cluster.

14.6 Defining Destinations for Standby Log Files When Using a Flash Recovery Area with Logical Standby Databases

If you have enabled a flash recovery area on a logical standby database, you must set the following initialization parameters on each logical standby database in the Oracle Data Guard configuration:

  • Define the STANDBY_ARCHIVE_DEST parameter to point to a location other than the flash recovery area. Doing so ensures that standby redo log files received from the primary database are not accidentally archived in the flash recovery area.

    For example, when creating a logical standby database as described in Section 4.2.3.2 of Oracle Data Guard Concepts and Administration, define the following parameter on the logical standby database: STANDBY_ARCHIVE_DEST='/arch2/boston/'

  • Define a LOG_ARCHIVE_DEST_n parameter and include the LOCATION and the VALID_FOR=(STANDBY_LOGFILES, STANDBY_ROLE) attributes to direct archiving of standby redo log files to a destination that is not the flash recovery area. For example,

    LOG_ARCHIVE_DEST_3=
       'LOCATION=/arch2/boston/
       VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
       DB_UNIQUE_NAME=boston'
    
    

In addition, Oracle recommends that you set the same initialization parameters on the primary database to prepare it for a future switchover:

  • Define the STANDBY_ARCHIVE_DEST to point to a location other than the flash recovery area.

    For example, when creating a logical standby database as described in Section 4.2.2.2 of Oracle Data Guard Concepts and Administration, define the following parameter on the primary database: STANDBY_ARCHIVE_DEST='/arch2/chicago/'.

  • Define a LOG_ARCHIVE_DEST_n parameter and include the LOCATION and the VALID_FOR=(STANDBY_LOGFILES, STANDBY_ROLE) attributes to direct archiving of standby redo log files to a directory that is not the flash recovery area. For example,

    LOG_ARCHIVE_DEST_3=
       'LOCATION=/arch2/chicago/
       VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
       DB_UNIQUE_NAME=chicago'
    

14.7 Database Guard Stays On In the New Primary Database If a Prepared Switchover Operation Detects a Network Outage

If a logical standby database performs a switchover to the primary database role running in maximum protection mode, and the LGWR process encounters an error with a destination, LGWR will re-evaluate all destinations to ensure at least one of them is working properly.

If the LGWR process does not find a destination to which it can successfully write the standby redo log file, and if it does not contain any missing log files, or gaps, then the primary instance shuts down. Although the switchover completes successfully, the database guard remains enabled to prevent data divergence.

The solution, which appears in the alert log for the database instance, is to:

  • Start the database in a lower protection mode, such as maximum performance.

  • Manually disable the database guard by issuing the ALTER DATABASE GUARD NONE command.

14.8 Defining StandbyArchiveLocation in Oracle Data Guard Broker When Using a Flash Recovery Area with a Logical Standby Database

If you have enabled a flash recovery area on a logical standby database, you must explicitly set the StandbyArchiveLocation property in Oracle Data Guard Broker to point to the location where you want to store redo log files received from the primary database. You must set the StandbyArchiveLocation property even if you have already set the STANDBY_ARCHIVE_DEST initialization parameter for the logical standby database. If you do not define the StandbyArchiveLocation property, the broker uses a default value of dgsby_db_unique_name, and the redo log files received from the primary database are stored in the $ORACLE_HOME/dbs directory.

For example, after adding a logical standby database to the broker configuration, as described in Section 6.2 "Scenario 1: Creating a Configuration" of Oracle Data Guard Broker, define the following property for the logical standby database:

DGMGRL> EDIT DATABASE 'DR_Sales' SET PROPERTY
   'StandbyArchiveLocation'='/arch2/boston/'

You should also define this property for the primary database to prepare it for a future switchover.

14.9 Error ORA-16627 During Oracle Data Guard Broker Protection Mode Upgrade

If you encounter the error ORA-16627 while attempting to upgrade the protection mode of the broker configuration, ensure at least one standby database in the configuration has been set up in the following manner:

  • The standby database has standby redo logs.

  • The LogXptMode property for that standby database is set to SYNC.

In addition, if the protection mode is upgraded to MaxProtection, ensure that there are no gaps in the archived redo log files on the standby database. For more details, see Section 5.8 "Managing Archive Gaps" in Oracle Data Guard Concepts and Administration.

Once you verify that the above criteria have been met, try to upgrade the protection mode again. The following example shows the ORA-16627 error after issuing the DGMGRL command to upgrade to MaxProtection.

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MaxProtection ;
Error: ORA-16627: No standby databases support the overall protection mode.

14.10 Maximum Protection Switchover with a Bystander Logical Standby

When performing a zero data loss switchover in a maximum protection configuration that involves more than one logical standby database, keep the following items in mind:

  • Ensure that all transport settings (specified by the LOG_ARCHIVE_DEST_n parameters on the current primary database) for logical standby instances not participating in the switchover are set to ARCH, not LGWR, for the duration of the switchover (see the example following this list).

  • After the transport setting is modified, but before the switchover operation, perform an ALTER SYSTEM ARCHIVE LOG CURRENT to ensure that any active standby redo logfiles are archived.

  • Once the switchover is complete, the destination setting for the bystanders should be changed back to the LGWR transport mode.
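
For example, the destination for a bystander logical standby database might be changed for the duration of the switchover as follows (a sketch only; the service and database names are hypothetical):

LOG_ARCHIVE_DEST_4=
   'SERVICE=denver ARCH
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=denver'

After the switchover completes, change ARCH back to LGWR for that destination.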

15 Oracle Data Mining

Please note the following item when working with Oracle Data Mining:

15.1 Change to Demo Instructions

The demo program needs the following correction on line 186 of the texfe.sql file, located at:

  • $ORACLE_HOME/dm/demo/sample/plsql/texfe.sql for Solaris

  • %ORACLE_HOME%\dm\demo\sample\plsql\texfe.sql for Windows

Step 8 in this demo program demonstrates how to cast the text features extracted into a DM_Nested type, so that they can be supplied to DBMS_DATA_MINING. In this step, the demo shows these being cast as DM_Nested_Categoricals, but the features and their values must be cast to DM_Nested_Numericals instead.

16 Oracle HTML DB

Please note the following items when working with Oracle HTML DB.

16.1 Invalid Database Objects

Immediately after installing Oracle HTML DB, there may be several database objects that have a status of INVALID. These objects compile successfully upon first use of the Oracle HTML DB development environment.

16.2 Database Access Descriptor Configuration

The Database Access Descriptor (DAD) for Oracle HTML DB is defined in the file $ORACLE_HOME/Apache/modplsql/conf/marvel.conf. $ORACLE_HOME refers to the Oracle home where the Oracle HTTP Server is installed. Modifications to the Oracle HTML DB modplsql configuration need to be made directly to this file.

16.3 Database Access Descriptor Character Set Compatibility

Oracle HTML DB operates in the character set used for the corresponding Database Access Descriptor (DAD). When the character set used in the DAD is UTF8, the HTML pages rendered from Oracle HTML DB are encoded in UTF8. This encoding also applies to comma-separated values (CSV) data you can export from SQL queries and Oracle HTML DB reports. If the DAD character set is not compatible with the one supported in the localized version of Microsoft Excel, Excel cannot properly read the CSV data. To remedy this problem, change the DAD character set to the one supported by Excel. You should also enable the DAD RAW transfer mode in Oracle Application Server 10g by specifying the PlsqlNlsLanguage and PlsqlTransferMode properties of the corresponding DAD. For more information on configuring a DAD, see the Oracle HTTP Server Administrator's Guide.
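
As a sketch only (the location path and parameter values below are illustrative), the relevant entries in the Oracle HTML DB DAD within marvel.conf might look like this:

   <Location /pls/htmldb>
       ...
       PlsqlNlsLanguage      JAPANESE_JAPAN.JA16SJIS
       PlsqlTransferMode     RAW
       ...
   </Location>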

16.4 Adobe SVG Plugin Support

The Adobe SVG plugin can properly handle data encoded in UTF-8, UTF-16, ISO-8859-1, and US-ASCII. Encoding of an SVG chart is determined by the DAD database character set. If the DAD character set is not UTF8, AL32UTF8, AL16UTF16, WE8ISO8859P1, or US7ASCII, SVG charts may not render properly in the Adobe SVG plug-in.

16.5 SVG Charts Containing Multibyte Characters

When creating SVG charts that may contain Japanese, Chinese, Korean, or other multibyte characters, you must perform an extra step to ensure that the Adobe SVG Viewer can properly display the multibyte characters. In the Application Builder, navigate to the Page Definition containing the SVG Chart. Click the Chart link to edit the SVG Chart attributes and scroll down to the Font Settings section. In the Font Color field, enter the appropriate string; note the double semicolons (;;) after the first hexadecimal number and the lack of a semicolon (;) at the end of the string:

  • For Japanese: #000000;;font-family:MS Gothic,MS Mincho;fill:#000000

  • For Chinese: #000000;;font-family:SimHei,MS-Hei;fill:#000000

  • For Korean: #000000;;font-family:Gulim,Batang;fill:#000000

To specify a particular font color, replace fill:#000000 at the end of the string with the new color's hexadecimal number, such as fill:#336699.

16.6 Language Support

Oracle HTML DB only supports languages that can be encoded in the DAD character set. If the DAD character set is changed to one that is incompatible with the language you want to support, Oracle HTML DB may not properly render language-specific data. For example, if a DAD character set is set to JA16SJIS, Chinese is not supported in that instance of Oracle HTML DB because Chinese cannot be encoded in JA16SJIS.

16.7 Support for Other Database Character Sets

When using Oracle HTML DB with a database character set other than UTF8 or AL32UTF8, but with an application that uses multibyte characters, some operations may fail in Microsoft Internet Explorer. This happens when multibyte data is passed directly in the URL and the client operating system character set differs from the DAD character set.

The item type Popup LOV, which fetches the first row set and filters, may pass multibyte data directly in the URL. For example, assume the default encoding in Windows is Shift JIS. An Oracle HTML DB application that passes multibyte data in the URL (from a Popup LOV with a filter or from a report link) may fail if the DAD character set is JA16EUC. It does not fail if the DAD character set is JA16SJIS.

16.8 Translated Applications in Oracle HTML DB

When translated applications are used in Oracle HTML DB, the following rules determine which translated version is used:

  • Look for an exact match between the user language preference and the language code of the translated application.

  • Look for a truncated match. That is, see if a translation exists for the language alone, without the locale. For example, if the user language preference is en-us and a translated version for en-us does not exist, look for a translated application that has the language code en.

  • Use the primary application.

For example, suppose you create an application with the primary language of German, de, and you create a translated version of the application with a language code of en-us. Users accessing this application with a browser language of en-us execute the English en-us version of the application. Users accessing the application with a browser language of en-gb view the application in its primary language, German. For this reason, you should create the translated English version using the language code en to encompass all variations of en.

16.9 Modifying the Select List of a Tabular Form

The select list of a SQL statement of a tabular form should not be modified after it has been generated; doing so can result in a checksum error when altering data on the form and applying updates. For example, the generated SQL query select ename from emp should not be altered to select lower(ename) from emp.

16.10 Monthly Calendar Wizard

When using the Monthly Calendar wizard, the on-screen help for the Day Link on the Identify Query Values step is not complete. Use the Day Link to create a link on the day of the month. The link only supports substitutions for the date that was clicked. The following are the supported substitutions:

  • _calendar_date is the day in the format MM/DD/YYYY

  • _calendar_dateYYYY is the day in the format DD-MON-YYYY

  • _calendar_dateYYYYMMDD is the day in the format YYYYMMDD

As an example, assume you want to link the day in your calendar to a report on the EMP table in page 2 of your application. The report is optionally constrained by the HIREDATE column with the value of the hidden item called P2_HIREDATE. The SQL query in the report region expects the date contained in P2_HIREDATE to be in the format MM/DD/YYYY. The link is expressed in f?p syntax as follows:

f?p=&APP_ID.:2:&SESSION.::::P2_HIREDATE:_calendar_date_

16.11 Installing Oracle HTML DB in Other Languages

The Oracle HTML DB interface is translated into German, Spanish, French, Italian, Japanese, Korean, Brazilian Portuguese, Simplified Chinese, and Traditional Chinese. A single instance of Oracle HTML DB can be installed with one or more of these translated versions. At runtime, the user's Web browser language settings determine the specific language version.

The translated version of Oracle HTML DB should be loaded into a database that has a character set that can support the specific language. Attempts to install a translated version of Oracle HTML DB into a database that cannot support the character encoding of the language may fail, or the translated version of Oracle HTML DB may appear corrupt when run. Database character set AL32UTF8 can support all the translated versions of Oracle HTML DB.

The installation files for the translated versions of Oracle HTML DB are located in appropriate subdirectories in $ORACLE_HOME/marvel/builder, and are identified by a language code. For example, the German version is located in $ORACLE_HOME/marvel/builder/de, while the Japanese version is located in $ORACLE_HOME/marvel/builder/ja. Within each of these directories, there is a language loading script identified by the language code, such as load_de.sql or load_ja.sql.

You can manually install translated versions of Oracle HTML DB using SQL*Plus. The installation files are encoded in UTF8. To install a translated version of the Oracle HTML DB interface, the character set value of the NLS_LANG environment variable must be set to AL32UTF8 prior to invoking SQL*Plus, regardless of the target database character set. The following examples illustrate valid NLS_LANG settings for loading Oracle HTML DB translations:

  • American_America.AL32UTF8

  • Japanese_Japan.AL32UTF8

To install a translated version of Oracle HTML DB, follow these steps:

  1. Set the NLS_LANG environment variable; ensure that the character set is AL32UTF8.

  2. Connect as SYS to the target database.

  3. Execute the following statement:

    ALTER SESSION SET CURRENT_SCHEMA = FLOWS_010500;
    
    
  4. Execute the language-specific loading SQL script. For example:

    @load_de.sql
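
Putting these steps together on UNIX for the German translation (the SYS password shown is a placeholder), a session might look like this:

    $ export NLS_LANG=American_America.AL32UTF8
    $ cd $ORACLE_HOME/marvel/builder/de
    $ sqlplus "sys/password as sysdba"
    SQL> ALTER SESSION SET CURRENT_SCHEMA = FLOWS_010500;
    SQL> @load_de.sql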
    

16.12 Generate DDL Feature of SQL Workshop

The Generate DDL feature of SQL Workshop requires that Oracle HTML DB be installed with Oracle Database 10g Release 1.

16.13 Securing the WWV_FLOW_FILE_OBJECTS Table

The WWV_FLOW_FILE_OBJECTS table stores all uploaded files across all workspaces of an Oracle HTML DB instance. All privileges on the WWV_FLOW_FILE_OBJECTS table have been granted to PUBLIC. As a result, an Oracle HTML DB user could use the SQL Command Processor to delete files from this table.

To secure the WWV_FLOW_FILE_OBJECTS table, follow these steps:

  1. Connect to the database where Oracle HTML DB is installed as SYS or SYSTEM.

  2. Execute these commands in the following order:

    ALTER SESSION SET CURRENT_SCHEMA = FLOWS_FILES;
    REVOKE ALL ON WWV_FLOW_FILE_OBJECTS$ FROM PUBLIC;
    GRANT ALL ON WWV_FLOW_FILE_OBJECTS$ TO HTMLDB_PUBLIC_USER;
    

16.14 Item Naming Conventions

Note these rules that apply to names of Oracle HTML DB items:

  • Item names must not use quotes.

  • Item names must begin with a letter or a numeral, and subsequent characters can be letters, numerals, or underscore characters.

  • Item names are case insensitive.

  • Item names should not exceed 30 characters.

  • Item names cannot contain letters outside the base ASCII character set.

17 Oracle interMedia

Performance-related components of Oracle interMedia are now packaged on the Companion CD. Although interMedia functions properly without the Companion CD, the components on the Companion CD must be installed to achieve acceptable image-processing performance.

The interMedia readme file is located at:

$ORACLE_HOME/ord/im/admin/README.txt

18 Oracle Internet Directory

The Oracle Internet Directory product ships only with Oracle Application Server, not with the Oracle Database 10g product set. The following information is included because Oracle networking functionality may use Oracle Internet Directory.

Many of the administrative activities for Oracle Internet Directory have been consolidated into a single tool, the Oracle Internet Directory Configuration Assistant (OIDCA). OIDCA should be used with the Enterprise User Security and Network Names features under these conditions:

  1. Enterprise User Security

    • Enterprise User Security only works with Identity Management Realms in this release. You must convert Oracle Contexts used in prior releases to Identity Management Realms using the OIDCA tool.

    • Use OIDCA when creating or updating the ldap.ora configuration file for discovering the Oracle Internet Directory server in the environment.

  2. Network Names

    • Use OIDCA when creating, upgrading and deleting Oracle Contexts.

    • Use OIDCA when converting an Oracle Context from an earlier release to an Identity Management Realm.

    • Use OIDCA when setting up the ldap.ora configuration file for discovering the Oracle Internet Directory server in the environment.

Please note the following items when working with Oracle Internet Directory.

18.1 Using the Oracle Internet Directory Configuration Assistant

The Oracle Internet Directory Configuration Assistant (OIDCA) enables you to create, upgrade, and delete an Oracle Context, configure the file ldap.ora, and convert an Oracle Context to an Identity Management Realm.

The OIDCA syntax is:

oidca oidhost=host 
      nonsslport=port |
      sslport=SSL Port
      dn=binddn 
      pwd=bindpwd 
      propfile=properties file

The following table lists parameters of the OIDCA. To see the usage of OIDCA, enter oidca -help at the command prompt.

Parameter     Description
oidhost       OID server host; default is localhost
nonsslport    OID server port; default is 389
sslport       OID SSL port; default is 636
dn            OID user, such as cn=orcladmin
pwd           OID user password
propfile      File containing a list of properties to determine the mode of
              operation and the required operation-specific parameters

18.2 Creating an Oracle Context

The following syntax is used to create an Oracle Context in OIDCA; the parameters are described in the subsequent table.

oidca oidhost=host
      nonsslport=port
      sslport=SSL Port
      dn=binddn
      pwd=bindpwd
      mode=CREATECTX 
      contextdn=OracleContext DN
Parameter     Description
oidhost       OID server host; if not specified, default is localhost
nonsslport    OID server port; if not specified, default is 389
sslport       OID SSL port; if not specified, default is 636
dn            OID user, such as cn=orcladmin
pwd           OID user password
mode          Mode of the OIDCA; always set to CREATECTX
contextdn     DN under which the OracleContext must be created, such as
              dc=acme,dc=com

Note the following points:

  • The contextdn must exist for this operation to be successful.

  • For example, if contextdn is dc=acme,dc=com, then the DN "dc=acme,dc=com" must already exist in OID, but the DN "cn=oraclecontext,dc=acme,dc=com" should not yet exist; it is created by this operation.

  • The parameters mode and contextdn can also be passed as a properties file.

  • Specify the parameter nonsslport=port if you want to perform the operation using non-SSL mode.

  • Specify the parameter sslport=sslport if you want to perform the operation using SSL mode.

  • Either the nonsslport or the sslport parameter must be specified, but not both.
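
For example, assuming an OID server on host oid.acme.com and an administrator password of welcome1 (both placeholders), an Oracle Context could be created under dc=acme,dc=com as follows:

   oidca oidhost=oid.acme.com nonsslport=389 dn=cn=orcladmin \
         pwd=welcome1 mode=CREATECTX contextdn="dc=acme,dc=com"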


Functionality
  1. The OIDCA verifies that contextdn has a valid DN syntax and that the entry exists in Oracle Internet Directory. Note that the OIDCA cannot create a root OracleContext explicitly. If there is no root Oracle Context, then OIDCA exits with an error.

  2. If the DN exists, then OIDCA checks whether an Oracle Context already exists under it.

    • If the Oracle Context already exists and is up-to-date, then OIDCA exits with the message Oracle Context already exists and is up to date.

    • If the Oracle Context already exists, but it is an older version, then OIDCA exits with the message Oracle Context already exists and is of an older version.

    • If the Oracle Context does not exist, then OIDCA creates the Oracle Context under this DN.

18.3 Upgrading an Oracle Context

To upgrade an OracleContext instance, use the following syntax; the parameters are listed in the subsequent table.

oidca oidhost=host
      nonsslport=port 
      sslport=SSL Port 
      dn=binddn
      pwd=bindpwd
      mode=UPGRADECTX
      contextdn=OracleContext DN
Parameter     Description
oidhost       OID server host; if not specified, default is localhost
nonsslport    OID server port; if not specified, default is 389
sslport       OID SSL port; if not specified, default is 636
dn            OID user, such as cn=orcladmin
pwd           OID user password
mode          Mode of the OIDCA; always set to UPGRADECTX
contextdn     DN that contains the OracleContext to be upgraded, such as
              dc=acme,dc=com

Note the following points:

  • The contextdn must contain an OracleContext for this operation to be successful.

  • The DNs "cn=oraclecontext,dc=acme,dc=com" and "dc=acme,dc=com" are both valid.

  • The parameters mode and contextdn can also be passed as a properties file.

  • Specify the parameter nonsslport=port if you want to perform the operation using a non-SSL mode.

  • Specify the parameter sslport=sslport if you want to perform the operation using SSL mode.

  • Either the nonsslport or the sslport parameter must be specified, but not both.


Functionality
  1. OIDCA verifies that the contextdn has valid DN syntax and that OracleContext exists in Oracle Internet Directory. OIDCA cannot upgrade a root OracleContext explicitly. If there is no root OracleContext, then OIDCA sends an error message.

  2. If OracleContext exists under contextdn,

    • The OIDCA checks if the OracleContext belongs to a realm, in which case it exits with the appropriate message. Note that OracleContext instances that belong to a realm cannot be upgraded.

    • If the OracleContext is already up-to-date, then the OIDCA exits with the message Oracle Context already exists and is up to date.

    • If the OracleContext is not up-to-date, then the OIDCA upgrades the OracleContext under this DN.

18.4 Deleting an Oracle Context

To delete an OracleContext, use the following syntax; the parameters are listed in the subsequent table.

oidca oidhost=host
      nonsslport=port
      sslport=SSL Port
      dn=binddn
      pwd=bindpwd
      mode=DELETECTX
      contextdn=OracleContext DN
Parameter     Description
oidhost       OID server host; if not specified, default is localhost
nonsslport    OID server port; if not specified, default is 389
sslport       OID SSL port; if not specified, default is 636
dn            OID user, such as cn=orcladmin
pwd           OID user password
mode          Mode of the OIDCA; always set to DELETECTX
contextdn     DN that contains the OracleContext to be deleted, such as
              dc=acme,dc=com

Note the following points:

  • The contextdn must contain an OracleContext for this operation to be successful.

  • The DNs "cn=oraclecontext, dc=acme,dc=com" and "dc=acme,dc=com" are both valid.

  • The parameters mode and contextdn can also be passed as a properties file.

  • Specify the parameter nonsslport=port if you want to perform the operation using a non-SSL mode.

  • Specify the parameter sslport=sslport if you want to perform the operation using SSL mode.

  • Either the nonsslport or the sslport parameter must be specified, but not both.


Functionality
  1. OIDCA verifies that the contextdn has valid DN syntax and that OracleContext exists in Oracle Internet Directory.

  2. If OracleContext exists under contextdn,

    • The OIDCA checks if the OracleContext belongs to a realm, in which case it exits with the appropriate message. Note that OracleContext instances that belong to a realm cannot be deleted.

    • If OracleContext does not belong to a realm, OIDCA deletes it.

18.5 Configuring the file ldap.ora

To configure the file ldap.ora, use the following syntax; the parameters are listed in the subsequent table.

oidca oidhost=host
      nonsslport=port
      sslport=SSL Port
      adminctx=Administrative context
      mode=LDAPORA  
      dirtype=OID or AD
      -update
Parameter     Description
oidhost       OID server host; if not specified, default is localhost.
nonsslport    OID server port; determined using discovery APIs.
sslport       OID SSL port; determined using discovery APIs.
mode          Mode of the OIDCA; always set to LDAPORA.
dirtype       Directory type; possible values are OID and AD; mandatory attribute.
adminctx      Default administrative context, such as dc=acme,dc=com. If not
              specified, determined using discovery.
-update       If this flag is specified, then overwrite the existing ldap.ora;
              if not, then create ldap.ora only if it does not already exist.

Note the following points:

  • Either the non-SSL port or the SSL port must be specified; the other port is discovered.

  • The parameters mode, dirtype, and adminctx can also be passed in within a properties file.


Functionality
  1. Using the Discovery API, the OIDCA determines all the parameters not specified on the command line.

  2. The OIDCA checks for the ldap.ora location using Discovery APIs.

    • If ldap.ora exists and the -update parameter is not specified, then exit with message ldap.ora exists.

    • If ldap.ora exists and the -update parameter is specified, then update the existing ldap.ora using the Discovery API.

    • If ldap.ora does not exist, create a new ldap.ora file in the first of the following locations, searched in order:

      LDAP_ADMIN
      $ORACLE_HOME/ldap/admin
      

18.6 Converting an Oracle Context to an Identity Management Realm

Oracle Database 10g entries must be stored in an Oracle Internet Directory Release 9.0.4 server. An Identity Management Realm Release 9.0.4 is also required for Enterprise User Security, a feature of Oracle Database 10g.

To convert an existing OracleContext to an Identity Management Realm, use the following syntax. The parameters are listed in the subsequent table. Note that the root of the OracleContext object is not converted.

oidca oidhost=host
      nonsslport=port
      sslport=SSL Port
      dn=binddn
      pwd=bindpwd
      mode=CTXTOIMR
      contextdn=OracleContext DN
Parameter     Description
oidhost       OID server host; default is localhost
nonsslport    OID server port; default is 389
sslport       OID SSL port; default is 636
dn            OID user, such as cn=orcladmin
pwd           OID user password
mode          Mode of the OIDCA; always set to CTXTOIMR
contextdn     DN that contains the OracleContext to be converted, such as
              dc=acme,dc=com

Note the following points:

  • The OracleContext must exist under the specified contextdn.

  • The DNs "cn=oraclecontext, dc=acme,dc=com" and "dc=acme, dc=com" are both valid.

  • The parameters mode and contextdn can also be passed in a properties file.

  • Specify the parameter nonsslport=port if you want to perform the operation using a non-SSL mode.

  • Specify the parameter sslport=sslport if you want to perform the operation using SSL mode.

  • Either the nonsslport or the sslport parameter must be specified, but not both.


Functionality
  1. The OIDCA checks if contextdn has valid DN syntax, and if it contains a valid OracleContext.

  2. If OracleContext exists under contextdn,

    • The OIDCA checks if the OracleContext belongs to a realm. If it does, then it exits with an appropriate error message.

    • If OracleContext does not belong to a realm, OIDCA upgrades it to the latest version, and converts it to a realm.

Note also:

  • If the nickname attribute is not cn, configure it as a user configuration attribute using the Oracle Internet Directory Self-Service Console.

  • If you want to use the Oracle Internet Directory Self-Service Console to manage the users and groups in the converted realm, you must set up the administrative privileges appropriately. For details, see the chapter on "Delegation of Privileges for an Oracle Technology Deployment" in Oracle Internet Directory Administrator's Guide, 10g Application Server (9.0.4).

19 Oracle Label Security

Please note the following items when working with Oracle Label Security (OLS).

19.1 Granting Permissions for OID Enabled OLS Configuration

Users who configure OID-enabled OLS using the Database Configuration Assistant (DBCA) need additional privileges. The following steps describe which permissions are needed and how to grant them.

  • Use Enterprise Security Manager (ESM) console to add the user to the OracleDBCreators group.

  • Add the user to the Provisioning Admins group. This is necessary because DBCA creates a DIP provisioning profile for OLS. Use the ldapmodify command with the following .ldif file to add a user to the Provisioning Admins group (a sample ldapmodify invocation appears after this list):

    dn: cn=Provisioning Admins,cn=changelog subscriber, cn=oracle internet directory
    changetype: modify
    add: uniquemember
    uniquemember: DN of the user who needs to be added
    
    
  • Add the user to the policyCreators group using the command line tool olsadmintool. DBCA bootstraps the database with the OLS policy information from OID, and only policyCreators can perform this bootstrap.

  • If the database is already registered with the OID using DBCA, use the ESM tool to add the user to the OracleDBAdmins group of that database.

Note that the above permissions are also needed by the administrator who unregisters the database that has OID enabled OLS configuration.
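
For the Provisioning Admins step above, the .ldif file can be applied with a command along the following lines (the host, port, password, and file name are placeholders):

    ldapmodify -h oidhost -p 389 -D cn=orcladmin -w welcome1 -f add_prov_admin.ldif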

19.2 Restriction on Policy Creators with Directory-Enabled Oracle Label Security

A user who belongs to the Policy Creators group can only create, browse and delete Oracle Label Security policies. This user cannot perform policy administrative tasks, such as creating label components and adding users, even if the user is explicitly added to the Policy Admins group of that policy. In short, a policy creator cannot be the administrator of any policy.

19.3 Removing Oracle Label Security from Database

To remove OID enabled OLS from a database, first unregister the database using DBCA, and then run the following script:

$ORACLE_HOME/rdbms/admin/catnools.sql

19.4 Policy Propagation Using Oracle Real Application Clusters

If a user creates or modifies a policy, that policy or change is not automatically propagated to other instances until they are restarted. To ensure the application of policies and packages SA_USER_ADMIN and SA_SYSDBA, restart all other instances as soon as any policy is created or modified in an Oracle Real Application Clusters environment.

19.5 Session Information Propagation Using Oracle Real Application Clusters

Failover uses the default session settings for each user, even if some users have changed settings during their current session. Changes to session settings do not survive a failover event unless they have been saved as default settings. To ensure uniform application of session settings and the SA_SESSION package, save changed session settings as default settings whenever changes are made in an Oracle Real Application Clusters environment. Otherwise, a failover event reverts the environment to the default session settings and may produce results that are inconsistent with the settings the user believes are in effect.

20 Oracle-Managed Files

When you rename an Oracle-managed file using the ALTER DATABASE RENAME FILE command, the file with the original filename is deleted. On UNIX platforms, if you rename the Oracle-managed file to a symbolic (soft) link to the original file, then no copy of the original file is left on disk.

To avoid this situation, create a hard link to the Oracle-managed file, and then create a soft link to the hard link. Then, rename the Oracle-managed file to the symbolic (soft) link.
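
A minimal sketch of this sequence on UNIX (the file names are hypothetical) follows. The hard link preserves the data when the original file name is deleted, and the soft link becomes the new file name known to the database:

   $ ln    /u01/oradata/db/o1_mf_users_xyz_.dbf  /u01/oradata/db/users01_hard.dbf
   $ ln -s /u01/oradata/db/users01_hard.dbf      /u01/oradata/db/users01.dbf

   SQL> ALTER DATABASE RENAME FILE '/u01/oradata/db/o1_mf_users_xyz_.dbf'
        TO '/u01/oradata/db/users01.dbf';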

21 Oracle Net Services

The Oracle Net readme file is located at:

$ORACLE_HOME/network/doc/README_OracleNet.htm

22 Oracle OLAP

Please note the following items when working with Oracle OLAP.

22.1 Javadoc

Updated versions of the Java API reference documentation for the Oracle OLAP API and the Oracle OLAP Analytic Workspace API will be made available on the Oracle Technology Network Web site at http://www.otn.oracle.com/products/bi/olap/olap.html. The names of the Java API reference documents are Oracle OLAP Analytic Workspace Java API Reference and Oracle OLAP Java API Reference.

22.2 OLAP Catalog

There are two new procedure sets for the OLAP Catalog: Export and Delete.


Export Description

The Export procedures generate the CWM1 or CWM2 commands (RDBMS command and CWM APIs) and the RDBMS Export Utility Parameter File needed to export one or more dimensions, cubes, or measure catalogs. The input parameters accept a standard SQL wildcard, such as % or _, in the owner and name parameters. The output can be sent to the screen or written to a file.


Export Procedure Language Specification
create or replace package cwm2_olap_export as
procedure Export_Dimension(
   p_Dimension_Owner   varchar2,
   p_Dimension_Name    varchar2,
   p_Directory_Name    varchar2 default null,
   p_Command_File_Name varchar2 default null,
   p_Table_File_Name   varchar2 default null
   );
procedure Export_Cube(
   p_Cube_Owner        varchar2,
   p_Cube_Name         varchar2,
   p_Directory_Name    varchar2 default null,
   p_Command_File_Name varchar2 default null,
   p_Table_File_Name   varchar2 default null
   );
procedure Export_OLAP_Catalog(
   p_Directory_Name    varchar2 default null,
   p_Command_File_Name varchar2 default null,
   p_Table_File_Name   varchar2 default null
   );
end cwm2_olap_export;

p_Directory_Name is an optional parameter that names the directory where the command file and table file are to be written.

p_Command_File_Name is an optional parameter that names the output APIs command file.

p_Table_File_Name is an optional parameter that names the output RDBMS Export Utility Parameter File.


Export Restrictions

Export procedures export only base tables in the initial release. They will not export views or tables used by views.


Delete Description

Delete procedures generate the CWM1 or CWM2 commands (RDBMS command and CWM APIs) needed to delete one or more dimensions, cubes, or measure catalogs. The input parameters accept a standard SQL wildcard, such as % and _, in the owner and name parameter. Required parameters instruct the utility to delete the RDBMS part of CWM1 dimensions, to generate a report, and to perform the actual delete.


Delete Procedure Language Specification
create or replace package cwm2_olap_delete as
procedure Delete_Dimension(
   p_Dimension_Owner       varchar2,
   p_Dimension_Name        varchar2,
   p_Delete_CWM1_Dimension varchar2,
   p_Report                varchar2,
   p_Delete                varchar2
   );
procedure Delete_Cube(
   p_Cube_Owner varchar2,
   p_Cube_Name  varchar2,
   p_Report     varchar2,
   p_Delete     varchar2
   );
procedure Delete_Measure_Catalog(
   p_Measure_Catalog_Name varchar2,
   p_Report               varchar2,
   p_Delete               varchar2
   );
procedure Delete_OLAP_Catalog(
   p_Delete_CWM1_Dimension varchar2,
   p_Report                varchar2,
   p_Delete                varchar2
   );

end cwm2_olap_delete;

p_Delete_CWM1_Dimension is a required YES or NO parameter that instructs the utility to delete the RDBMS portion of a CWM1 dimension. CWM1 dimensions are composed of two parts, the RDBMS part and the CWM part. The RDBMS portion may be used by other RDBMS functions. The DBA must decide if it is correct to delete the RDBMS portion.

p_Report is a required YES/NO parameter that instructs the utility to generate a report. This allows the DBA to review previous and subsequent deletions.

p_Delete is a required YES/NO parameter that instructs the utility to do the actual delete. This parameter is used in conjunction with the p_Report parameter to review what will be deleted without actually deleting.


Validate

The Validate_Dimension and Validate_Cube procedures have been modified to accept a standard SQL wildcard, such as % or _, in the owner and name parameter.


Validate Report

The validate procedures may occasionally generate the same validate report twice. Both reports are correct.

22.3 Analytic Workspaces


Upgrading from Oracle9i Database Release 2 to Oracle Database 10g

If you are upgrading from Oracle9i Database Release 2 to Oracle Database 10g, the database will be in Oracle9i Database Release 2 compatibility mode and the analytic workspaces will work as they did in that release. The workspaces continue to use Oracle9i Database Release 2 storage format. If you want to use new Oracle Database 10g OLAP features, such as multi-writer, you must convert these workspaces to the new storage format.

To convert Oracle9i Database Release 2 workspaces to Oracle Database 10g storage format, follow these steps:

  1. Change the compatibility mode of the database to 10.0.0 or higher. For more information on compatibility mode, see Chapter 5, "Compatibility and Interoperability" in the Oracle Database Upgrade Guide.

  2. Log in to the database as the owner of the analytic workspace.

  3. Use the conversion utility in DBMS_AW to convert the workspace to the new storage format in Oracle Database 10g SQL*Plus:

    • Rename the analytic workspace to a name like awname_backup.

      execute dbms_aw.rename (awname, awname_backup);
      
      
    • Create a version of the workspace that has the original name but uses the new Oracle Database 10g storage format.

      execute dbms_aw.convert (awname_backup, awname, tablespace_name);
      
      

      Note that many standard form analytic workspaces include the workspace name in fully-qualified logical object names. For this reason, the upgraded workspace must have the same name as the original Oracle9i Database Release 2 workspace.

  4. Because you changed the database compatibility mode to Oracle Database 10g, any new workspaces that you create are in the new storage format.


Importing Oracle9i Database Release 2 Analytic Workspaces into a New Oracle Database 10g Installation

If you install Oracle Database 10g separately from your old Oracle9i Database Release 2 installation, you must export the Oracle9i Database Release 2 workspaces and import them into Oracle Database 10g.

Use the following procedure in SQL*Plus in Oracle9i Database Release 2:

   execute dbms_aw.execute('aw attach awname');
   execute dbms_aw.execute('allstat');
   execute dbms_aw.execute('export all to eif file ''filename''');

In SQL*Plus in Oracle Database 10g, create a new workspace with the same name and schema, and import the EIF file:

   execute dbms_aw.execute('aw create awname');
   execute dbms_aw.execute('import all from eif file ''filename''');
   execute dbms_aw.execute('update');

Use the EIF file import/export utility instead of the import/export in Analytic Workspace Manager. Refer to Bug 3313073.

For more information about upgrading your Oracle9i Database Release 2 analytic workspaces to Oracle Database 10g storage format, see OracleMetaLink at: http://metalink.oracle.com.


AGGREGATE Command
  • WPREAGG is now the default for all weighted operators. WAGG and WNOAGG have been deprecated.

  • Arbitrary LIMIT syntax in PRECOMPUTE statements has been deprecated. The following are the supported formats:

    • PRECOMPUTE(ALL)

    • PRECOMPUTE(NA)

    • PRECOMPUTE(valueset-name)

    • PRECOMPUTE(level-relation-name 'Level1' 'Level2')

    • PRECOMPUTE('dimvalue1' 'dimvalue2')

    For example, the following syntax is not supported:

    PRECOMPUTE(limit(b to first 5))
    
    

    Instead, define a valueset and set its limit manually before running AGGREGATE:

      DEFINE time.precomp VALUESET time
      DEFINE myaggmap AGGMAP
      AGGMAP
      RELATION time.parentrel PRECOMPUTE(time.precomp)
      END
      LIMIT time.precomp TO FIRST 5
      AGGREGATE myvar USING myaggmap
    

22.4 OLAP Java API

The Oracle OLAP Java API includes the following two new classes and one new interface.


Class oracle.olapi.metadata.DuplicateMetadataIDException

Class oracle.olapi.metadata.DuplicateMetadataIDException indicates that a BaseMetadataObject with the specified identification already exists. This exception usually occurs when you try to create a custom MdmObject or custom MtmObject with the name of an object that already exists.


Class oracle.olapi.metadata.mdm.Mdm9iNamingConvention

Class oracle.olapi.metadata.mdm.Mdm9iNamingConvention implements the MdmNamingConvention interface and uses the same naming convention for transient custom MdmObject and MtmObject metadata objects that Oracle OLAP uses for the persistent metadata objects it generates from the OLAP Catalog entities. For example, if the owner of an OLAP Catalog subschema is named GLOBAL, and a dimension in that schema is named PRODUCTS, then the identification String for the MdmPrimaryDimension object for that dimension is D_GLOBAL_PRODUCTS. If you create a custom dimension with the name MYPRODUCTS, then the identification of that dimension is D_TRANSIENT_MYPRODUCTS.

If you want custom metadata objects to have a different owner, you can create an instance of Mdm9iNamingConvention and specify an owner name. You can then pass your Mdm9iNamingConvention to the setMdmNamingConvention method of your MdmMetadataProvider.

The methods of an Mdm9iNamingConvention are not used by an application; they are called internally by Oracle OLAP.


Interface oracle.olapi.metadata.mdm.MdmNamingConvention

Interface oracle.olapi.metadata.mdm.MdmNamingConvention is an interface for an object that provides unique identifications for custom MdmObject and custom MtmObject objects. If you want a metadata naming convention that is different from the default, you can implement this interface and then pass an instance of your MdmNamingConvention to the setMdmNamingConvention method of your MdmMetadataProvider.

The methods of an MdmNamingConvention are not used by an application; they are called internally by Oracle OLAP.

23 Oracle Real Application Clusters

Please note the following items when working with Oracle Real Application Clusters (RAC). The readme files are located at:

$ORACLE_HOME/srvm/doc/README.doc
$ORACLE_HOME/relnotes/README_svrm.doc

23.1 Virtual IP Addresses

RAC now manages Virtual IP (VIP) addresses on the cluster nodes. Before installing RAC, acquire one unused IP address for each node. Enter these VIP addresses into the VIP Configuration Assistant, which is run from the RAC root.sh file. Do not enter the IP addresses of the public or private network interfaces into the VIP Configuration Assistant.

23.2 Public and Private Names in Cluster Ready Services

When entering public and private names into the Oracle Universal Installer Node Entry page during Cluster Ready Services (CRS) installation, include the DNS domain on the public and private node names.

23.3 Shared Recovery Area

When a database recovery area is configured in a RAC environment, the database recovery area must be in shared storage.

When DBCA configures automatic disk backup, it uses a database recovery area which must be shared. If the database files are stored on a cluster file system, the recovery area can also be shared through the cluster file system. If the database files are stored on an Automatic Storage Management (ASM) disk group, then the recovery area can also be shared through ASM. If the database files are stored on raw devices, a shared directory should be configured using NFS.

23.4 Minimum CRS Storage Requirements

Use the following minimum shared storage capacities for installing CRS on cluster file systems or shared raw storage:

  • For the Oracle Cluster Repository (OCR), use 100 MB files.

  • For Voting Disk, use 20 MB files.

23.5 Callout Ordering

RAC callouts are not executed with guaranteed ordering. They are done asynchronously, and are subject to scheduling variability.

23.6 Storage Recommendations

OCR and voting disk should be on redundant, reliable storage, such as RAID.

23.7 Threads as Separate Processes

On Linux, CRSD has many threads that show up as separate process IDs. This is normal.

23.8 Hostname

Do not change a hostname after CRS installation. This includes adding or deleting a domain qualification.

23.9 CRS Log File Size

The growth of CRS log files in the CRS home is not limited, and can fill the disk where the CRS home is located. The growth of these log files should be monitored and the log files truncated when necessary.

23.10 CLUSTER_DATABASE_INSTANCES Setting

The CLUSTER_DATABASE_INSTANCES parameter must be set to the same value on all instances. Normally, you should set this parameter to the number of instances in your RAC database. The instance startup fails if this parameter is set to a value different from the value set in other instances.
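
For example, with a server parameter file, the value (2 below is illustrative) can be set identically for all instances and takes effect at the next restart:

   ALTER SYSTEM SET CLUSTER_DATABASE_INSTANCES=2 SCOPE=SPFILE SID='*';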

23.11 32-bit and 64-bit Compatibility

On Sun Cluster, all RAC databases on the same cluster must either be all 64-bit, as in Oracle Database 10g and Oracle9i Database, or all 32-bit, as in Oracle9i Database and Oracle8i Database. A mix of 32-bit RAC databases and 64-bit RAC databases on the same cluster is not supported.

23.12 Distributed Transactions in Oracle Real Application Clusters

You can recover failed transactions from any instance of a RAC database. You can also heuristically commit in-doubt transactions from any instance. An XA recover call gives a list of all prepared transactions for all instances. The following steps must be performed on the same instance:

  1. xa_start

  2. SQL operations

  3. xa_end

  4. xa_prepare

  5. xa_commit or xa_rollback

Under normal circumstances, an xa_prepare, xa_rollback, or xa_commit operation performed on a branch must be performed on the same instance that created the branch. This restriction affects load balancing. If load balancing is enabled, then it is possible for sequences like the following to be performed on two different nodes. For this reason, load balancing must not be used for the XA connection.


Node 1
  1. xa_start

  2. SQL operations

  3. xa_end (SUSPEND)


Node 2
  1. xa_start (RESUME)

  2. SQL operations

  3. xa_end

If an error occurs, xa_recover must be performed before any other XA operation. You should open the XA connection with xa_open using the option RAC_FAILOVER=true.
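
Assuming the option is supplied in the xa_open information string in the same way as other Oracle_XA options (the user name, password, and session timeout below are placeholders), the open string might look like this sketch:

   Oracle_XA+Acc=P/scott/tiger+SesTm=60+RAC_FAILOVER=true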

Global uniqueness of transaction IDs (XIDs) is not guaranteed; the transaction monitor must maintain the global uniqueness of XIDs. According to the XA specification, the RM must accept XIDs from the transaction monitor. However, XA on RAC cannot determine if a given XID is unique throughout the cluster. For example, if there is a transaction Tx(1).Br(1) on Node 1, and another Tx(1).Br(1) on Node 2, both transactions can start and execute SQL, even though the XID is not unique.

Note that operation xa_recover cannot be used for switchover in a normal operation.

24 Oracle Sample Schemas

In previous releases of the Sample Schemas, the SH schema contained data for pharmaceutical and beauty care products. In this release, the SH schema contains consumer electronics and computer products. The Oracle Sample Schemas readme file is located at:

$ORACLE_HOME/rdbms/demo/schema/README.txt

The scripts for manually creating the Sample Schemas or any of their components are on the Companion CD.

The SH component of the Sample Schemas does not create the SALES and COSTS cubes. To create these cubes, run olp_v3.sql after installing the database. The script completes in approximately 30 seconds.

25 Oracle Spatial

The Oracle Spatial readme file supplements the information in the following manuals: Oracle Spatial User's Guide and Reference, Oracle Spatial Topology and Network Data Models, and Oracle Spatial GeoRaster. The Oracle Spatial readme file is located at:

$ORACLE_HOME/md/doc/README.txt

26 Oracle Streams

Please note the following items when working with Oracle Streams:

26.1 Oracle Streams Local Capture Processes and LOG_ARCHIVE_DEST_n

LogMiner supports the LOG_ARCHIVE_DEST_n initialization parameter, and Streams capture processes use LogMiner to capture changes from the redo log. If an archived log file is inaccessible from one destination, a local capture process can read it from another accessible destination.

On an Oracle Real Application Clusters database, this ability also can enable cross instance archival such that each instance archives its files to all other instances. This solution cannot detect or resolve gaps caused by missing archived log files. Hence, it can be used only to complement an existing solution to have the archived files shared between all instances.

26.2 Using Triggers with Queues

The use of triggers on queue Index-Organized Tables is not supported. The use of triggers on queue tables may significantly impact performance. Oracle discourages the use of triggers on queue tables.

26.3 Oracle Streams Advanced Queuing Object Type Support

Oracle Streams Advanced Queuing supports enqueue, dequeue, and propagation operations where the queue type is an abstract datatype (ADT). It also supports enqueue and dequeue operations if the types are inherited types of a base ADT. Propagation between two queues where the types are inherited from a base ADT is not supported.

26.4 Upgrading an Existing XML LCR Schema

This section only applies to the upgrade of existing Oracle9i Release 2 databases where Oracle XML DB is installed and the XML LCR schema has been loaded. To check if the database has the XML LCR schema loaded, run the following query:

SELECT count(*)
   FROM xdb.xdb$schema s
   WHERE s.xmldata.schema_url =
      'http://xmlns.oracle.com/streams/schemas/lcr/streamslcr.xsd';

If the above query returns 0, the XML LCR schema is not registered, and the information in this section does not apply.

As part of the upgrade process, the existing XML LCR schema is dropped by calling dbms_xmlschema.DeleteSchema, and the new version of the XML LCR schema is registered. The DeleteSchema call fails if the XML LCR schema has any dependent objects. These objects could be XMLType schema-based tables, columns, or other schemas referencing the XML LCR schema. If the DeleteSchema call succeeds and the new schema is registered, no further action is required to upgrade the XML LCR schema. Users can confirm that the new schema is successfully registered by running this query:

SELECT count(*)
   FROM xdb.xdb$element e, xdb.xdb$schema s
   WHERE s.xmldata.schema_url =
         'http://xmlns.oracle.com/streams/schemas/lcr/streamslcr.xsd' 
      AND
         ref(s) = e.xmldata.property.parent_schema 
      AND
         e.xmldata.property.name = 'extra_attribute_values' ;

If the query returns 1, the database has the most current version of the schema. No further action is necessary; users can skip the rest of this section.

If the query returns 0, the XML LCR schema upgrade failed because of objects that depend on the schema. Further action is needed to upgrade the XML LCR schema and dependent objects.

To upgrade the XML LCR schema and dependent objects, use Oracle XML DB for this release. It supports schema evolution through the CopyEvolve procedure in the DBMS_XMLSCHEMA package. Users can upgrade the XML LCR schema by calling the procedure CopyEvolve with event 22830 set to level 8 in the database session that calls this procedure. For details on completing this XML LCR schema upgrade, see the description of the CopyEvolve procedure in Oracle XML DB Developer's Guide chapter "XML Schema Evolution".

26.5 JMS Types and XMLTypes

JMS Types and XMLType access to the Streams queue table is not enabled by default to minimize the impact of Bug 2248652, which causes export of the Oracle Streams queue table to fail. Users can enable this access by calling the DBMS_AQADM.ENABLE_JMS_TYPES(queue_table) procedure where the VARCHAR2 parameter queue_table is the name of the queue table. This procedure should be invoked after the call to DBMS_STREAMS_ADM.SET_UP_QUEUE. Sites dependent on the Export utility as their backup strategy should avoid enabling this access.
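
For example, if the Streams queue table is (hypothetically) strmadmin.streams_queue_table, JMS Types and XMLType access can be enabled with:

   BEGIN
     DBMS_AQADM.ENABLE_JMS_TYPES(queue_table => 'strmadmin.streams_queue_table');
   END;
   /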

26.6 XML Logical Change Record Schema Modification

The ROW_LCR and DDL_LCR element definitions in the XML Logical Change Record (LCR) Schema have been modified to include an additional clause, xdb:defaultTable. These lines now appear as:

<element name="ROW_LCR" xdb:defaultTable="">
<element name="DDL_LCR" xdb:defaultTable="">

27 Oracle Text

Please note the following items when working with Oracle Text. You should also check entries for Oracle Text Application Developer's Guide and Oracle Text Reference in the Documentation Addendum.

27.1 Unsupervised Classification (KMEAN Clustering)

Oracle Text now produces hierarchical clustering by default. Therefore, the attributes MIN_SIMILARITY and HIERARCHY_DEPTH referred to in the documentation are not used, and the definition for the CLUSTER_NUM attribute should simply be the total number of leaf clusters produced.

27.2 COPY_POLICY Procedure

Oracle Text includes a new procedure, CTX_DDL.COPY_POLICY, which creates a new policy from an existing policy or index. The syntax is:

ctx_ddl.copy_policy(
    source_policy       VARCHAR2,
    policy_name         VARCHAR2
    );

  • source_policy is the name of the policy or index being copied

  • policy_name is the name of the new policy copy

The preference values are copied from the source_policy. Both the source policy or index and the new policy must be owned by the same database user.
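
For example, to create a policy named my_policy (a hypothetical name) from an existing index my_index owned by the same user:

   BEGIN
     ctx_ddl.copy_policy('my_index', 'my_policy');
   END;
   /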

27.3 Parallel Local Partitioned Indexes

If you attempt to create a local partitioned index in parallel, and the attempt fails, you will receive the following error message: ORA-29953: error in the execution of the ODCIIndexCreate routine for one or more of the index partitions. To determine the specific reason why the index creation failed, query the CTX_USER_INDEX_ERRORS view.
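
For example, the following query lists the recorded errors:

   SELECT err_index_name, err_timestamp, err_text
     FROM ctx_user_index_errors
    ORDER BY err_timestamp;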

27.4 Parallel Sync and Optimize Cannot Run Concurrently

You can run sync and optimize or sync and parallel optimize, but not parallel sync and optimize or parallel sync and parallel optimize at the same time. No error is generated; one operation will wait until the other one is done.

27.5 Changes to USER_LEXER Query Procedure

The user-defined lexer query procedure includes a new compMem element in addition to the word and num elements. The purpose of compMem is to enable USER_LEXER queries to return multiple forms for a single query. For example, if a user-defined lexer indexes the word bank as BANK(FINANCIAL) and BANK(RIVER), the query procedure can return the first term as a word and the second as a compMem element:

<tokens>
  <word>BANK(RIVER)</word>
  <compMem>BANK(FINANCIAL)</compMem>
</tokens>

The compMem element is similar to the word element, but its implicit word offset is the same as the previous word token. Oracle Text will equate this token with the previous word token and with subsequent compMem tokens using the query EQUIV operator.

The XML document returned by the user-defined query procedure must be valid with respect to a predefined XML schema. The compMem element must be preceded by a word element or by another compMem element. To accommodate the compMem element, this schema has been modified:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="tokens">
    <xsd:complexType>
      <xsd:sequence>
        <xsd:choice minOccurs="0" maxOccurs="unbounded"> 
          <xsd:element name="eos" type="EmptyTokenType"/>
          <xsd:element name="eop" type="EmptyTokenType"/>
          <xsd:element name="num" type="xsd:token"/> 
          <xsd:group ref="IndexCompositeGroup"/>
        </xsd:choice>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
  <!-- 
  Enforce constraint that compMem element must be preceded by word element
  or compMem element for indexing 
  -->
  <xsd:group name="IndexCompositeGroup">
    <xsd:sequence>
      <xsd:element name="word" type="xsd:token"/>
      <xsd:element name="compMem" type="xsd:token" minOccurs="0"
       maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:group>
...

27.6 Oracle Text Supplied Knowledge Bases

An Oracle Text knowledge base is a hierarchical tree of concepts used for theme indexing, ABOUT queries, and deriving themes for document services. The following Oracle Text services require that a knowledge base be installed:

  • Index creation using a BASIC_LEXER preference where INDEX_THEMES=YES

  • SYNCing of an index where INDEX_THEMES=YES

  • CTX_DOC.THEME

  • CTX_DOC.POLICY_THEME

  • CTX_DOC.GIST

  • CTX_DOC.POLICY_GIST

  • CTX_QUERY.HFEEDBACK

  • CTX_QUERY.EXPLAIN, if using ABOUT or THEMES with TRANSFORM

  • CONTAINS queries that use ABOUT or THEMES with TRANSFORM

  • The Knowledge Base Extension Compiler, ctxkbtc

  • Clustering and classification services, if themes are specified

If you plan to use any of these Oracle Text features, you should install the supplied knowledge bases, English and French, from the Oracle Database 10g Companion CD.

Note that you can extend the supplied knowledge bases, or create your own knowledge bases, possibly in languages other than English and French. For more information about creating and extending knowledge bases, see the Oracle Text Reference.

For information on how to install products from the Companion CD, see the Companion CD Installation Guide and the Oracle Database Installation Guide specific to your platform.


Supplied Knowledge Bases and Upgrades

Because the supplied knowledge bases are contained on the Oracle Database 10g Companion CD, they are not immediately available after an upgrade to Oracle Database 10g. Oracle Text features that depend on the supplied knowledge bases that were available before the upgrade will not function after the upgrade until you install the supplied knowledge bases from the Companion CD.

After an upgrade, you must regenerate all user extensions to the supplied knowledge bases. These changes affect all databases installed in the given $ORACLE_HOME.

For more information on upgrading Oracle Text and supplied knowledge bases, see the Oracle Database Upgrade Guide, Chapter 4, "After Upgrading a Database", section "Upgrading Oracle Text". The Oracle Text Application Developer's Guide contains both general instructions for upgrading from previous releases of Oracle Text and information on supplied knowledge bases.

27.7 New Character Set Support

New character set support has been added for Chinese and Korean lexers:

  • The CHINESE_VGRAM_LEXER now supports the AL32UTF8 and ZHS32GB18030 character sets.

  • The KOREAN_MORPH_LEXER now supports the AL32UTF8 and KO16MSWIN949 character sets.

28 Oracle Ultra Search

Please ignore the following file that is shipped with this product. It contains information that is relevant for the previous Oracle Database release, but not for Oracle Database 10g:

$ORACLE_HOME/ultrasearch/doc/README.html

Please note the following items when working with Oracle Ultra Search.

28.1 VeriSign Class 3 and Class 2 PCA Root Certificate Expiring

Oracle Ultra Search is SSL/HTTPS enabled, but it cannot crawl Web sites, or register forms on Web sites, that identify themselves using a certificate based on the VeriSign Class 3 or Class 2 PCA Root Certificate. Attempts to access these sites result in the following error:

javax.net.ssl.SSLHandshakeException:
   sun.security.validator.ValidatorException: No trusted certificate found

You need to update the truststore in the JDK installed in your $ORACLE_HOME directory. See the following Sun Microsystems site for patch instructions:

http://sunsolve.sun.com/pub-cgi/retrieve.pl?doc=fsalert%2F57436

28.2 Manually Configuring the Administration Tool

With a software-only or a custom database installation, you must manually configure the Ultra Search administration tool by editing the file $ORACLE_HOME/ultrasearch/webapp/config/ultrasearch.properties. You must replace %THIN_JDBC_CONN_STR% with a JDBC connection string to the database, and replace %DOMAIN% with the domain name.

A non-configured ultrasearch.properties file looks like this:

connection.driver=oracle.jdbc.driver.OracleDriver
connection.url=jdbc:oracle:thin:@%THIN_JDBC_CONN_STR%
domain=%DOMAIN%

After instantiation, the file should look something like this:

connection.driver=oracle.jdbc.driver.OracleDriver
connection.url=jdbc:oracle:thin:@myhost:1521:mysid
domain=mydomain.com

28.3 Sample Query Application

The Oracle Ultra Search sample query application has been redesigned to showcase keyword-in-context and highlighting features, as well as a new look-and-feel. These changes are made to search.jsp and its dependent files. Keyword in context shows a section of the original document that contains the search terms. Highlighting shows the entire document with the search terms in a different color. In order for highlighting to work, the crawler must be configured with the setting that keeps the cache files.

Highlighting is implemented in cache.jsp and can be customized by the customer.

Note that framed HTML documents contain only the frame layout and frame content specification, but not the actual content. Therefore, the cached version of these documents appears blank in a browser.

28.4 Default Search Attributes

Oracle Ultra Search now has the following default search attributes: Title, Author, Description, Subject, Mimetype, Language, Host, and LastModifiedDate. They can be incorporated in search applications for a more detailed search and richer presentation.

28.5 Document Service

Oracle Ultra Search provides a set of new crawler agent APIs, the Document Service. The Document Service has the following features:

  • It allows generation of any attribute data based on the document contents.

  • It accepts robot meta tag instructions from the agent for the target document.

  • It transforms the original document contents for indexing control.

For details, see the following file:

$ORACLE_HOME/ultrasearch/sample/agent/README.html

28.6 Post-Upgrade Configuration Steps

After upgrading to the current release, follow these configuration steps:

  1. Set the ORACLE_HOME and ORACLE_SID environment variables to Oracle Database 10g.

  2. Change directories to: ORACLE_HOME/ultrasearch/admin/.

  3. Issue this command: sqlplus "sys/password as sysdba"

  4. Issue this command:

    @wk0config.sql WKSYSPW JDBC_CONNSTR LAUNCH_ANYWHERE NET_SERVICE_NAME
    
    

    where:

    • WKSYSPW is the password for the WKSYS schema.

    • JDBC_CONNSTR is the JDBC connection string. Use the format hostname:port:sid, such as machine1:1521:iasdb, if the database is not in the Oracle Real Application Clusters (RAC) environment.

      If the database is in a RAC environment, use the TNS keyword-value format instead, because it allows connection to any node of the system:

      (DESCRIPTION=(LOAD_BALANCE=yes)
         (ADDRESS_LIST=
                   (ADDRESS=(PROTOCOL=TCP)(HOST=cls02a)(PORT=3001))
                   (ADDRESS=(PROTOCOL=TCP)(HOST=cls02b)(PORT=3001)))
         (CONNECT_DATA=(SERVICE_NAME=sales.us.acme.com)))
      
      
    • LAUNCH_ANYWHERE is the mode of the database. Setting it to TRUE indicates that the database is in RAC mode; FALSE indicates that the database is not in RAC mode.

    • NET_SERVICE_NAME is the network service name used by wk0config.sql to establish the database connection. Setting it to "" (an empty string) while running wk0config.sql from the database host eliminates the need to specify the network service name.


Running in non-RAC Mode
@wk0config.sql welcome1 machine1:1521:iasdb FALSE ""

Running in RAC Environment
@wk0config.sql welcome1 
"(DESCRIPTION=(LOAD_BALANCE=yes)
   (ADDRESS_LIST=
      (ADDRESS=(PROTOCOL=TCP)(HOST=cls02a)(PORT=3001))
      (ADDRESS=(PROTOCOL=TCP)(HOST=cls02b)(PORT=3001)))
   (CONNECT_DATA=(SERVICE_NAME=sales.us.acme.com)))" FALSE ""

28.7 Oracle Ultra Search Failover in a RAC Environment

When RAC uses the Cluster File System (CFS), the Oracle Ultra Search crawler can be launched from any of the RAC nodes. As long as at least one RAC node is up and running, Oracle Ultra Search remains available.

When RAC is not using CFS, the Oracle Ultra Search crawler always runs on a specified node. If this node stops operating, you must run the wk0reconfig.sql script to move Oracle Ultra Search to another RAC node.

sqlplus wksys/wksys_passwd
@ORACLE_HOME/ultrasearch/admin/wk0reconfig.sql instance_name connect_url

  • instance_name is the name of the RAC instance that Oracle Ultra Search uses for crawling. After connecting to the database, issue SELECT instance_name FROM v$instance to get the name of the current instance.

  • connect_url is the jdbc connection string that guarantees a connection only to the specified instance:

    "(DESCRIPTION=
       (ADDRESS_LIST=
         (ADDRESS=(PROTOCOL=TCP)
                  (HOST=<nodename>)
                  (PORT=<listener_port>)))
       (CONNECT_DATA=(SERVICE_NAME=<service_name>)))"
    
    

    Even if the crawler cache is preserved, switching Oracle Ultra Search from one RAC node to another loses the contents of the cache. Force a re-crawl of the documents after switching instances, as in the sketch below.
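
    For illustration only, the following SQL*Plus sketch shows one way to perform the switch; the instance name RAC2 and the host, port, and service name in the connection string are assumed values, not defaults:

    -- Identify the instance currently used for crawling
    SELECT instance_name FROM v$instance;

    -- Reassign Oracle Ultra Search to another instance (all names are illustrative)
    @ORACLE_HOME/ultrasearch/admin/wk0reconfig.sql RAC2
      "(DESCRIPTION=
         (ADDRESS_LIST=
           (ADDRESS=(PROTOCOL=TCP)(HOST=cls02b)(PORT=1521)))
         (CONNECT_DATA=(SERVICE_NAME=sales.us.acme.com)))"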

28.8 Oracle Ultra Search Uses SSL

Oracle Ultra Search supports SSL; all content crawling, indexing, and querying is encrypted using SSL, and Web sites are crawled through HTTPS. The Oracle Ultra Search administration tool registers HTML Forms accessible through HTTPS.

This SSL service is based on Sun Microsystems' JDK and includes Sun's default Trust and Key Managers and a default truststore (the cacerts file). Oracle Ultra Search treats SSL as a service of the JVM, inheriting the JRE's default SSL configuration. Any customization of the SSL services should be done at the JDK level.

The truststore is a list of public SSL certificates that identifies the trusted entities with which Oracle Ultra Search can communicate over SSL. You must maintain your truststore. Sun Microsystems ships a keytool utility that lets you add, update, remove, and import certificates, or create your own self-signed certificates.

Oracle Ultra Search runs across several JVMs for the crawler, the remote crawler, and the middle tiers, potentially using different JRE/JDK installations and truststores. Oracle Database 10g uses one JDK for both the middle-tier and crawler JVMs.

Additional information is provided in the Java Secure Socket Extension (JSSE) Reference Guide, at http://java.sun.com/j2se/1.4.2/docs/guide/security/jsse/JSSERefGuide.html.

29 Oracle XML Developer's Kit

The Oracle XML Developer's Kit readme file is located at:

$ORACLE_HOME/xdk/readme.html

30 PL/SQL

Please note the following items when working with PL/SQL.

30.1 Native Compilation

When upgrading a database, remove the obsolete initialization parameters related to native compilation to avoid errors. Note that PL/SQL native compilation in this release is more robust and easier to use than in previous releases. See PL/SQL User's Guide and Reference for details.

Specifically, these four initialization parameters have become obsolete: PLSQL_NATIVE_C_COMPILER, PLSQL_NATIVE_LINKER, PLSQL_NATIVE_MAKE_FILE_NAME, and PLSQL_NATIVE_MAKE_UTILITY.

Use of these parameters in the initialization parameter file or in ALTER SYSTEM statements now results in an error. If you are upgrading an existing database to Oracle Database 10g, you must remove these parameters from the parameter file.

The directory denoted by the parameter PLSQL_NATIVE_LIBRARY_DIR now has a different significance. With Oracle Database 10g, generated DLLs are stored in the database catalog. The directory specified by PLSQL_NATIVE_LIBRARY_DIR is now used as a temporary staging area for DLLs prior to mapping them into the Oracle process for execution. Storing the generated DLLs in the database catalog means that backup procedures transparently accommodate PL/SQL native compilation. Further, it guarantees that PL/SQL native compilation can be used with Oracle Real Application Clusters on all supported platforms.

The use of PLSQL_COMPILER_FLAGS to specify native compilation is deprecated. Instead, use the parameter PLSQL_CODE_TYPE to specify the compilation mode.
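
As an illustrative sketch only (the procedure name my_proc is assumed, and spnc_commands must already point to a valid C compiler), a session can switch to native compilation and verify the result as follows:

-- Compile subsequent PL/SQL units in this session natively
ALTER SESSION SET PLSQL_CODE_TYPE = 'NATIVE';

-- Recompile an existing procedure under the new setting
ALTER PROCEDURE my_proc COMPILE;

-- Confirm which code type the unit now uses
SELECT name, type, plsql_code_type
  FROM user_plsql_object_settings
 WHERE name = 'MY_PROC';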

Prior releases of Oracle included a makefile, spnc_makefile.mk, to generate DLLs from the generated C file. This file is now obsolete and is replaced by the file spnc_commands, which contains one or more command templates for generating a DLL from the generated C file. The spnc_commands file is located in the directory $ORACLE_HOME/plsql on all platforms; its contents are port-specific. Before using native compilation, verify that the full path name of the C compiler specified in the spnc_commands file is correct.

The commands in the spnc_commands file are executed directly by Oracle Database, without resorting to a make utility or shell script interpreter. If native compilation fails for any reason, the errors are recorded like other PL/SQL compilation errors and can be queried from the USER_ERRORS view or displayed with the SQL*Plus SHOW ERRORS command.

31 Pro*C

The Pro*C readme file is located at:

$ORACLE_HOME/precomp/doc/proc/readme.doc

32 Pro*COBOL

The Pro*COBOL readme file is located at:

$ORACLE_HOME/precomp/doc/procob2/readme.doc

33 Pro*FORTRAN

The Pro*FORTRAN readme file is located at:

$ORACLE_HOME/precomp/doc/prolx/readme.txt

34 Replication

Please note the following items when working with replication.

34.1 Globalization Support (NLS) and Replication

In a replication environment involving Oracle Database 10g and pre-Oracle9i Database releases at sites utilizing NCHAR and NVARCHAR2 datatypes, an Oracle Database patch must be installed at the earlier release (pre-Oracle9i Database) site. Contact Oracle Support Services to obtain the appropriate NLS patch, as recommended in ALERT 140014.1, "Oracle8, Oracle8i to Oracle9i using New AL16UTF16", available on OracleMetaLink at http://metalink.oracle.com.

34.2 Virtual Private Database (VPD) and Replication

For multimaster replication, there must be no VPD restrictions on the replication propagator and receiver schemas. For materialized views, the defining query for the materialized view may not be modified by VPD. VPD must return a NULL policy for the schema that performs both the create and refresh of the materialized view. Creating a remote materialized view with a non-NULL VPD policy does not generate an error, but may yield incorrect results.

35 SQL

Please note the following items when working with SQL.

35.1 DESCRIBE Behavior Change

The DESCRIBE behavior for invalidated objects has been changed to be more user-friendly. In previous releases, a DESCRIBE on an invalidated object failed with the error ORA-24372: invalid object for describe; this error continued to be generated even after the object was revalidated. In Oracle Database 10g, a DESCRIBE operation automatically revalidates the object, and the operation succeeds if the revalidation is successful.

35.2 SELECT ANY TRANSACTION System Privilege for FLASHBACK_TRANSACTION_QUERY

SELECT ANY TRANSACTION is a new system privilege in this release. This privilege allows the grantee to query the FLASHBACK_TRANSACTION_QUERY view. It is a very powerful privilege, because it allows the grantee to view all data in the database, including past data. Grant this privilege only to users who need to use the Flashback Transaction Query feature.
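
A minimal sketch, assuming a user named audit_usr and interest in past changes to the HR.EMPLOYEES table (both names are illustrative):

-- Grant the privilege required to query FLASHBACK_TRANSACTION_QUERY
GRANT SELECT ANY TRANSACTION TO audit_usr;

-- As audit_usr: review past transactions against a table of interest
SELECT xid, operation, undo_sql
  FROM flashback_transaction_query
 WHERE table_owner = 'HR'
   AND table_name  = 'EMPLOYEES';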

35.3 MODEL Clause Added to the SELECT Statement

MODEL clause FOR loops can be used on both sides of model rule definitions.

MODEL clause FOR loops generate dimension values. The dimension value combinations generated by FOR loops on the left side of a rule are counted as part of the MODEL clause's 10,000 rule limit.

The rule count of a model is independent of the number of partition values returned by its query.
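
For illustration, the following sketch uses a FOR loop on the left side of a rule to generate dimension values; the sales_data table and its columns are assumptions:

-- Each (country, year) cell generated by the FOR loop counts toward the
-- 10,000-rule limit described above.
SELECT country, year, amount
  FROM sales_data
 MODEL
   PARTITION BY (country)
   DIMENSION BY (year)
   MEASURES (amount)
   RULES (
     amount[FOR year FROM 2004 TO 2006 INCREMENT 1] = amount[2003] * 1.1
   );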

35.4 Storage of Large Objects, LOBs

In several locations in the documentation, the maximum size of a large object (LOB) is stated as (4 gigabytes - 1) x (database block size). This is correct only if the tablespaces in the database are of the standard block size, and if a nondefault value for the CHUNK parameter of LOB storage was not specified when creating a LOB column. The correct value for the maximum size of a LOB is:

(4 gigabytes - 1) x (the value of CHUNK)

Oracle Database 10g allows you to create tablespaces with block sizes different from the database block size. The maximum size of a LOB depends on the size of the tablespace blocks. CHUNK is a parameter of LOB storage; its value is controlled by the block size of the tablespace in which the LOB is stored. When you create a LOB column, you can specify a value for CHUNK, which is the number of bytes allocated for LOB manipulation. The value must be a multiple of the tablespace block size; otherwise, Oracle Database 10g rounds it up to the next multiple. If the tablespace block size is the same as the database block size, then CHUNK is also a multiple of the database block size. The default CHUNK size is one tablespace block, and its maximum value is 32K.

As an example, suppose that the database block size is 32K and you create a tablespace with a nonstandard block size of 8K. Further suppose that you create a table with a LOB column and specify a CHUNK size of 16K, a multiple of the 8K tablespace block size. In this case, the maximum size of a LOB in this column is (4 gigabytes - 1) x 16K. The documentation also incorrectly describes CHUNK as a multiple of the database block size. This is true only if the tablespace has the same block size as the database. If the tablespace uses a nonstandard block size, then CHUNK is a multiple of the tablespace block size.
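
The scenario just described can be sketched as follows; the tablespace and table names are illustrative, and the DB_8K_CACHE_SIZE initialization parameter must be set before an 8K tablespace can be created in a 32K-block database:

-- Tablespace with a nonstandard 8K block size
CREATE TABLESPACE lob_ts
  DATAFILE '/u01/oradata/db1/lob_ts01.dbf' SIZE 100M
  BLOCKSIZE 8K;

-- LOB stored with CHUNK 16K, a multiple of the 8K tablespace block size;
-- the maximum LOB size is then (4 gigabytes - 1) x 16K
CREATE TABLE doc_table (
  id   NUMBER,
  body CLOB
)
LOB (body) STORE AS (TABLESPACE lob_ts CHUNK 16K);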

36 SQL*Module for ADA

The SQL*Module for ADA readme file is located at:

$ORACLE_HOME/precomp/doc/mod/readme.txt

37 SQL*Plus

The SQL*Plus readme file is located at:

$ORACLE_HOME/sqlplus/doc/README.htm

38 Summary Management

Please note the following items when working with Summary Management.

38.1 Feature Availability

The creation and refresh features of materialized views are supported in both the Standard and Enterprise Editions. However, query rewrite and materialized view advice from the SQLAccess Advisor are available in the Enterprise Edition only.

38.2 NLS Parameters

When using or refreshing certain materialized views, you must ensure that your NLS parameters are the same as when you created the materialized view. Materialized views that fall under this restriction contain the following constructs:

  1. Expressions that may return different values depending on NLS parameter settings. For example, (date > '01/02/03') and (rate <= '2.150') are NLS parameter dependent expressions.

  2. Equijoins where one side of the join is character data. The result of such an equijoin depends on collation, which can change on a session basis, giving an incorrect result in the case of query rewrite or an inconsistent materialized view after a refresh operation.

  3. Expressions that generate internal conversion to character data in the select list of a materialized view, or inside an aggregate of a materialized aggregate view. This restriction does not apply to expressions that involve only numeric data, for example, a+b where a and b are numeric values.

39 Table Compression

Table compression does not support tables with more than 255 columns. Compression operations, such as ALTER TABLE MOVE COMPRESS, succeed on such tables, but they are implemented as no-operations. After such an operation, the data is still stored in an uncompressed format, although the data dictionary shows these tables as compressed.
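
For illustration, assuming a table named wide_table that has more than 255 columns, the following sequence succeeds but leaves the data uncompressed while the dictionary reports the table as compressed:

ALTER TABLE wide_table MOVE COMPRESS;

-- The dictionary shows COMPRESSION = ENABLED even though the move was a no-operation
SELECT table_name, compression
  FROM user_tables
 WHERE table_name = 'WIDE_TABLE';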

40 Types

Please note the following item when working with types.

40.1 Object Type Translator

The Object Type Translator (OTT) readme file is located at:

$ORACLE_HOME/precomp/doc/ott/readme.doc

41 Utilities

Please note the following items when working with utilities.

41.1 Data Pump Export/Import and Rollback Consumption

Data Pump Export and Import consume more rollback segment or undo tablespace space than the original Export and Import utilities. This is due to additional metadata queries (Export) and some relatively long-running master table queries (Import). As a result, for databases with large amounts of metadata, you may receive the following error: ORA-01555: snapshot too old.

To avoid this error, consider adding rollback segments, adding undo tablespace space, or increasing the UNDO_RETENTION parameter for the database.
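
As a minimal sketch (the 3600-second value is an assumption and should be sized for your workload), the undo retention period can be increased before a large metadata export or import:

-- Keep undo available longer so long-running Data Pump queries avoid ORA-01555
ALTER SYSTEM SET UNDO_RETENTION = 3600;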

41.2 Data Pump Export using Automatic Storage Management

There is no support for extracting Data Pump Export dumpfiles from Automatic Storage Management (ASM) diskgroups into operating system files. Be aware of this restriction if you are using ASM diskgroups and need to move Data Pump Export dumpfiles between systems.

41.3 Data Pump Export/Import using Automatic Storage Management and LOGFILE

To perform a Data Pump Export or Import using Automatic Storage Management (ASM), you must specify a LOGFILE parameter that references a DIRECTORY object that does not use the ASM '+' notation. That is, the log file must be written to a disk file, not into ASM storage. Alternatively, you can specify NOLOGFILE=Y; however, this prevents the writing of the log file.
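
One way to satisfy this restriction, sketched with illustrative names and paths, is to create a separate DIRECTORY object on an ordinary file system and pass it to the LOGFILE parameter (for example, LOGFILE=dp_log_dir:expfull.log), while the dump files continue to use an ASM-based directory:

-- Directory on a regular file system, used only for the Data Pump log file
CREATE DIRECTORY dp_log_dir AS '/u01/app/oracle/dpump_logs';
GRANT READ, WRITE ON DIRECTORY dp_log_dir TO scott;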

41.4 Data Pump Import with FLASHBACK_SCN and FLASHBACK_TIME

The Data Pump Import FLASHBACK_SCN and FLASHBACK_TIME options pertain only to the Flashback Query capability in the current release. They are not applicable to the Flashback Database, Flashback Drop, and other new Flashback capabilities.

41.5 Data Pump Export/Import and Database Link types

The following types of database links are supported for use with Data Pump Export/Import:

  • Public database link (both public and shared)

  • Fixed-user database link

  • Connected user database link

The following type of database link is not supported for use with Data Pump Export/Import:

  • Current user database link

Refer to the Oracle Database SQL Reference for further information about database links.

41.6 Data Pump Export/Import Roles with NETWORK_LINK

When performing a Data Pump Import using the NETWORK_LINK option, if the USERID that is executing the job has the IMP_FULL_DATABASE role on the target database, then that user must also have the EXP_FULL_DATABASE role on the source database.

41.7 Data Pump Export and Compressed Tables

Compressed tables are supported in Data Pump Export. However, the default size estimation using ESTIMATE=BLOCKS is inaccurate for that type of table, since the size estimate does not reflect that the data was stored in a compressed form. Use ESTIMATE=STATISTICS to get a more accurate size estimation for compressed tables.

41.8 Data Pump Export and ATTACH

When attaching to a stopped Data Pump Export job, the dumpfile set and master table must be undisturbed, otherwise the ATTACH fails. If the dumpfile set was deleted before the job completed, then the Data Pump master table must be manually dropped using the instructions in Oracle Database Utilities.

41.9 DBMS_METADATA MODIFY Transform Restriction

A maximum of ten instances of any remap parameter may be specified for a MODIFY transform. That is, you may specify up to ten REMAP_DATAFILE parameters, up to ten REMAP_SCHEMA parameters, and so on. Additional instances are ignored. As a workaround, perform another ADD_TRANSFORM for the MODIFY transform and specify the additional remap parameters:

th1 := dbms_metadata.add_transform(h,'MODIFY');
dbms_metadata.set_transform_param(th1,'REMAP_SCHEMA','USER1','USER101');
dbms_metadata.set_transform_param(th1,'REMAP_SCHEMA','USER2','USER102');
...
dbms_metadata.set_transform_param(th1,'REMAP_SCHEMA', 'USER10', 'USER110');
th2 := dbms_metadata.add_transform(h,'MODIFY');
dbms_metadata.set_transform_param(th2,'REMAP_SCHEMA','USER11', 'USER111');
...

42 Documentation Addendum

This section contains corrections to Oracle Documentation for this release.

42.1 Oracle Database Upgrade Guide


Obsolete NCHAR Character Sets

In Chapter 5, "Compatibility and Interoperability", section "Database Character Sets" should contain the following statement: "In Oracle Database 10g, the NCHAR datatypes such as NCHAR, NVARCHAR2, and NCLOB, are limited to the Unicode character set encoding, UTF8 and AL16UTF16."

42.2 Oracle Database New Features

This document refers to the Resonance feature, which is not available in Oracle Database 10g Release 1, but will be made available at a later date.

42.3 Oracle 2 Day DBA

In Chapter 10, "Monitoring and Tuning the Database", section "Diagnosing Performance Problems", the description of the Automatic Database Diagnostic Monitor (ADDM) states that the default interval is every half hour. The correct default snapshot interval for the ADDM is 1 hour. You can view ADDM analysis with Enterprise Manager.

42.4 Oracle Ultra Search User's Guide

In the "What's New" section, subsection "Monitoring Oracle Ultra Search Components with Oracle Enterprise Manager" refers to the Oracle Enterprise Manager Administrator's Guide. It should refer to Oracle Enterprise Manager Concepts instead.

42.5 Oracle Text Application Developer's Guide


KMEAN Clustering

In Chapter 6, "Document Classification", all references to the KMEAN_CLUSTER cluster type should be replaced by KMEAN_CLUSTERING.


Rule-Based Classification

In Chapter 6, "Document Classification, section "Rule-Based Classification", the example under "Step 5 Classify Documents" is incorrect. The code should read:

create or replace package classifier as
  procedure this;
end;
/
 
show errors
 
create or replace package body classifier as
 
  procedure this
  is
    v_document  clob;
    v_item      number;
    v_doc       number;
  begin
    for doc in (select tk, text from news_table)
    loop
      v_document := doc.text;
      v_item := 0;
      v_doc  := doc.tk;
      for c in (select queryid, category from news_categories
                 where matches(query, v_document) > 0)
      loop
        v_item := v_item + 1;
        insert into news_id_cat values (doc.tk, c.queryid);
      end loop;
    end loop;
 
  end this;
 
end;
/
 
show errors
exec classifier.this

42.6 Oracle Text Reference


CONTAINS Template

In Chapter 1, "SQL Statements and Operators", section "CONTAINS", the fifth line of the template has errors. The following line

<seq><<rewrite>transform((TOKENS, "{", "}", " ; "))</rewrite>/seq>

should be replaced by

<seq><rewrite>transform((TOKENS, "{", "}", " ; "))</rewrite></seq>

KMEAN Clustering

In Chapter 2, "Oracle Text Indexing Elements", all references to the KMEAN_CLUSTER cluster type should be replaced by KMEAN_CLUSTERING.


CTX_CLS Package

In Chapter 6 "CTX_CLS Package",

  • In CTX_CLS.TRAIN, argument doc_id should be docid, and argument preference_name should be preference.

  • In "Syntax for Support Vector Machine Rules", argument preference_name should be preference.


Supported Document Formats

In Appendix B, "Supported Document Formats", make the following replacements:

  • In the table "Word Processing Formats - Windows":

    • Replace Novell/Corel WordPerfect for Windows - Versions through 10 by Novell/Corel WordPerfect for Windows - Versions through 11.

    • Replace Microsoft Word for Windows - Versions through 2002 by Microsoft Word for Windows - Versions through 2003.

  • In the table "Spreadsheet Formats":

    • Replace Microsoft Excel Windows - Versions 2.2 through 2002 by Microsoft Excel Windows - Versions 2.2 through 2003.

    • Replace Quattro Pro for Windows - Versions through 10 by Quattro Pro for Windows - Versions through 11.

  • In the table "Display Formats":

    • Replace PDF - Portable Document Format - Adobe Acrobat Versions through 5.0 ... by PDF - Portable Document Format - Adobe Acrobat Versions through 6.0 ....

  • In the table "Presentation Formats":

    • Replace Corel/Novell Presentations - Versions through 10 by Corel/Novell Presentations - Versions through 11.

    • Replace Microsoft PowerPoint for Windows - Versions 3.0 through 2002 by Microsoft PowerPoint for Windows - Versions 3.0 through 2003.

  • In table "Other Document Formats":

    • Replace Microsoft Project (Text only) - Version 98 by Microsoft Project (Text only) - Versions 98, 2000, 2002, and 2003.

42.7 Oracle Database Administrator's Guide


Automatic Workload Repository

In Chapter 1, "Overview of Administering and Oracle Database", section "Automatic Workload Repository" states that, by default, snapshots are made every 30 minutes. The correct default snapsnot frequency is one hour.


Size of SYSAUX Tablespace

In Chapter 8, "Managing Tablespaces", section "Controlling the Size of the SYSAUX Tablespace" states that a system with an average of 30 concurrent active sessions may require approximately 200 MB to 300 MB of space for its Automatic Workload Repository data. The 200 MB to 300 MB estimate is valid for a system with an average of 10 (rather than 30) concurrent active sessions.

The following table provides guidelines on sizing the SYSAUX tablespace, based on the system configuration and expected load.

Parameter/Recommendation                                              Small    Medium     Large
Number of CPUs                                                            2         8        32
Number of concurrently active sessions                                   10        20       100
Number of user objects: tables and indexes                              500     5,000    50,000
Estimated SYSAUX size at steady state with default configuration     500 MB      2 GB      5 GB


Scheduler (DBMS_SCHEDULER)
  • CREATE_JOB_CLASS Procedure: The default setting for logging_level is LOGGING_RUNS, not NULL.

  • STOP_JOB Procedure: STOP_JOB is not supported for jobs of type executable.


Transient Job

In Chapter 26, "Overview of Scheduler Concepts", there is a reference to a transient job which is somewhat misleading; there is no specific job of type transient. Instead, you can control whether or not to keep metadata once a job has finished running by setting the auto_drop argument. If you set auto_drop to FALSE, job metadata is kept. If auto_drop is set to TRUE, the default job metadata is not kept.


Running Database Configuration Assistant (DBCA) in Silent Mode

In Chapter 2, "Creating an Oracle Database", the following information should be included.

Silent mode does not have a user interface (other than what you initially input on the command line) or user interaction. It outputs all messages, including information, errors, and warnings, to a log file.

From the command line, enter the following to see all of the DBCA options that are available when using silent mode:

dbca -help

The following sections contain examples that illustrate the use of silent mode.

DBCA Silent Mode Example 1: Creating a New Database

To create a new database, enter the following on the command line:

% dbca -silent -createDatabase -templateName Transaction_Processing.dbc 
-gdbname ora10i -sid ora10i -datafileJarLocation 
/private/oracle10i/ora10i/assistants/dbca/templates -datafileDestination 
/private/oracle10i/ora10i/oradata -responseFile NO_VALUE 
-characterset WE8ISO8859P1

DBCA Silent Mode Example 2: Creating a Seed Template

To create a seed template, enter the following on the command line:

% dbca -silent -createCloneTemplate -sourceDB ora10i -sysDBAUserName 
sys -sysDBAPassword change_on_install -templateName copy_of_ora10i.dbc 
-datafileJarLocation /private/oracle/ora10i/assistants/dbca/templates 

Time Zone Files

In Chapter 2, "Creating an Oracle Database", section "Specifying the Database Time Zone File" should contain the following information:

This section identifies $ORACLE_HOME/oracore/zoneinfo/timezone.dat as the default time zone file in the Oracle Database installation directory. The default time zone file is now $ORACLE_HOME/oracore/zoneinfo/timezlrg.dat, a larger file that contains more time zones.

If you use the larger time zone file, you must continue to use it unless you are sure that none of the additional time zones that it contains are used for data that is stored in the database. Also, all databases that share information must use the same time zone file.

To enable the use of $ORACLE_HOME/oracore/zoneinfo/timezone.dat, or if you are already using it as your time zone file and you want to continue to do so in an Oracle Database 10g, perform the following steps:

  1. Shut down the database if it has been started.

  2. Set the ORA_TZFILE environment variable to $ORACLE_HOME/oracore/zoneinfo/timezone.dat.

  3. Restart the database.

42.8 Oracle Database SQL Reference


ANALYZE Statement

In Chapter 13, "SQL Statements: ALTER TRIGGER TO COMMIT", the information on the ANALYZE statement, SAMPLE clause, should contain the following statement:

When you analyze an index from which many rows have been deleted, Oracle Database sometimes executes a COMPUTE STATISTICS operation (which can entail a full table scan), even if you request an ESTIMATE STATISTICS operation. Such an operation can be time consuming.
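
For illustration, assuming an index named emp_name_idx, the following statement requests estimated statistics but may still be executed as a COMPUTE STATISTICS operation in the situation described above:

ANALYZE INDEX emp_name_idx ESTIMATE STATISTICS SAMPLE 10 PERCENT;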


Datetime and Interval Datatypes

In Chapter 2, "Basic Elements of Oracle SQL", the descriptions of INTERVAL YEAR TO MONTH and INTERVAL DAY TO SECOND (in their respective subsections) are reversed. They should read as follows:

  • INTERVAL YEAR TO MONTH stores a period of time using the YEAR and MONTH datetime fields. This datatype is useful for representing the difference between two datetime values when only the year and month values are significant.

  • INTERVAL DAY TO SECOND stores a period of time in terms of days, hours, minutes, and seconds. This datatype is useful for representing the precise difference between two datetime values.
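
The following illustrative query shows a value of each interval datatype described above:

SELECT INTERVAL '2-6' YEAR TO MONTH           AS two_years_six_months,
       INTERVAL '3 12:30:06.5' DAY TO SECOND  AS three_days_plus
  FROM dual;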


Time Zone Files

In Chapter 2, "Basic Elements of Oracle SQL", section "Support for Daylight Saving Times" states: "The region names are stored in two time zone files. The default time zone file is a small file containing only the most common time zones to maximize performance. If your time zone is not in the default file, then you will not have daylight saving support until you provide a path to the complete (larger) file by way of the ORA_TZFILE environment variable." This is incorrect, as the larger file is the new default.


INSERT Diagram

In Chapter 18, "SQL Statements: DROP SEQUENCE to ROLLBACK", section "INSERT", part of the syntax diagram of the conditional_insert_clause is missing in PDF. The syntax diagram in HTML version of the document is correct. The full clause should be:

[ ALL | FIRST ]
WHEN condition
THEN insert_into_clause
     [ values_clause ]
     [ error_logging_clause ]
     [ insert_into_clause
       [ values_clause ]
       [ error_logging_clause ]
     ]...
[ WHEN condition
  THEN insert_into_clause
       [ values_clause ]
       [ error_logging_clause ]
       [ insert_into_clause
         [ values_clause ]
         [ error_logging_clause ]
       ]...
]...
[ ELSE insert_into_clause
       [ values_clause ]
       [ error_logging_clause ]
       [ insert_into_clause
         [ values_clause ]
         [ error_logging_clause ]
       ]...
]
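
For illustration, here is a conditional multitable INSERT that follows this clause; the orders, small_orders, and large_orders tables are assumptions:

INSERT FIRST
  WHEN amount < 1000 THEN
    INTO small_orders (order_id, amount) VALUES (order_id, amount)
  WHEN amount >= 1000 THEN
    INTO large_orders (order_id, amount) VALUES (order_id, amount)
SELECT order_id, amount
  FROM orders;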

42.9 Oracle Database Application Developer's Guide - Workspace Manager

In Example 3-3, the year is incorrectly shown as 2000, when it should be 2003. The example, which inserts a row that is valid from 01-Jan-2003 until it is changed, should be as follows:

INSERT INTO employees VALUES(
  'Baxter',
  40000,
  WMSYS.WM_PERIOD(TO_DATE('01-01-2003', 'MM-DD-YYYY'), 
                  DBMS_WM.UNTIL_CHANGED)
);

In Chapter 4, "DBMS_WM Package: Reference", information from GetPrivs was incorrectly inserted instead of GetPhysicalTableName information for the format and in the name of the parameters table. The rest of the information about the GetPhysicalTableName function is correct.


Import and Export Considerations

In Chapter 1, "Introduction to Workspace Manager", section "Import and Export Considerations" incorrectly states that workspace-level import and export operations are not supported. Workspace-level import and export operations by version-enabled table are supported. The scope of the table export from the workspace can be either the entire table, as seen from the workspace, or just the changes to the table made from the workspace.

This material should be replaced by the following information:

Workspace Manager supports the import and export of version-enabled tables in one of the following two ways: a full database import and export, and a workspace-level import and export through Workspace Manager procedures. No other export modes, such as schema, table, or partition level, are currently supported.

Full database import and export operations can be performed on version-enabled databases using the Oracle utilities; however, the following considerations and restrictions apply:

  • A database with version-enabled tables can be exported to another Oracle Database only if the other database has Workspace Manager installed and does not currently have any version-enabled tables or workspaces (that is, other than the LIVE workspace).

  • For the import operation, you must specify IGNORE=Y.

  • The FROMUSER and TOUSER capabilities of the Oracle Import utility are not supported with version-enabled databases.

For workspace-level export operations, each version-enabled table can be exported at the workspace level. Follow these steps to export a version-enabled table from one database into another database.

  1. Call the DBMS_WM.Export procedure to store all of the data to be exported into a staging table, such as t1. The exported data can be either all data as seen from a particular workspace, savepoint, or instant, or only the data that was modified in the particular workspace. See the information about the DBMS_WM.Export procedure for more details. To export multiple workspaces for a version-enabled table, call the DBMS_WM.Export procedure again, specifying the new workspace to be exported and the original staging table. If you intend to import the data into a non-versioned table, specify the versioned_db parameter as FALSE.

  2. Export the staging table, t1, using the Oracle Export utility.

  3. Import the staging table, t1, into the destination database using the Oracle Import utility.

  4. If you are importing into a version-enabled table, call the DBMS_WM.Import procedure to move the data from the staging table to the version-enabled table, and specify both the workspace where the data resided on the source database and the workspace where the data is stored in the version-enabled table. The structure of the staging table must match that of the version-enabled table. By default, all enabled constraints must be validated before the import procedure completes successfully.

42.10 Oracle Database Reference


TRANSACTIONS Parameter

In Chapter 1, "Initialization Parameters", the TRANSACTIONS initialization parameter specifies how many rollback segments to online when UNDO_MANAGEMENT = MANUAL. The maximum number of concurrent transactions is now restricted by undo tablespace size (UNDO_MANAGEMENT = AUTO) or the number of online rollback segments (UNDO_MANAGEMENT = MANUAL).


LARGE_POOL_SIZE and Automatic Storage Management Files

In Chapter 1, "Initialization Parameters", the following statement is missing:

The value of LARGE_POOL_SIZE derived from the values of PARALLEL_MAX_SERVERS, PARALLEL_THREADS_PER_CPU, CLUSTER_DATABASE_INSTANCES, DISPATCHERS, and DBWR_IO_SLAVES does not take into account the requirements used for Automatic Storage Management files. Automatic Storage Management files use the following amount of memory in the large pool:

(maximum total MB of all concurrently open files in the database) x (mirroring factor) x (8 bytes)

This means that LARGE_POOL_SIZE = 8MB can support one of the following:

  • 1 TB of unmirrored database file space (external redundancy disk group)

  • 1/2 TB of mirrored database file space (normal redundancy disk group)

  • 1/3 TB of mirrored database file space (high redundancy disk group).


LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT Range of Values

In Chapter 1, "Initialization Parameters", the following statement is missing:

In Oracle Database 10g, the LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT initialization parameters have this range of values: 0 to (2^31 - 1).


V$ASM_CLIENT View

In Chapter 4, "Dynamic Performance (V$) View", the V$ASM_CLIENT view is incorrectly documented for database instances. The view description currently states that there is only one row in this view in database instances, containing information about the Automatic Storage Management instance. The actual implementation returns one row per open Automatic Storage Management disk group.

42.11 Oracle Call Interface Programmer's Guide


Instant Client

In Chapter 1, "Introduction and Upgrading", section "Environment Variables for OCI Instant Client" incorrectly states "Environment variables ORA_NLS33, ORA_NLS32, and ORA_NLS are ignored in the Instant Client mode." Instead, only ORA_NLS33 and ORA_NLS_Profile33 are ignored in the Instant Client mode.


OCIBreak()

In Chapter 16, "More OCI Relational Functions", section "OCIBreak()" and in Chapter 2, "OCI Programming Basics", section "Cancelling Calls", OCIBreak() is incorrectly described as not being supported on Windows platforms. OCIBreak() works on Windows systems, including Windows NT, Windows 2000, and Windows XP.


Oracle Streams Advanced Queueing

In Appendix A, "Handle and Descriptor Attributes", section "OCIAQDeqOption Descriptor Attributes" is missing an additional note for OCI_ATTR_WAIT. That note is: "If the OCI_DWQ_NO_WAIT option is used to poll a queue, the messages are not dequeued after polling an empty queue. Use the OCI_DEQ_FIRST_MSG option instead of the default OCI_DEQ_NEXT_MSG setting for OCI_ATTR_NAVIGATION. You can also use a nonzero wait setting of OCI_ATTR_WAIT for dequeuing."


OCIStmtPrepare2()

In Chapter 16, "More OCI Relational Functions", section "Statement Functions", the following is missing from the description of the mode (IN) parameter of the OCIStmtPrepare2() method: "OCI_PREP2_GET_PLSQL_WARNINGS - If warnings are enabled in the session and the PL/SQL program is compiled with warnings, then OCI_SUCCESS_WITH_INFO is the return status of the execution. Use OCIErrorGet() to find the new error number corresponding to the warnings."

42.12 Oracle High Availability Architecture and Best Practices


Database SPFILE and Oracle Net Configuration File Samples

In Appendix B, "Database SPFILE and Oracle Net Configuration File Samples", the following information in incorrect:

  • In Table B-1,

    • *.COMPATIBLE='10.1.0.1.0' should instead be *.COMPATIBLE='10.1.0'

    • *.DB_RECOVERY_FILE_SIZE should instead be *.DB_RECOVERY_FILE_DEST_SIZE

    • The last row of the table, *.INSTANCE_NAME, should be removed.

  • In Table B-3,

    • *.DB_UNIQUE_NAME- should instead be *.DB_UNIQUE_NAME=-, in both columns

    • *STANDBY_ARCHIVE_DEST=USE_DB_RECOVERY_FILE_DEST should instead be *.STANDBY_ARCHIVE_DEST=USE_DB_RECOVERY_FILE_DEST_SIZE

    • For the definition of *.LOG_ARCHIVE_DEST_2 in the Boston column, service=SALES_BOSTON should instead be service=SALES_CHICAGO.


SCN on Standby is Behind Resetlogs SCN on Production

In Chapter 11, "Restoring Fault Tolerance", in the table for "Scenario 1: SCN on Standby is Behind Resetlogs SCN on Production", ALTER SYSTEM ARCHIVE_LOG_CURRENT; should be See "Step 3: Verify Log Transport Services on Production Database".


ASYNC Attribute

In Chapter 7, "Oracle Configuration Best Practices", replace section "Use the ASYNC Attribute with a 50 MB Buffer for Maximum Performance Mode" with section "Use the ASYNC Attribute to Control Data Loss":

Using LGWR ASYNC instead of the archiver in maximum performance mode reduces the amount of data loss. However, ARCH overrides LGWR ASYNC when the ASYNC network buffer does not empty in a timely manner. For best results, use an ASYNC buffer size of at least 10 MB.

Using larger buffer sizes also increases the chance of avoiding ORA-16198 timeout messages that result from a buffer full condition in a WAN. However, if the 'LGWR wait on full LNS buffer' database wait event is among the top three database wait events, use ARCH.

If the network buffer becomes full and remains full for 1 second, the transport times out and converts to ARCH transport. This condition indicates that the network to the standby destination cannot keep up with the redo generation rate on the primary database. This is indicated in the alert log by the following message:

ORA-16198: LGWR timed out on Network Server 1 due to buffer full condition.

This message indicates that the standby destination configured with the LGWR ASYNC attributes encountered an async buffer full condition. Log transport services automatically stop using the network server process, LNSn, to transmit the redo data and convert to using the archiver process, ARCn, until a log switch occurs. At the next log switch, redo transmission reverts to using the LGWR ASYNC transport. This change occurs automatically. Using the largest asynchronous network buffer, 50MB, reduces the chance of the transport converting to ARCH. If this error occurs for every log or for the majority of logs, then the transport should be modified to use the archiver process permanently.

Figure 7-2 shows the architecture when the standby protection mode is set to maximum performance with the LGWR ASYNC option.


Recommendation for the LGWR SYNC Option of LOG_ARCHIVE_DEST_n

In Chapter 7, "Oracle Configuration Best Practices", ignore the section "Set SYNC=NOPARALLEL/PARALLEL Appropriately", as well as the entry in Table 7-2 that refers to that section.

In Appendix B, "Database SPFILE and Oracle Net Configuration File Samples", the recommendation for sample configuration files should be replaced with the following information:

Oracle recommends that you never use the LGWR SYNC=NOPARALLEL option for the LOG_ARCHIVE_DEST_n initialization parameter for the maximum availability or maximum protection modes of Oracle Data Guard. Always use the SYNC=PARALLEL default. Fault detection after a standby instance fails occurs within the time specified by the NET_TIMEOUT option of the LOG_ARCHIVE_DEST_n initialization parameter. Further, Oracle recommends that NET_TIMEOUT be set to 30 seconds for most configurations.

42.13 Oracle Data Guard Concepts and Administration


THROUGH ALL SWITCHOVER

In Chapter 10, "Data Guard Scenarios", Section 10.3.1, "Converting a Failed Primary Database into a Physical Standby Database", Step 5, "Start Redo Apply", the command in the second bullet item should include the THROUGH ALL SWITCHOVER clause, as shown in the following example:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE THROUGH ALL SWITCHOVER DISCONNECT;

Including the THROUGH ALL SWITCHOVER clause ensures that Redo Apply can continue through the end-of-redo marker in the last log file that was archived by the failed primary database. If you do not include this clause, recovery stops and you must issue the command again to restart Redo Apply and continue past the end-of-redo marker.


Oracle Label Security Is Not Supported with a Logical Standby Database

In Chapter 4, "Creating a Logical Standby Database", section "Determine Support for Datatypes and Storage Attributes for Tables" should state that Oracle Label Security is not supported by logical standby databases. If Oracle Label Security is installed on the primary database, SQL Apply fails on the logical standby database with an internal error during startup.


Starting Real-Time Apply
  • In Chapter 10, "Data Guard Scenarios", section "Flashing Back a Physical Standby Database", Step "Restart log apply services", should demonstrate how to start real-time apply:

    Issue the following command to start real-time apply on the physical standby database:

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE;
    
    
  • In Chapter 10, "Data Guard Scenarios", section "Flashing Back a Logical Standby Database", Step "Start SQL Apply", should demonstrate how to start real-time apply:

    Issue the following command to start real-time apply on the logical standby database:

    SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
    

42.14 Oracle Database Java Developer's Guide


Using the Native Java Interface

Chapter 12, "Using the Native Java Interface" should contain the following information.

Oracle Database 10g introduces the native Java interface, a new feature for calling server-side Java code. It simplifies application integration: client-side and middle-tier Java applications can directly invoke Java in the database without defining a PL/SQL wrapper. The native Java interface uses server-side Java class reflection.

In previous releases, calling Java stored procedures and functions from a database client required JDBC calls to associated PL/SQL wrappers. Each wrapper had to be manually published with a SQL signature and a Java implementation. This had the following disadvantages:

  • The signatures permitted only Java types that had direct SQL equivalents.

  • Exceptions issued in Java were not properly returned.

The JPublisher -java option, used with the name of a Java class or package, remedies the deficiencies of JDBC calls to associated PL/SQL wrappers by providing an API for direct invocation of static Java methods. This functionality is also useful for Web Services.

The functionality of the -java option is similar to that of the -sql option, creating a client-side Java stub class to access a server-side Java class, as opposed to creating a client-side Java class to access a server-side SQL object or PL/SQL package. The client-side stub class uses JPublisher code that mirrors the server-side class and includes the following features:

  • Methods that correspond to the public static methods of the server class

  • Two constructors: one that takes a JDBC connection, and one that takes the JPublisher default connection context instance

At runtime, the stub class is instantiated with a JDBC connection. Calls to its methods result in calls to the corresponding methods of the server-side class. Any Java types used in these published methods must be primitive or serializable. For example, assume you want to call the following method in the server:

public String oracle.sqlj.checker.JdbcVersion.to_string();

Use the following -java setting for the JdbcVersion Java class:

-java=oracle.sqlj.checker.JdbcVersion

When you use the -java option, you specify a single or multiple server-side Java class or package. If you want to use multiple classes, provide them as a comma-delimited list.

When you use the -java option, JPublisher generates code similar to the following call:

Connection conn = ...
String serverSqljVersion = (String) Client.invoke(conn,
   "oracle.sqlj.checker.JdbcVersion", "to_string", new Class[] {}, new Object[] {});

The Class[] array is used for the method parameter types while the Object[] array is used for parameter values. In this case, because to_string has no parameters, both arrays are empty. This example demonstrates how a Java client outside of the database can call Java methods loaded in the database server. For more information, see Chapter 5, "Command Line Options and Import Files" in Oracle Database JPublisher User's Guide.

Example: Calling Java Methods Inside the Oracle Database

In order to call a Java method published within the Oracle Database 10g, follow these steps:

  1. Create client stubs using the -java option of JPublisher.

    jpub -u scott/tiger  -java=oracle.sqlj.checker.JdbcVersion:CallinImpl#Callin
    
    

    JPublisher generates a Java interface, Callin, and its implementation, CallinImpl. The CallinImpl class contains a method for each method in oracle.sqlj.checker.JdbcVersion.

  2. The client invokes methods in the published Java class.

    Connection conn=DriverManager.getConnection("jdbc:oracle:oci8", "scott", "tiger");
    Callin ci = new CallinImpl(conn);
    System.out.println("JDBC version inside the server is " +  ci.getDriverMajorVersion());
    
    

The client code produces output similar to the following: "JDBC version inside the server is 10.0 (10.0.0.0.0)"


EJB Call-out

In Chapter 1, "Introduction to Java in Oracle Database 10g", Figure 1-1 is incorrect, and the following information should be added.

In certain enterprise applications, it becomes essential to access Enterprise JavaBeans (EJBs) that are deployed on a remote server from within the database. For example, if you need complex calculations, for which EJBs are well suited, you can call out to an EJB to perform them. Examples of complex calculations include tax calculators. Because the EJB call-out does not currently support transactions, only stateless session beans can be used. Therefore, if a trigger calls out to an EJB and the call fails, the trigger does not roll back.

Thus, through the EJB call-out, Oracle Database provides a means to access remotely deployed EJBs over Remote Method Invocation (RMI).

The EJB JAR is not installed in the database in this release, so you must follow these steps to install the J2EE.JAR:

  1. Load J2EE.JAR using SQL*Plus.

    sqlplus /nolog
    SQL> connect sys/password as sysdba
    SQL> set serveroutput on
    SQL> call dbms_java.set_output(4000);
    SQL> call dbms_java.loadjava ('-r -install -v -s -g public -genmissing
    absolute path to J2EE_HOME/lib/j2ee.jar');
    
    
  2. Grant the proper Java permissions; this example grants permissions to SCOTT:

    SQL> grant ejbclient to scott;
    SQL> call dbms_java.grant_permission('SCOTT','SYS:java.io.FilePermission',
         'absolute_path_to_ORACLE_HOME/javavm/lib/orb.properties','read');
    SQL> call dbms_java.grant_permission('SCOTT',
         'SYS:java.net.SocketPermission','localhost:1024-','listen,resolve');
    SQL> call dbms_java.grant_permission('SCOTT',
         'SYS:java.util.PropertyPermission',
         'java.naming.factory.initial','write');
    SQL> call dbms_java.grant_permission('SCOTT',
         'SYS:java.lang.RuntimePermission','shutdownHooks','');
    SQL> call dbms_java.grant_permission('SCOTT',
         'SYS:java.util.logging.LoggingPermission','control','');
    SQL> call dbms_java.grant_permission('SCOTT',
         'SYS:java.util.PropertyPermission',
         'java.naming.provider.url','write');
    SQL> exit;
    
    

Once the J2EE.JAR is loaded, you can call out from the database to EJBs in the application server. The following steps show how to call out to an EJB from the database using the LoggerEJB demo (available at http://java.sun.com/j2se/1.4.1/docs/guide/rmi-iiop/interop.html) and ojvmjava to execute the LogClient in the database. Note that the EJB application called by the following procedure must already be deployed to the application server.

  1. Load the Java client into the correct schema in the database. In the LogClient example, you load LoggerClient.jar and the Logger interfaces into the SCOTT schema from the LoggerEJB source directory. Note that LoggerClient.jar contains the IIOP interface stubs.

    loadjava -u scott/tiger -r -v LoggerClient.jar ejbinterop/*.class
    
    
  2. Execute the Java client, which calls the EJB application. Use ojvmjava to execute the client's main method. The CORBA URL must be modified to specify the host name and port number on which the application server is executing and listening.

    ojvmjava -u scott/tiger -c "java LogClient"
    ojmvjava -u scott/tiger -c "java ejbinterop.LogClient   corbaname:iiop:1.2@myhost:3700#LoggerEJB"
    
    

If successful, this type of message is added to the server.log: Message from a Java RMI-IIOP client.

The previous example calls out to the EJB application using ojvmjava. If you want to call out from a PL/SQL procedure, use the following set of commands instead:

SQL> create or replace procedure myejb(args varchar2) as language java
     name 'ejbinterop.LogClient.main(java.lang.String[])';
SQL> /
SQL> set serveroutput on
SQL> call dbms_java.set_output(40000);
SQL> call myejb('corbaname:iiop:1.2@myhost:3200#LoggerEJB');

42.15 Oracle Database JDBC Developer's Guide and Reference


WebRowSet

Chapter 18, "Row Set" should contain the following information.

This release of JDBC provides an early implementation of JSR-114 WebRowSet (Public Draft). Its specification is available at the following Web site:

http://jcp.org/aboutJava/communityprocess/first/jsr114/index.html

The WebRowSet API supports the production and consumption of result sets, and their synchronization with the data source, both in XML format and in disconnected fashion. This allows result sets to be shipped across tiers and over Internet protocols.


Reducing the Size of orai18n.jar

If you want to reduce the size of orai18n.jar, do not follow the instructions in Chapter 12, "Globalization Support". The file orai18n.jar contains many important character-related files, most of which are essential to globalization support. Instead of extracting only the character-set files that your application uses, it is safest to follow this three-step process:

  1. Unpack orai18n.jar into a temporary directory.

  2. Delete the character-set files that your application does not use. Do not delete any territory, collation sequence, or mapping files.

  3. Create a new orai18n.jar file from the temporary directory and add the altered file to your CLASSPATH.


DMS-enabled JDBC JAR files

Chapter 21, "End-To-End Metrics Support" should contain the following information.

If you are using a DMS-enabled JDBC JAR file, then you must include the JAR file, dms.jar, for DMS itself in your classpath. The DMS-enabled JDBC JAR file and the DMS JAR file must come from the same Oracle release.


Native Java Interface Support

Chapter 1, "Overview", should contain the following information. Oracle Database 10g introduces the native Java interface's new features for calls to server-side Java code. Previously, calling Java stored procedures and functions from a database client required JDBC calls to associated PL/SQL wrappers. As of this release, applications can call Java stored procedures using a new API for direct invocation of static Java methods.


Performance Issues with SetQueryTimeout

Chapter 26, "Coding Tips and Troubleshooting", should contain the following information.

"The JDBC standard method Statement.cancel() attempts to cleanly stop the execution of a SQL statement by sending a message to the database. In response, the database stops execution and replies with an error message. The Java thread that invoked Statement.execute() waits on the server, and continues execution only when it receives the error reply message invoked by the other thread's call to Statement.cancel().

As a result, Statement.cancel() relies on the correct functioning of the network and the database. If either the network connection is broken or the database server is hung, the client does not receive the error reply to the cancel message. Frequently, when the server process dies, JDBC receives an IOException that frees the thread that invoked Statement.execute(). In some circumstances, the server is hung, but JDBC does not receive an IOException. Statement.cancel() does not free the thread that initiated the Statement.execute(). Due to limitations in the Java thread API, there is no acceptable workaround.

When JDBC does not receive an IOException, Oracle Net may eventually time out and close the connection. This causes an IOException and frees the thread. This process can take many minutes. For information on how to control this time-out, see the description of the readTimeout property for OracleDatasource.setConnectionProperties(). You can also tune this time-out with certain Oracle Net settings. See Oracle Net Services Administrator's Guide for more information.

The JDBC standard method Statement.setQueryTimeout() relies on Statement.cancel(). If execution continues longer than the specified time-out interval, then the monitor thread calls Statement.cancel(). This is subject to all the same limitations described previously. As a result, there are cases when the time-out does not free the thread that invoked Statement.execute().

The length of time between execution and cancellation is not precise. This interval is no less than the specified time-out interval but can be several seconds longer. If the application has active threads running at high priority, then the interval can be arbitrarily longer. The monitor thread runs at high priority, but other high-priority threads may keep it from running indefinitely. Note that the monitor thread is started only if there are statements executed with a nonzero time-out. There is only one monitor thread, which monitors all Oracle JDBC statement execution.

Statement.cancel() and Statement.setQueryTimeout() are not supported in the server-side internal driver. The server-side internal driver runs in the single-threaded server process; the Oracle JVM implements Java threads within this single-threaded process. If the server-side internal driver is executing a SQL statement, then no Java thread can call Statement.cancel(). This also applies to the Oracle JDBC monitor thread.


Autocommit, Global Transactions, and XAConnection

Chapter 9, "Distributed Transactions", should contain the following information.

The connection obtained from an XAConnection object behaves exactly like a regular connection, until it participates in a global transaction; at that time, autocommit status is set to false. After the global transaction ends, autocommit status is returned to the value it had before the global transaction. The default autocommit status on a connection obtained from XAConnection is false in all releases prior to Oracle Database 10g; from this release forward, the default status is true.


New System Properties

Chapter 1, "Overview", should contain the following information.

In this release, oracle.jdbc.driver.OracleDriver supports the following new system properties, all of which default to false:

  • oracle.jdbc.TcpNoDelay sets the TcpNoDelay flag on the Socket, which might speed up the data transfer rate, depending on the native implementation of the Socket class.

  • If oracle.jdbc.defaultNChar is set to true, the default behavior for handling character datatypes is changed to make NCHAR or NVARCHAR2 the default. Setting this property to true removes the need to use setFormOfUse() when using NCHAR or NVARCHAR2 columns.

  • oracle.jdbc.useFetchSizeWithLongColumn, when set to true, increases the number of LONG column entries that are prefetched at one time. This behavior applies to thin drivers only.

  • oracle.jdbc.V8Compatible, when set to true, makes JDBC use a representation of date data that is compatible with the Oracle8i Database.

Note that these are system properties; setting a connection property overrides the value specified as a system property.

42.16 Oracle Database JPublisher User's Guide


SQLJ Option

Chapter 5, "Command Line Options and Import Files", should contain the following information. The Oracle SQLJ translator and runtime libraries are supplied with the Oracle JPublisher product. If you do not have direct access to the SQLJ translator command-line utility, you can translate SQLJ source files through the JPublisher -sqlj option.

42.17 PL/SQL Packages and Types Reference


DBMS_DATA_MINING Package

In Chapter 23, "DBMS_DATA_MINING", the usage notes for the GET_MODEL_DETAILS_SVM function should include the following information: "For the linear SVM model, to reduce storage requirements and speed up model loading, only non-zero coefficients are stored. As a result, if an attribute is missing in the coefficient list returned by GET_MODEL_DETAILS_SVM, then the coefficient of this attribute should be interpreted as zero."
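
For example, the following minimal sketch (the model name svm_model is hypothetical) lists the stored coefficients; any attribute absent from the output should be treated as having a coefficient of zero:

SELECT *
  FROM TABLE(DBMS_DATA_MINING.GET_MODEL_DETAILS_SVM('svm_model'));
-- Attributes that do not appear in this result set have a coefficient of 0.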


DBMS_SCHEDULER Package

In Chapter 83, "DBMS_SCHEDULER":

  • CREATE_JOB_CLASS procedure default setting for parameter logging_level is LOGGING_RUNS, not NULL.

  • STOP_JOB procedure is not supported for jobs of type executable.


DBMS_SQLTUNE Package

Chapter 91, "DBMS_SQLTUNE", is missing the following information:

SELECT_CURSOR_CACHE Function

This function enables collection of SQL statements from the cursor cache.

Syntax

DBMS_SQLTUNE.SELECT_CURSOR_CACHE (
  basic_filter        IN   VARCHAR2 := NULL,
  object_filter       IN   VARCHAR2 := NULL,
  ranking_measure1    IN   VARCHAR2 := NULL,
  ranking_measure2    IN   VARCHAR2 := NULL,
  ranking_measure3    IN   VARCHAR2 := NULL,
  result_percentage   IN   NUMBER   := 1,
  result_limit        IN   NUMBER   := NULL)
 RETURN sys.sqlset PIPELINED;

Parameters

  • basic_filter The SQL predicate to filter the SQL from the cursor cache.

  • object_filter Specifies the objects that should exist in the object list of selected SQL from the cursor cache.

  • ranking_measuren An order-by clause on the selected SQL.

  • result_percentage A percentage on the sum of a ranking measure.

  • result_limit The maximum number of top SQL statements to select from the filtered source, ranked by the ranking measure.
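
For illustration, the following sketch (the SQL Tuning Set name MY_STS and the SCOTT filter are hypothetical) selects the top ten statements from the cursor cache by elapsed time and loads them into a SQL Tuning Set with the related DBMS_SQLTUNE procedures:

DECLARE
  stmt_cur  DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'MY_STS');
  OPEN stmt_cur FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                   basic_filter     => 'parsing_schema_name = ''SCOTT''',
                   ranking_measure1 => 'elapsed_time',
                   result_limit     => 10)) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'MY_STS', populate_cursor => stmt_cur);
END;
/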

42.18 Oracle Database Application Developer's Guide - Fundamentals

Chapter 15, "Using Flashback Features", section "Database Administration Tasks Before Using Flashback Features" should contain the following information:

To use the Flashback Transaction Query feature in Oracle Database 10g, the database must be running with version 10.0 compatibility, and must have supplemental logging turned on with the following SQL statement:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
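
Once supplemental logging is enabled, transaction history can be queried through the FLASHBACK_TRANSACTION_QUERY view. A minimal sketch (the HR.EMPLOYEES table is hypothetical):

SELECT xid, operation, table_name, undo_sql
  FROM flashback_transaction_query
 WHERE table_owner = 'HR'
   AND table_name  = 'EMPLOYEES';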

42.19 Oracle HTML DB User's Guide


Default Install Location

Chapter 2, "Quick Start" defines the default location where Oracle HTML installs as:

http://server:port/pls/Database Authentication Descriptor/htmldb

This should be:

http://server:port/pls/Database Access Descriptor/htmldb

Administration Services Application Location

Chapter 14, "Administering Workspaces", describes the location of the Oracle HTML DB Administration Services application as:

http://server:port/pls/Database Authentication Descriptor/htmldb_admin

This should be:

http://server:port/pls/Database Access Descriptor/htmldb_admin

Referencing Values Within an On Submit Process

In Chapter 13, "Oracle HTML DB APIs," the example in "Referencing Values Within an On Submit Process" incorrectly appears as:

FOR i IN HTMLDB_APPLICATION.G_F01.COUNT LOOP
  htp.p('element '||I||' has a value of '||HTMLDB_APPLICATION.G_F01(i));
END LOOP;

This should be:

FOR i IN 1..HTMLDB_APPLICATION.G_F01.COUNT LOOP 
  htp.p('element '||I||' has a value of '||HTMLDB_APPLICATION.G_F01(i)); 
END LOOP; 

42.20 Oracle Real Application Clusters Administrator's Guide


Oracle Interface Configuration Tool and Supported Storage Interfaces

In Chapter 8, "Administrative Options", it states that the Oracle Interface Configuration (OIFCFG) tool supports storage type interfaces for file I/O. This is incorrect: OIFCFG does not support storage type interfaces for file I/O in this release.


Oracle Notification Services Configuration

In "Adding and Deleting Nodes and Instances", the incorrect command

racgons nodeI:4948 nodeI+1:4948......nodeI+n:4948

should be replaced by

racgons add_config nodeI:4948 nodeI+1:4948......nodeI+n:4948

42.21 Oracle Streams Replication Administrator's Guide


Supplemental Logging

In Chapter 1, "Understanding Streams Replication," section "Supplemental Logging for Streams Replication", replace the two (2) rules referring to apply process parallelism with the following rule: "If the parallelism of any apply process that will apply the changes is greater than 1, then any indexed column at a destination database that comes from one or more columns at the source database must be unconditionally logged."

42.22 Oracle Database Heterogeneous Connectivity Administrator's Guide

The following information should be removed from Chapter 4, "Using Heterogeneous Services Agents", section "Determining the Heterogeneous Services Parameters":

The Distributed Access Manager has a refresh capability available through the menu and toolbar that allows users to rerun queries if necessary and update the data. When the data is refreshed, the tool verifies that the set of registered agents remains the same. If it is not, the global view is updated. See Oracle Enterprise Manager Administrator's Guide and online help for more information about the Distributed Access Manager.

42.23 Oracle Database Globalization Support Guide


Time Zone Files

In Chapter 4, "Datetime Datatypes and Time Zone Support", section "Choosing a Time Zone File" should contain the following information.

This section identifies $ORACLE_HOME/oracore/zoneinfo/timezone.dat as the default time zone file in the Oracle Database installation directory. The default time zone file is now $ORACLE_HOME/oracore/zoneinfo/timezlrg.dat.

$ORACLE_HOME/oracore/zoneinfo/timezlrg.dat is larger than $ORACLE_HOME/oracore/zoneinfo/timezone.dat and contains more time zones.

If you use the larger time zone file, then you must continue to use it unless you are sure that none of the additional time zones that it contains are used for data that is stored in the database. Also, all databases that share information must use the same time zone file.

To enable the use of $ORACLE_HOME/oracore/zoneinfo/timezone.dat, or if you are already using it as your time zone file and you want to continue to do so in an Oracle Database 10g, perform the following steps:

  1. Shut down the database if it has been started.

  2. Set the ORA_TZFILE environment variable to $ORACLE_HOME/oracore/zoneinfo/timezone.dat.

  3. Restart the database.

The section continues by describing the results of entering the following statement:

SELECT tzname, tzabbrev FROM v$timezone_names;

The results that are shown are for $ORACLE_HOME/oracore/zoneinfo/timezone.dat, which is no longer the default time zone file. However, similar output also results for the new default time zone file.

The section also contains output from the following statement:

SELECT UNIQUE tzname FROM v$timezone_names;

The results shown are for $ORACLE_HOME/oracore/zoneinfo/timezone.dat, which is no longer the default time zone file. However, similar output also results for the new default time zone file.


Customizing Time Zones

In Chapter 13, "Customizing Locale", section "Customizing Time Zones" identifies $ORACLE_HOME/oracore/zoneinfo/timezone.dat as the default time zone file in the Oracle Database installation directory. The default time zone file is now $ORACLE_HOME/oracore/zoneinfo/timezlrg.dat.


Time Zone Names

In Appendix A, "Locale Data", Table A-14, "Time Zone Names", contains time zone names from $ORACLE_HOME/oracore/zoneinfo/timezone.dat and $ORACLE_HOME/oracore/zoneinfo/timezlrg.dat. Change the column title "Is It in the Default Time Zone File?" to "Is It in the Smaller Time Zone File?".

Change the paragraph immediately preceding Table A-14 as follows: "Table A-14 shows the time zone names in the default time zone file that is supplied with the Oracle Database. The default time zone file is $ORACLE_HOME/oracore/zoneinfo/timezlrg.dat. Oracle also supplies a smaller time zone file, $ORACLE_HOME/oracore/zoneinfo/timezone.dat. See Chapter 4, "Datetime Datatypes and Time Zone Support", section "Choosing a Time Zone File"."

42.24 Oracle Database Recovery Manager Reference

In Chapter 2, "RMAN Commands", section "CONVERT", example "Converting Tablespaces on the Target Platform: Example" illustrates the use of the RMAN CONVERT command, and currently appears as:

RMAN> CONVERT DATAFILE='/tmp/transport_solaris/*'
      DB_FILE_NAME_CONVERT
        '/tmp/transport_solaris/fin','/orahome/dbs/fin',
        '/tmp/transport_solaris/hr','/orahome/dbs/hr'

Use the following instead:

RMAN> CONVERT DATAFILE=
            '/tmp/transport_solaris/fin/fin01.dbf',
            '/tmp/transport_solaris/fin/fin02.dbf',
            '/tmp/transport_solaris/hr/hr01.dbf',
            '/tmp/transport_solaris/hr/hr02.dbf'
      DB_FILE_NAME_CONVERT
            '/tmp/transport_solaris/fin','/orahome/dbs/fin',
            '/tmp/transport_solaris/hr','/orahome/dbs/hr'

The wildcard used in the original example is not supported. The names of datafiles that should be converted must be spelled out explicitly.

42.25 Oracle XML Developer's Kit Programmer's Guide

In Chapter 7, "XML SQL Utility (XSU)", section "XSU Generating XML....", add the following note at the end of the "Create a connection" discussion: "Note: oracle.xml.sql.dataset.OracleXMLDataSetExtJdbc is used only for Oracle JDBC, while oracle.xml.sql.dataset.OracleXMLDataSetGenJdbc is used for non-Oracle JDBC."

42.26 Oracle Database Recovery Manager Reference


Converting Tablespaces on the Target Platform: Example

In Chapter 2, "RMAN Commands", section "CONVERT", example "Converting Tablespaces on the Target Platform: Example", the following information should be added:

In this scenario, you need to transport these tablespaces from a source database running on a Sun Solaris host to a destination database running on a Linux PC host:

  • finance datafiles (fin): /orahome/fin/fin01.dbf and /orahome/fin/fin02.dbf

  • human resources datafiles (hr): /orahome/hr/hr01.dbf and /orahome/hr/hr02.dbf

If you plan to perform conversion on the target host, you should temporarily store the unconverted datafiles in the directory /tmp/transport_solaris/ on the target host. When the datafiles are inserted into the destination database, they will be stored in /orahome/dbs.

This example assumes that you followed these steps in preparation for the tablespace transport:

  1. Set the source tablespaces to be transported to be read-only.

  2. Use the Original Export utility to create the structural information file, expdat.dmp.

  3. Gather expdat.dmp and unconverted tablespace datafiles that should be transported.

  4. Copy these files to the destination host in the /tmp/transport_solaris/ directory.

  5. Preserve the subdirectory structure from the original location of the files; the datafiles should be stored as:

    /tmp/transport_solaris/fin/fin01.dbf
    /tmp/transport_solaris/fin/fin02.dbf
    /tmp/transport_solaris/hr/hr01.dbf
    /tmp/transport_solaris/hr/hr02.dbf
    
    
  6. Use RMAN's CONVERT command to convert the datafiles to be transported to the destination host's format.

  7. Deposit the results in /orahome/dbs.

Note the following:

  • You have to identify the datafiles by filename, not by tablespace name. Until the datafiles are plugged in, the local instance cannot determine the desired tablespace names.

  • The DB_FILE_NAME_CONVERT argument controls the name and location of the converted datafiles. You do not have to specify the source or destination platform because RMAN can determine the source platform by examining the datafile, and the target platform defaults to the platform of the host performing the conversion.

% rman TARGET /

RMAN> CONVERT
      DATAFILE=
         '/tmp/transport_solaris/fin/fin01.dbf',
         '/tmp/transport_solaris/fin/fin02.dbf',
         '/tmp/transport_solaris/hr/hr01.dbf',
         '/tmp/transport_solaris/hr/hr02.dbf'
      DB_FILE_NAME_CONVERT=
         '/tmp/transport_solaris/fin','/orahome/dbs/fin',
         '/tmp/transport_solaris/hr','/orahome/dbs/hr'

The result is a set of converted datafiles in the /orahome/dbs/ directory:

/orahome/dbs/fin/fin01.dbf
/orahome/dbs/fin/fin02.dbf
/orahome/dbs/hr/hr01.dbf
/orahome/dbs/hr/hr02.dbf

From this point, you should follow the general outline for tablespace transport. Use the Import utility to plug the converted tablespaces into the new database, and make the tablespaces read-write, if applicable.

42.27 Oracle XML DB Developer's Guide

In Appendix I, "Oracle XML DB Feature Summary", section "Oracle XML DB Limitations", subsection "SubstitutionGroup Limited to 2048 Elements" should be removed, as this limitation no longer exists.

42.28 Oracle OLAP Application Developer's Guide

In Chapter 12, "Administering Oracle OLAP", the following information is missing:

Users who want to author or execute Analytic Workspace Java API applications within the OracleJVM may need the following Java permissions, in addition to the OLAP_DBA or OLAP_USER role:

Permission Type                  Action
java.io.FilePermission           read, write, execute
java.util.PropertyPermission     read, write
java.net.SocketPermission        connect, resolve
java.lang.RuntimePermission      null

You can grant these permissions in either Java or SQL. For more information about OracleJVM security and Java permissions, refer to the Oracle Database Java Developer's Guide.
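
For example, the following sketch grants one of these permissions in SQL with the DBMS_JAVA package (the grantee OLAP_APP_USER is hypothetical):

BEGIN
  DBMS_JAVA.GRANT_PERMISSION(
    grantee           => 'OLAP_APP_USER',
    permission_type   => 'SYS:java.util.PropertyPermission',
    permission_name   => '*',
    permission_action => 'read,write');
END;
/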

42.29 Oracle Advanced Security Administrator's Guide

In Chapter 7, "Configuring Secure Sockets Layer Authentication", section "Oracle Net Tracing File Error Messages Associated with Certificate Validation", the following action should be added under the error message, "Fetch CRL from CRL DP: No CRLs Found":

Ensure that your certificate authority publishes the CRL to the URL that is specified in the certificate's CRL DP extension.

42.30 Oracle Data Mining Application Developer's Guide

Appendix B, "ODM Tips and Techniques" should contain a new section B.2.11, "Linear SVM Model Coefficients":

For the linear SVM model, to reduce storage requirements and speed up model loading, only non-zero coefficients are stored. As a result, if an attribute is missing in the coefficient list returned by GET_MODEL_DETAILS_SVM, the coefficient of this attribute should be interpreted as zero.

42.31 Oracle Data Mining Administrator's Guide

In Chapter 5, "Oracle Data Mining Administration", section "ODM Configuration Parameters" should contain the following description for AI_BUILD_SEQ_PER_Partition:

Data type is int; default is 50000. Keeps the computations constrained to memory-sized chunks. There is no maximum value; this value should not be smaller than 1000.

43 Open Bugs

This section lists known bugs for this release. A supplemental list of bugs may be found as part of the release documentation specific for your platform.

43.1 Automatic Storage Management Known Bugs


Bug 3107894

The Automatic Storage Management (ASM) disk discovery string must be the same on all nodes; ASM disks must be accessible through the same discovery string on every node.

Workaround:

When setting up shared storage for a RAC database on ASM, all disks should have the same access path from all nodes that comprise the RAC cluster. Alternatively, manually set ASM_DISKSTRING to distinct values for each node of the cluster.
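
For instance, a minimal sketch of setting a distinct discovery string for one node, assuming the ASM instance +ASM1 uses an SPFILE (the instance name and path are hypothetical):

ALTER SYSTEM SET ASM_DISKSTRING = '/dev/rdsk/node1_*'
  SCOPE = SPFILE SID = '+ASM1';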


Bug 3343372

ASM rebalance hangs when the process limit is exceeded.

Workaround:

Ensure that the PROCESSES initialization parameter value is sufficiently large to accommodate the ASM_POWER_LIMIT parameter for all arb processes, as well as all other ASM processes.


Bug 3349512

A disk cannot be dropped when the name supplied by the administrator is a keyword.

Workaround:

Do not use SQL keywords as ASM disk names.


Bug 3362592

ASM terminates with ORA-00600 [KFFMXPALLOC_1] error message when creating files larger than 1.3 TB.

Workaround:

Do not create ASM files that are larger than 1.2TB on external redundancy disk groups, larger than 600GB on normal redundancy disk groups, or larger than 400GB on high redundancy disk groups.


Bug 3385592

When using a shared $ORACLE_HOME, a core dump in racgmain shuts down the ASM instance immediately after it starts.

Workaround:

This bug is observed only on the Solaris platform. Do not use the same $ORACLE_HOME for all nodes in the cluster. Alternatively, you can share the same $ORACLE_HOME among all nodes in the cluster as long as you do not use an SPFILE for the ASM instances.


Bug 3386190

Typical installation with ASM does not discover ASMLIB disks.

Workaround:

Perform a custom installation using the advanced database configuration install option. Alternatively, create disk groups after the installation by using DBCA or ASM CREATE DISKGROUP in SQL.
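
A minimal CREATE DISKGROUP sketch, run while connected to the ASM instance (the disk group name and ASMLIB disk names are hypothetical):

CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK 'ORCL:DISK1', 'ORCL:DISK2';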


Bug 3389920

RAC ASM cannot be used for non-RAC database instance.

Workaround:

Use RAC database instances only with RAC ASM, and non-RAC database instances only with non-RAC ASM.


Bug 3390752

On ASM, ADD INSTANCE does not configure ASM to work with DBCONTROL.

Workaround:

Manually update targets.xml with ASM instance details, then reload the agent. For further details, see this file:

$ORACLE_HOME/relnotes/readmes/EM_db_control.txt

Bug 3392653

ASM cannot create files larger than 300GB.

Workaround:

Do not attempt to create an ASM file larger than 300GB on external redundancy disk groups, larger than 150GB on normal redundancy disk groups, or larger than 100GB on high redundancy disk groups. Alternatively, make the SGA size for the ASM and database instances larger than 1GB.


Bug 3401639

The ALTER DISKGROUP DISMOUNT FORCE statement is not supported in this release.

Workaround:

None.

43.2 Compatibility and Upgrade Known Bugs


Bug 3334209

When upgrading an Oracle8i Database to Oracle Database 10g, the size of the four KOT system tables, KOTTD$, KOTAD$, KOTMD$ and KOTTB$, may increase significantly, depending on the size of blocks used. Additional space may be required for a successful upgrade to Oracle Database 10g.

Workaround:

Use the SQL script rdbms/admin/utlu101i.sql to estimate the amount of space necessary. Alternatively, set AUTOEXTEND ON MAXSIZE UNLIMITED for the SYSTEM tablespace.
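
For example, a sketch of enabling autoextend on the SYSTEM datafile (the file path is hypothetical):

ALTER DATABASE DATAFILE '/u01/oradata/db10g/system01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;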

43.3 Globalization Support Known Bugs


Bug 3268735

Users working with Traditional Chinese may encounter a Java exception when invoking GUI components such as oidadmin, dbca, or dbua in the zh_TW or zh_TW.EUC locales.

Workaround:

Change the locale to zh_TW.BIG5.

43.4 OLAP Known Bugs


Bug 3313073

In Analytic Workspace Manager, the default "All objects" option in the Import from EIF file dialog will not import all the objects.

Workaround:

In the Import from EIF file dialog, select the "Import Object Properties" option found on the Advanced tab.


Bug 3325006

Analytic Workspace users should not use the MAINTAIN DELETE ALL command on compressed composites. If a user performs the MAINTAIN DELETE ALL command on a compressed composite, they will receive a system error when they detach the analytic workspace. Also, the subsequent compressed composite may yield incorrect data.

Workaround:

You should first delete any variables set by the compressed composite, then delete the compressed composite itself. Afterwards, you can redefine the compressed composite and any associated variable.


Bug 3340372

OLAP Worksheet cannot be launched.

Workaround:

Launch the OLAP Worksheet from the OLAP Analytic Workspace Manager Tool installed with Oracle Database 10g client.


Bug 3366805

While installing the OLAP option, a custom DBCA database creation can fail with an ORA-00060: Deadlock Detected error message.

Workaround:

Press the Ignore button and continue with the installation. Once the installation is complete, log in as SYSDBA and re-run the $ORACLE_HOME/olap/admin/cataps.sql script.

43.5 Oracle Advanced Security Known Bugs


Bug 3388688

If you are upgrading from a 32-bit version of the Oracle Database, you will receive an ORA-01637: Packet receive failed error message the first time you use the Kerberos authentication adapter.

Workaround:

After upgrading to the 64-bit version of the database and before using the Kerberos external authentication method, check whether the /usr/tmp/oracle_service_name.RC file exists on your system, and remove it.

43.6 Oracle Call Interface Known Bugs


Bug 1704273

This bug manifests itself in MTS setups when shared db links are used between the middle-tier and the dedicated backend. The symptoms of this bug are client disconnects, accompanied by the following error messages: ORA-02068: following severe error from BACKEND, ORA-00022: invalid session ID; access denied.

Workaround:

Set the values of the mts_servers and mts_max_servers initialization parameters to be equal.


Bug 3133023

Performing an array operation through OCIStmtExecute using iters >= 65535 will result in the error ORA-24381: error(s) in array DML.

Workaround:

Do repeated OCIStmtExecute operations in a loop with iters set to a value less than 65535, and use the rowoff argument as necessary for array binds or defines.

43.7 Oracle Data Guard Known Bugs


Bug 2834560

Transient constraint violations as part of batch updates are not supported in a logical standby database. Updates to primary key or unique key causing transient collisions are not resolved in the logical standby database. For instance, the primary database may contain a table contact, (ID number primary key, name varchar2(32)), that contains rows with ID values (1...32). An update statement such as UPDATE CONTACT SET ID = id+1; causes temporary collisions on the ID column but executes successfully in the primary database. On the logical standby database, the attempt to replicate the effects will result in constraint violation.

Workaround

None.


Bug 2795200

Distributed transaction states are not preserved after failover involving a logical standby database. The state of an in-doubt transaction at the primary database is not preserved in the logical standby database on a failover or switchover.

Workaround

None.


Bug 3184834

Logical standby database shows job processes are available although jobs cannot be executed in the logical standby database. Querying the V$PARAMETER view in the Logical Standby database may show JOB_QUEUE_PROCESSES being available, and the logical standby database allows for scheduling jobs. However, the scheduled jobs are not executed in the logical standby database. Some applications may incorrectly assume that job scheduling is possible, when it is not.

Workaround

Set JOB_QUEUE_PROCESSES to 0 before starting SQL Apply. After a failover or switchover to the database, set JOB_QUEUE_PROCESSES to the desired value.
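
For example, assuming a desired post-switchover value of 10:

-- Before starting SQL Apply on the logical standby database:
ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 0;

-- After a failover or switchover makes this database the primary:
ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 10;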


Bug 3199961

ALTER DATABASE GUARD {ALL | STANDBY | NONE} does not propagate to all active instances in RAC. Executing the ALTER DATABASE GUARD command on one instance does not affect the other active instances in a RAC database. Instances that are shut down at the time the command is executed automatically pick up the new guard setting when they start up.

Workaround:

Execute the ALTER DATABASE GUARD command on all active instances.


Bug 3310115

If a logical standby database operating in an Oracle Real Application Clusters mode is involved in a switchover, and if the Oracle Data Guard configuration is running maximum protection mode or maximum availability mode, then standby locks will not be acquired by the instance that performs that switchover. If the standby locks are not acquired, the instance cannot communicate destination failures to other instances; this directly affects no-data-loss recovery.

Workaround:

Before performing a switchover to a logical standby database configured in an Oracle Real Application Cluster, make sure to shut down all instances except for one on each database. After the switchover is complete, the other instances can be started. If the primary database is running in maximum protection mode or maximum availability mode, the original instance that performed the switchover on the primary database should be stopped and restarted. This can take place before or after the other instances have been restored on the database.


Bug 3311688

The Oracle Label Security feature is incompatible with SQL Apply.

Workaround:

If Oracle Label Security is in use, use Redo Apply instead of SQL Apply for reporting and disaster recovery.


Bug 3372626

If you do not cancel real-time apply before performing a failover to a physical standby database, then less redo data may be applied to the standby database, and the new primary database may contain inconsistent data after the failover.

Workaround:

Before failing over to a physical standby database that has real-time apply enabled, issue the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL statement to stop real-time apply before you issue the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH statement to initiate the failover.
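
That is, issue the statements on the physical standby database in this order:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;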


Bug 3389628

When DB_DOMAIN is not set, the Data Guard broker RSM0 background process crashes while using Data Guard GUI or CLI, with ORA-16766 error message:

ORA-16766: Physical apply service unexpectedly offline
Workaround:

Set the DB_DOMAIN initialization parameter to null, then re-enable the database using either Data Guard CLI or GUI.

If using CLI, use this command to set DB_DOMAIN to null:

SQL> ALTER SYSTEM SET DB_DOMAIN='' SCOPE=SPFILE;

The database must be restarted. Make sure the parameter is also set on the primary database; the restart may be deferred until a role change operation. Use the following script to re-enable the database:

DGMGRL> ENABLE DATABASE db_unique_name;
Enabled.

If you set the DB_DOMAIN initialization parameter to a non-null value, you must remove the database and add it back into the configuration:

DGMGRL> REMOVE DATABASE db_unique_name
Removed database "db_unique_name" from the configuration.

DGMGRL> ADD DATABASE 'db_unique_name' AS CONNECT IDENTIFIER IS 'connect_identifier' MAINTAINED AS PHYSICAL;
Database "db_unique_name" added.

If using the GUI, click Reset on the database General properties page to re-enable the database. You can use the Add Standby Database wizard to add the database back into the configuration.


Bug 3398758

In a logical standby database, the apply process may hang under the following conditions:

  • The value of V$STREAMS_APPLY_COORDINATOR.TOTAL_ASSIGNED is not increasing, and the value of V$STREAMS_APPLY_COORDINATOR.TOTAL_RECEIVED is greater than the value of V$STREAMS_APPLY_COORDINATOR.TOTAL_ASSIGNED.

  • There was a recent apply restart in the alert log.

Workaround:

Perform the following steps:

  1. Abort the apply process:

    alter database abort logical standby apply;
    
    
  2. Query SYSTEM.LOGSTDBY$APPLY_MILESTONE.FETCHLWM_SCN.

  3. Increase the value of the _eager_size parameter:

    dbms_logstdby.apply_set('_eager_size',new_value_for_eager_size);
    
    
  4. Restart the apply process:

    alter database start logical standby apply;
    
    
  5. Restore the _eager_size parameter when V$STREAMS_APPLY_COORDINATOR.LWM_MESSAGE_NUMBER is greater than the FETCHLWM_SCN value.

43.8 Oracle Data Mining Known Bugs


Bug 3202916

If a cost matrix is supplied to the build operation, the default behavior is to return costs, not probabilities, when doing a batch apply.

Workaround:

None.


Bug 2545555

Mining attributes may be specified with mixed-case names in the ODM Java API methods, such as "Age", "EducationLevel", and "Affinity_Card", but the ODM server uniformly converts the names to uppercase, such as "AGE", "EDUCATIONLEVEL", and "AFFINITY_CARD", before using them for model operations. Consequently, the input tables used for model build, testing, and scoring should have non-transactional input column names in uppercase, and attribute names for transactional input should also be in uppercase.

Workaround:

When using data tables with ODM, do not use the literal-based naming mechanism to give columns lowercase or mixed-case names, such as "Education_Num", for non-transactional input data. For transactional input, all attributes that are provided through the attribute_name column in the schema:

(sequence_id, attribute_name, attribute_value)

are implicitly converted into upper case before being used for model operations. Also, do not introduce special characters, such as ', ", space, and :, in your attribute names.


Bug 3259277

BLAST problem: When there are gaps in alignment, the attributes pct_identity, positives, mismatches, and gap_openings will be incorrect. However, all other attributes are correct. Users who are not interested in the attributes that are incorrect can still make use of the results. If the alignment does not produce any gaps, all the result attributes are correct.

Workaround:

None.


Bug 3268579

Model Migration: During an RDBMS upgrade from Oracle9i Database Release 2 to Oracle Database 10g, Oracle JVM resource constraints may limit the number of Java Data Mining models that can be migrated at once.

Workaround:

After an RDBMS upgrade, compare the number of models in odm.odm_mining_model in Oracle9i Database Release 2 with the number of models in odm.dm_model view in Oracle Database 10g. If there is a discrepancy between the row number of the odm_mining_model table and the Oracle Database 10g dm_model view, use the following script to migrate all remaining ODM Java models:

cd $ORACLE_HOME/dm/admin 
sqlplus odm/<passwd> 
SQL>@initodm.sql 
SQL>exit 

Verify the results once more in the odm.dm_model view to ensure that all Java models are migrated.


Bug 3313128

k-Means build in the PL/SQL API can fail under stress load, such as a large number of attributes (in the hundreds of thousands), a large number of clusters (in the thousands), or a combination of both.

Workaround:

Build smaller models with fewer clusters and attributes.


Bug 3387835

K-Means build in the PL/SQL API fails in MTS mode when SGA memory is fragmented.

Workaround:

Log in as SYSDBA and execute ALTER SYSTEM FLUSH SHARED_POOL.

43.9 Oracle Database Configuration Assistant Known Bugs


Bug 3155831

When running DBCA, in the last screen of Database Storage, the strings that represent the Chinese locales zh_CN and zh_TW are not displayed correctly.

Workaround:

None.


Bug 3335580

DBCA hangs when using Traditional Chinese locale settings LANG=zh_TW, LC_ALL=zh_TW.

Workaround:

None.

43.10 Oracle Database Resource Manager Known Bugs


Bug 3326388

When the resource_mapping_priority$ table has 0 rows, you may encounter one of the following:

  • Resource manager functionality may generate this error message in the alert log:

    ORA-00600: internal error code, arguments: [kkkicreatecgmap:!efn3], [1403] ...
    
    
  • Export utility may generate an error stack that starts with these two messages:

    EXP-00008: ORACLE error 1403 encountered
    ORA-01403: no data found
    
Workaround:

Follow these steps to diagnose and correct this problem:

  1. Disable the resource manager in your init.ora. You will not be able to use the resource manager after the database comes back up.

  2. Add the following content to your pfile: resource_manager_plan=''.

  3. Start the database.

  4. Verify that there are 0 rows in the resource_mapping_priority$ table.

    select count(*) from resource_mapping_priority$;
    
    

    If the number of rows is not 0, contact your customer support organization.

  5. Execute the following SQL script:

    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('EXPLICIT', 1, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('ORACLE_USER', 7, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('SERVICE_NAME', 6, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('CLIENT_OS_USER', 9, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('CLIENT_PROGRAM', 8, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('CLIENT_MACHINE', 10, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('MODULE_NAME', 5, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('MODULE_NAME_ACTION', 4, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('SERVICE_MODULE', 3, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('SERVICE_MODULE_ACTION', 2, 'ACTIVE');
    insert into resource_mapping_priority$ (attribute, priority, status)
    values ('CLIENT_ID', 11, 'ACTIVE');
    commit;
    
    
  6. Shut down the database. Restore the value of the resource_manager_plan parameter.

  7. Restart the database.

43.11 Oracle HTML DB Known Bugs


Bug 3018082

SVG charts created with Oracle HTML DB cannot display multibyte characters. This problem is caused by the Adobe SVG Viewer not properly determining the browser language settings and applying the correct default font.

Workaround:

To correct this issue:

  • Edit the chart attributes

  • Enclose the chart title in tspan tags

  • Specify the correct language-specific font family.

For example:

<tspan style="Language-specific Font Family">Chart Title</tspan>

Bug 3164030

If you created your application in Oracle HTML DB prior to this release, then you may experience problems with images that were copied to a directory instead of being uploaded into the Oracle HTML DB Image Repository.

Workaround:

You can correct this by uploading the images to the Oracle HTML DB Image Repository. Note that the images must have the same file names as before. Alternatively, you can provide a full path to the virtual image directory on your file system.


Bug 3308710

Date format masks YYYY-MM-DD and RR-MON-DD are not supported in the item type Date Picker.

Workaround:

None.


Bug 3314941

When using the Export XML report template in a configuration of Oracle HTML DB that uses a character set other than AL32UTF8 or UTF8, you will receive an invalid XML error if the report contains multibyte characters or the title of the report region contains multibyte characters.

Workaround:

None.


Bug 3374287

When you download a SQL script that has Japanese table data using the 'Native file format' view link in the Script Repository, all Japanese characters in the saved script on the local machine are corrupted.

Workaround:

None.


Bug 3384664

When you open the number or date format select popup dialog on 'Column Attribute' in the 'Page Definition' of Application Builder, it always displays 'backslash' + 5,234.10 in the dialog, where the yen symbol is expected in a Japanese environment. (Note: backslash and yen share the same character code point but are displayed differently depending on the font used.) The backslash is also displayed when the data format is applied to the page in the application.

Workaround:

None.


Bug 3393090

If you use the Application Builder wizard to create a form on a table or view that includes a column whose name is in Japanese, the name of the new item will also be in Japanese.

Workaround:

When you create new items on the Page Definition page in Application Builder, you need to use the alphanumeric characters A-Z, 0-9, and '_' in item names. You also need to edit item names to alphanumeric characters before you apply changes to the item.

43.12 Oracle Installation - Enterprise Edition Known Bugs


Bug 3374370

When the language is Traditional or Standard Chinese, if you install Enterprise Edition and create a database and then install it again in the same $ORACLE_HOME, you get the following error:

Oracle Universal Installer has detected that there are processes running in the currently selected Oracle Home.
The following processes need to be  ...
Workaround:

None.

43.13 Oracle interMedia Known Bugs


Bug 3250761

Repeatedly calling ordsys.ordimage.processcopy() within a single session can eventually cause an out of memory error. This is a problem for applications that batch process many large images during a single database session.

Workaround:

Establish a new database session when available memory runs low.

43.14 Oracle Real Application Clusters Known Bugs


Bug 3235218

ocrconfig commands can only be executed by the root user, but ocrconfig does not check to ensure that the executing user is root.

If an oraconfig.log file exists before the execution of ocrconfig begins, and if the current user in the current directory has no write privileges for this log file, ocrconfig fails without reporting any errors.

Workaround:

Become root before executing ocrconfig.


Bug 3311316

CLSR_START_ASM does not check the ENABLED or DISABLED status of the ASM instance. When starting an Automatic Storage Management (ASM) instance which has a CRS resource defined for it, the CRS resource is started even when the ASM instance has been disabled through srvctl. The CRS resource will detect that the ASM instance is disabled and shut it down. Thus, a disabled ASM instance cannot be started through SQL*Plus.

Workaround:

Enable the ASM instance before starting it.


Bug 3377903

When shutting down an instance and immediately starting it again using SQL*Plus, sometimes the CRS resource for the instance does not get started, because it is still busy stopping. In this case, srvctl status instance -d name -i sid will incorrectly report the instance not to be running.

Workaround:

Start the instance's CRS resource with srvctl start instance -d name -i sid.


Bug 3388963

The Database Configuration Assistant and the Database Upgrade Assistant have a Services Page that does not allow entry of service names that have a domain name. A dialog states that only alphanumeric characters are allowed because the service name contains a period.

Workaround:

Use SRVCTL to create the service if the service name contains a domain other than the DB_DOMAIN of the database.


Bug 3394085

After a node failure under heavy load conditions, srvctl, vipca, racg*, or crs_* commands may hang. This is caused by a race condition in the CRS node recovery code.

Workaround:

The race condition can be fixed by killing the oldest crsd process; you can find it by examining the $ORA_CRS_HOME/crs/log/hostname.log file. The init process will restart the killed crsd process within a few seconds and clear the hang.

43.15 Oracle Streams Known Bugs


Bug 3241689

The Streams queue buffer may not be initialized on database restart after some types of configuration problems.

Workaround:

Avoid using dbms_propagation_adm or other Oracle Streams APIs interchangeably with Oracle Streams Advanced Queuing (AQ) APIs to create and drop propagations. Always use Oracle Streams APIs for configuring propagation between databases. Particularly if the DB_DOMAIN parameter is null, avoid using multiple database link names where one link may be a substring of another link in propagations from the same queue.


Bug 3316255

In general, DDL statements are not supported on queue tables and may even render them inoperable. For example, issuing an ALTER TABLE ... SHRINK statement against a queue table results in an internal error and all subsequent attempts to use the queue table will also result in errors.

Workaround:

Do not use DDL commands on queue tables.


Bug 3317678

Reducing the scope of rules for Streams processes while the Streams queue contains messages can result in partial or incomplete transactions. If the modified rules eliminate logical change records, LCRs, for a transaction that previously satisfied the rules, and if LCRs for that eliminated transaction already exist in the queue, subsequent LCRs of the transaction, including the commit or rollback, may not be processed.

Workaround:

To ensure that uncommitted transactions are not impacted by such rule changes, stop the Streams processes and confirm that the Streams queue is empty before modifying the rules.


Bug 3384300

Intermittently, executing a remote cursor that has aged out of the shared pool generates an ORA-00600 [OPIXRM-1] error message.

Workaround:

Increase the shared pool size, or mark the remote cursor as kept. See PL/SQL Packages and Types Reference, Chapter 87, "DBMS_SHARED_POOL".
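
A sketch of marking the remote cursor as kept (the address and hash value shown are placeholders; obtain the actual values from V$SQLAREA):

SELECT address, hash_value FROM v$sqlarea WHERE sql_text LIKE '%remote_query%';

BEGIN
  -- 'address,hash_value' identifies the cursor; the 'C' flag keeps a cursor.
  DBMS_SHARED_POOL.KEEP('00000003DA6B82A0,1553607633', 'C');
END;
/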

43.16 Oracle Text Known Bugs


Bug 3372757

A PARALLEL resume operation that uses ALTER INDEX, such as

  • ALTER INDEX REBUILD PARAMETERS ('RESUME') PARALLEL

  • ALTER INDEX REBUILD PARTITION PARAMETERS ('RESUME') PARALLEL

may cause an ORA-02070 error message:

ORA-02070: database does not support antijoin in this context.
Workaround:

Turn off parallelism when using RESUME.

43.17 Oracle Ultra Search Known Bugs


Bug 3382153

Ultrasearch - Search Query API has an error. The KWIC function may cause a PL/SQL buffer overflow exception. The KWIC function used for marking up excerpts in search results uses an internal varchar2 buffer to hold intermediate results. As the size of the text to be marked up approaches the upper limit of the varchar2 buffer, the memory is likely to overflow and cause an exception.

Workaround:

When invoking oracle.ultrasearch.query.Request.setExcerptLength(), use an excerpt length that is significantly smaller than the varchar2 maximum length of 4000.


Bug 3374094

Ultrasearch - Search application/API has an error in some pages of a simple search result. When invoking the KWIC function for multibyte character sets, a message stack that contains the following error is generated:

ORA-06502: PL/SQL: numeric or value error: Character string buffer too small output array extended

This behavior is observed when the function SUBSTRC(x,x,1) returns a unicode character that cannot fit in a buffer declared as char(1), which is the case for databases that use multibyte character sets, such as Korean KO16MSWIN949.

Workaround:

None.


Bug 3379271

Ultrasearch - remote JDBC crawler has an error. The file $ORACLE_HOME/ultrasearch/bin/JdbcCrawlerLauncher.class is needed to run the JDBC remote crawler launcher. It is missing in this release.

Workaround:

None.


Bug 3379283

Ultrasearch - remote JDBC crawler has an error. When starting up the JDBC remote crawler launcher using the runall_jdbc.sh script, you should not get back the command prompt unless you sent the job into the background. If the prompt returns without feedback or error messages, it means that an error was generated, but error reporting was suspended.

Workaround:

None.


Bug 3379299

Ultrasearch - remote JDBC crawler has an error. $ORACLE_HOME/ultrasearch/tools/remotecrawler/scripts/unix/runall_jdbc.sh script fails to set the correct CLASSPATH. Java cannot find the required libraries, such as $ORACLE_HOME/ultrasearch/lib/ultrasearch.jar and others, and the remote crawler launcher cannot start.

Workaround:

Edit the following two files:

  • In $ORACLE_HOME/ultrasearch/tools/remotecrawler/scripts/unix/runall_jdbc.sh, add these two lines immediately after the call to define_env:

    CLASSPATH=${SYSTEM_CLASSPATH}:${APPLICATION_CLASSPATH}
    export CLASSPATH
    
    
  • In $ORACLE_HOME/ultrasearch/tools/remotecrawler/scripts/winnt/runall_jdbc.bat, add this single line immediately after the call to define_env:

    set CLASSPATH=%SYSTEM_CLASSPATH%;%APPLICATION_CLASSPATH%
    

43.18 Oracle XML DB Known Bugs


Bug 3097823

Internet Explorer 6.0 cannot be used as a client for the Oracle XML DB FTP server.

Workaround:

Use another browser; both Netscape 4.7 and Mozilla work.


Bug 3141321

When running in Oracle Real Application Clusters mode, multiple instances of Oracle XML DB may hang during a DBMS_XMLSCHEMA.DELETESCHEMA call.

If a session from node A performs an operation on the same schema that node B is attempting to delete, the DELETESCHEMA operation will hang. This happens because the session pool will remain with node A during the DELETESCHEMA call; the pool is cleared only for the local instance.

Workaround:

Shut down all nodes that have accessed the schema.


Bugs 3244176 and 3244257

In earlier releases, PL/SQL packages XMLDOM, XMLPARSER, and XSLPROCESSOR had an XDK Java-based implementation. In Oracle Database 10g Release 1, these packages have been moved to C-based API implementations, supported by Oracle XML DB. Please note that while we are trying to be close to the original packages, there will be subtle differences in the semantics of some APIs, which are mentioned in bugs 3244176 and 3244257. In this release, we do not guarantee backward compatibility with the XDK packages.

Workaround:

None.


Bug 3330788

Direct path loading is not supported if the schema-based xml column is not a top-level column of a table, or if the types created for the schema-based xml involve inheritance.

Workaround:

Use conventional loading to load the data through SQL*Loader.


Bug 3330801

If the loading of data into tables with schema-based xml columns fails due to an error on the column type, then please use SQL*Loader's conventional path method. The following scenarios can cause potential errors:

  • Loading a schema based xmltype column that involves many object and nested tables can cause the number of open cursors to exceed the default value of the initialization parameter, open_cursors. The error message generated in this case is not indicative of the problem. Therefore, if you suspect this may be the problem, then the solution is to increase the value of the open_cursors parameter, or to use SQL*Loader's conventional path method.

  • If an error is encountered while parsing an xml document, then this error will be fatal to the SQL*Loader job and no data will be loaded. In contrast, SQL*Loader's conventional path method will reject the current row being processed, but will continue to load data for subsequent error-free rows. If this type of error is encountered, then please either correct the error or use SQL*Loader's conventional path method.

Workaround:

Use conventional loading to load the data through SQL*Loader.


Bug 3377188

Dropping an object table fails with ORA-600[ORA-16609: one or more resources have failed] if the text index is present on one of the nested tables of the default table. This can happen during a deleteSchema() call or when dropping a user.

Workaround:

Drop the text index before calling deleteSchema() or dropping a user. Delete any text indexes over XMLType tables, or nested tables associated with XMLType tables or columns before dropping the XML Schema. If a text index exists on an attribute of the nested table, an interaction between Oracle Text and nested tables will prevent the nested table from being dropped when the parent table is dropped. Once this happens, it is impossible to drop the nested table or the schema owner.
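
A minimal sketch, assuming a hypothetical Oracle Text index po_text_idx and XML schema URL:

DROP INDEX po_text_idx;

BEGIN
  DBMS_XMLSCHEMA.DELETESCHEMA(
    schemaurl     => 'http://www.example.com/po.xsd',
    delete_option => DBMS_XMLSCHEMA.DELETE_CASCADE_FORCE);
END;
/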


Bug 3338112

This release of Oracle Database has limited versioning support for schema-based XML. Versioning is supported for schema-based resources only if schema tables under consideration have no triggers, indexes, or constraints.

Workaround:

None.


Bug 3391726

If the database is not restarted between running the catnoqm.sql and catqm.sql scripts, UGA memory problems or hangs may result when trying to reinstall XML DB.

Workaround:

First run the catnoqm.sql script, then restart the database, and finish by running the catqm.sql script.

43.19 Oracle Workflow Known Bugs


Bug 3399034

There is a problem in the PL/SQL Web Toolkit: if the user has installed the Workflow schema and the Workflow password as the same string in the database, the following error is returned when accessing http://hostname:HTTP Server Port number/pls/wf/wfa_html.home after the middle tier is installed.

You don't have permission to access /pls/wf/wfa_html.home on this server.
Workaround:

The user should manually type the Workflow schema name for PlsqlDatabaseUsername in $ORACLE_HOME/Apache/modplsql/conf/dads.conf.


Bug 3399659

During Workflow Middle Tier installation, if the user enters an incorrect Workflow username, password, or database connection information, Workflow installation sets the PlsqlNLSLanguage parameter to ""AMERICAN_AMERICA.AL32UTF8"", which causes the HTTP Server startup to fail.

You don't have permission to access /pls/wf/wfa_html.home on this server.

Workaround:

Remove one set of quotes, so that the value of the PlsqlNLSLanguage parameter is "AMERICAN_AMERICA.AL32UTF8", or set it to the correct database NLS_LANG. Bring up the HTTP Server manually.

Manually type the Workflow schema name for PlsqlDatabaseUsername in $ORACLE_HOME/Apache/modplsql/conf/dads.conf.

43.20 PL/SQL Known Bugs


Bug 3291684

The Oracle Database 10g Compiler implements stricter checks for allowed datatype definitions of the row signature returned by a pipelined table function.

In prior releases, the PL/SQL compiler allowed the definition of the signature of the rows returned by a pipelined table function to include one or more fields of datatype PL/SQL RECORD. Such definitions are no longer allowed in Oracle Database 10g; this means that programs that used this loophole and compiled and ran successfully prior to the current release will no longer compile.

Workaround:

Oracle has implemented a mechanism in Oracle Database 10g, to be used only under guidance from Oracle Support, which will preserve the Oracle9i Database Release 2 loophole behavior. It is turned on in this manner:

alter session set events = '10946 trace name context level 4'

Code that fails to compile in Oracle Database 10g because it uses the loophole behavior must be recompiled after event 10946 is set. Although the environment in which the compiled code runs does not need this event setting under normal runtime conditions, you should set the event to enable successful automatic recompilation in response to invalidation that might be caused by the dependency mechanism. The event may be set at the system level, and therefore may be set through the pfile or spfile.
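
For example, a sketch of setting the event at the system level through the spfile (the setting takes effect at the next restart):

ALTER SYSTEM SET EVENT = '10946 trace name context level 4' SCOPE = SPFILE;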

For further notes and alternative implementations that do not rely on this loophole, please review bug 3291684.

43.21 Pro*C Known Bugs


Bug 1466269

Hints are not precompiled properly when used in an EXEC SQL EXPLAIN PLAN statement. The hints are not carried forward in the Pro*C generated code.

Workaround:

None.


Bug 1323304

When PL/SQL code that contains embedded \n characters is used in Pro*C and the statement is sufficiently long, the statement is split into a call to sqlbuft and is also placed in sqlstm.stmt. This results in inconsistent escaping of \n in sqlbuft, with a runtime error during prepare:

PLS-00103: Encountered the symbol "\" when expecting ...
Workaround:

Edit the generated C file to remove the extra backslashes when the statement is split.


Bug 658837

Even when precompiling with SQLCHECK=FULL, Pro*C does not detect invalid column names in an UPDATE statement when the WHERE CURRENT OF clause is used.

Workaround:

None.

43.22 Pro*COBOL Known Bugs


Bug 1897639

Referring to an implicit VARCHAR within a group item returns a PCB-S-00208 error while precompiling with Pro*Cobol.

Workaround:

Add an EXEC SQL VAR statement before referring to it as a host variable, as follows:

EXEC SQL VAR <varchar group item name> IS VARCHAR(<size>) END-EXEC

Bug 1656765

When the INCLUDE statement is used to copy files that contain IDENTIFICATION, ENVIRONMENT, DATA, and PROCEDURE DIVISIONs into a host program as subprograms, Pro*Cobol fails with a PCB-S-00400 error. This problem happens if .pco files use EXEC SQL INCLUDE statements to include one nested program after another nested program.

Workaround

Put the two files into one include file. If the PROCEDURE DIVISION has only one INCLUDE statement, the Pro*Cobol precompiler succeeds in generating a .cob file. The included .pco file should not contain an IDENTIFICATION DIVISION, or any DIVISION for that matter, but only code (routines).


Bug 1620777

If a host variable is defined by adding the keyword VARYING, and then the datatype is overridden using datatype equivalencing, the length value of the expanded string field is increased by 2.

Workaround

None.


Bug 953338

When a Pro*Cobol program is precompiled with embedded PL/SQL using select /*+ index hint */ statement, PCB-S-00567, PLS-103 errors are returned.

Workaround

None.

43.23 Pro*FORTRAN Known Bugs


Bug 2425918

Use of CHARACTER*(*) as a host variable generates bad code. This is a restriction of PROFOR. CHARACTER*(*) variables have no predetermined length. They are used to specify dummy arguments in a subroutine declaration. The maximum length of an actual argument is returned by the LEN intrinsic function. Although CHARACTER*(*) variables are valid FORTRAN variable names, they are not supported as a host variable.

Workaround

None.

43.24 RMAN Known Bugs


Bug 2656503

RMAN cannot back up transported tablespaces unless they have been made read-write after they have been transported. It is not necessary to leave them in read-write status, but they cannot be backed up by RMAN unless they have been made read-write once, if only briefly, after they have been plugged in.

Workaround:

None.


Bug 3154925

If the DELETE INPUT option is used for an archivelog backup, RMAN can delete logs that have not yet been shipped or applied to a standby database.

Workaround:

Do not use the DELETE INPUT option for logs until you are sure they have been applied to any standby databases.


Bug 2692990

When restoring files at a standby database, RMAN will attempt to use the file names of the primary database for the restored files.

Workaround:

Use SET NEWNAME to restore the files to the correct locations.


Bug 2670671

Lack of statistics in RMAN catalog schema can lead to poor performance.

Workaround:

Occasionally use the ALTER USER ... COMPUTE STATISTICS command to recompute the statistics for the RMAN catalog schema.


Bug 2353334

RMAN backups will fail with internal errors when a non-standard db_block_size, such as 6k, is used, and the tablespace that is being backed up has a standard block size, such as 8k or 16k.

Workaround:

Contact Oracle support for the workaround if you encounter this bug.


Bug 3401014

The restoration of a compressed backup set fails with an ORA-00600 error message when using a different buffer size than the one with which it was created:

ORA-00600 [krbrrd_kgcddo_fails]

This occurs when a compressed backup set is created on disk, migrated to tape using the BACKUP RECOVERY AREA, BACKUP RECOVERY FILES, or BACKUP BACKUPSET commands, and then restored from the tape. The default buffer size for an I/O to device type DISK is 1 MB, while the default buffer size for an I/O to device type SBT is 256 KB.

Workaround:

Configure your tape device block size to 1 MB to successfully restore compressed backup sets from tape. The following parms configuration should be made in addition to any other configurations of SBT channels:

RMAN> configure channel device type sbt parms="BLKSIZE=1048576";

If using manually allocated channels, run this script:

RMAN> run {
2>     allocate channel device type sbt  parms="BLKSIZE=1048576"; 
3>     .... command to execute ...
4>    }

If not using this workaround, you must restore the files using the same buffer size as in the backup:

  • A compressed backup to a recovery area and then to tape uses a default buffer size of 1MB.

  • A compressed backup directly to tape uses the default buffer size of 256K.

43.25 Shared Servers Known Bugs


Bug 3398097

Client connections terminate with ORA-03113 error message:

ORA-03113: end-of-file on communication channel

Trace files in Oracle Net Dispatcher lists these error messages:

NS Primary Error: TNS-12535: TNS:operation timed out
NS Secondary Error: TNS-12606: TNS: Application timeout occurred

Workaround:

When using shared servers, do not specify the SQLNET.INBOUND_CONNECT_TIMEOUT parameter in sqlnet.ora on a database server.

43.26 SQL Execution Known Bugs


Bug 3342089

Attempts to DROP TABLE result in an ORA-04063: table has errors when both of these conditions are true:

  1. The table being dropped has columns that depend on an invalid type, such as a type that was altered or dropped forcibly without upgrading its dependents.

  2. The RecycleBin feature is enabled for UNDROP TABLE support.

Workaround:

There are several workarounds for this problem:

  • Use DROP TABLE ... PURGE instead, if you are certain that you will not need to UNDROP the table at a later time.

  • Use ALTER TABLE ... UPGRADE INCLUDING DATA; DROP TABLE ..., if you will need to UNDROP the table at a later time.

    Alternatively, you can use a temporary table, which allows you to delete the invalid table and retain the ability to UNDROP the valid subset of the original table. This is illustrated in the following script:

    CREATE TABLE TMP AS SELECT * FROM ORIG;
    DROP TABLE ORIG PURGE;
    ALTER TABLE TMP RENAME TO ORIG;
    DROP TABLE ORIG;
    

Bug 3382501

A query selecting from three or more tables with optimizer_mode set to first_rows_xx, where xx is 1, 10, 100, or 1000, may crash the database.

Workaround:

Set optimizer_mode to all_rows, or specify the ALL_ROWS hint in the SQL statement.
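
For example (the session-level setting and the table and column names below are illustrative only):

-- Session-level setting
ALTER SESSION SET optimizer_mode = ALL_ROWS;

-- Statement-level hint; table and column names are illustrative
SELECT /*+ ALL_ROWS */ e.last_name, d.department_name, l.city
  FROM employees e, departments d, locations l
 WHERE e.department_id = d.department_id
   AND d.location_id = l.location_id;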


Bug 3389170

During the execution of DML, DDL, or any statement that requires space allocation, you may see an ORA-00600 [kqlupd2] error message. The trace file for this error shows a recursive SQL call that alters objects with RecycleBin names such as BIN$XXX. This typically happens when the database is attempting to reclaim space from the RecycleBin.

Workaround:

Purge the RecycleBin and reissue the original statement. Note that purged objects cannot be undropped.
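
For example:

PURGE RECYCLEBIN;        -- purges the current user's RecycleBin
PURGE DBA_RECYCLEBIN;    -- purges all RecycleBins (requires administrative privileges)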


Bug 3374078

Dropping a table that was previously dropped and undropped may generate an ORA-00972 error. The table that is undropped has an index that retains its RecycleBin name during the undrop, due to a name conflict with an existing index.

Workaround:

Use DROP TABLE ... PURGE to bypass the RecycleBin. Note that purged objects cannot be undropped.
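
For example (the table name is illustrative only):

-- Bypasses the RecycleBin; the table cannot be undropped afterwards
DROP TABLE hr.employees_copy PURGE;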


Bug 3399017

Reads against a LOB column may unexpectedly generate an ORA-01555: snapshot too old: rollback segment number %s with name "%s" too small error message, even if the tablespace has been sized appropriately. This happens when the database is in automatic undo management mode and the LOB uses time-based retention: either the RETENTION clause was explicitly specified when the LOB was created, or PCTVERSION was not specified.

Workaround:

Alter the LOB to use PCTVERSION explicitly. This switches the LOB to use space-based retention. Note that time-based retention is no longer supported.
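
For example (the table name, column name, and PCTVERSION value are illustrative only):

-- Switch the LOB column to space-based retention
ALTER TABLE print_media MODIFY LOB (ad_photo) (PCTVERSION 10);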


Bug 3392439

Row Access Method problem: DML statements may fail with ORA-00600 [kcbget_24], ORA-00600 [kcbgtcr_5], or ORA-00600 [kcbgcur_3] error messages. DML on an index-organized table (IOT) that has one or more secondary indexes may fail with ORA-00600, fail with an access violation, or modify rows in objects that are not associated with the target IOT.

Workaround:

Set the hidden parameter _db_cache_pre_warm to FALSE in init.ora to disable pre-warming of the buffer cache. Alternatively, use ALTER SYSTEM FLUSH BUFFER_CACHE.
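
For example, the parameter can be set in the initialization parameter file (as with any hidden parameter, change it only after confirming that it applies to your situation):

# init.ora entry to disable buffer cache pre-warming
_db_cache_pre_warm = false

Alternatively, on a running instance:

ALTER SYSTEM FLUSH BUFFER_CACHE;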


Bug 3391317

The automated optimizer statistics collection job, GATHER_STATS_JOB, may fail with an ORA-01555 error if heavy DML activity takes place while the job is running.

Workaround:

Increase the value of the UNDO_RETENTION initialization parameter and the size of the undo tablespace. For details, see either the Oracle Database Administrator's Guide, Chapter 10, "Managing the Undo Tablespace", subsection "Undo Retention" in "Sizing the Undo Tablespace", or Oracle Database Reference, Chapter 1, "UNDO_RETENTION".
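
For example (the retention value, file name, and size are illustrative only):

-- Request approximately one hour of undo retention
ALTER SYSTEM SET UNDO_RETENTION = 3600;

-- Enlarge the undo tablespace datafile
ALTER DATABASE DATAFILE '/u01/oradata/orcl/undotbs01.dbf' RESIZE 2G;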

43.27 SQL*Plus Known Bugs


Bug 3329103

On the iSQL*Plus login page, help for Standard Chinese, Traditional Chinese, and Brazilian Portuguese cannot be invoked, and an error is generated. After login, English help is invoked for these languages.

Workaround:

None.


Bug 3340424

In iSQL*Plus, if the browser language is set to Brazilian Portuguese, the user interface is displayed in English instead of Brazilian Portuguese.

Workaround:

None.

43.28 Summary Management Known Bugs


Bug 3355726

When executing a CREATE MATERIALIZED VIEW statement that uses the REFRESH USING TRUSTED CONSTRAINTS and GROUPING SETS clauses concurrently with other CREATE MATERIALIZED VIEW statements, you may experience ORA-04021 timeout errors:

ORA-04021: timeout occurred while waiting to lock tablename

Workaround:

Re-execute the failed CREATE MATERIALIZED VIEW statement.


Bug 3383591

The Access Advisor can crash with either ORA-07445 or ORA-03113 error messages if it encounters materialized views or dimensions that need to be revalidated:

ORA-07445: exception encountered: core dump [qsmsuniq()+92] [SIGBUS] [Invalid address alignment] [0x52574849]
ORA-03113: end-of-file on communication channel

Running the Access Advisor again will generally solve the problem because it revalidates materialized views and dimensions before it crashes.

You can use the following queries to determine if any materialized views or dimensions need to be revalidated:

SELECT owner, mview_name FROM dba_mviews 
   WHERE compile_state = 'NEEDS_COMPILE';
SELECT owner, dimension_name FROM dba_dimensions 
   WHERE compile_state = 'NEEDS_COMPILE';

Workaround:

You can simply retry running the Access Advisor, or manually revalidate materialized views and dimensions using this script:

ALTER MATERIALIZED VIEW owner.mview_name COMPILE; 
ALTER DIMENSION owner.dimension_name COMPILE; 

43.29 Utilities Known Bugs


Bug 2770880

When a READ_ONLY tablespace is imported into the target database using Data Pump Import, it is created OFFLINE.

Workaround:

The tablespace can be manually set back to READ ONLY after the Data Pump Import completes.
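
For example (the tablespace name is illustrative; because the tablespace is created OFFLINE, it may first need to be brought online):

ALTER TABLESPACE example_ts ONLINE;
ALTER TABLESPACE example_ts READ ONLY;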


Bug 3309770

If an object type in one schema references an object type in another schema, there will be compilation warnings when Data Pump Import creates the first object type.

Workaround:

The object types can be manually revalidated once Data Pump Import completes.
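
For example (the schema and type names are illustrative only):

-- Recompile the dependent type after the import completes
ALTER TYPE hr.address_typ COMPILE;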


Bug 3369197

Errors will result when attempting to perform a network mode Data Pump Import using the NETWORK_LINK parameter if the source database table contains a LONG datatype.

Workaround:

Use a standard dumpfile-based Data Pump Export and Import of the table that contains the LONG datatype.
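
For example (the user, directory object, dump file, and table names are illustrative only):

expdp system DIRECTORY=dpump_dir DUMPFILE=long_tab.dmp TABLES=scott.legacy_docs
impdp system DIRECTORY=dpump_dir DUMPFILE=long_tab.dmp TABLES=scott.legacy_docs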


Bug 3392310

Classic Export (exp) and Data Pump Export (expdp) fail with an ORA-29341: The transportable set is not self-contained error message when run in transportable tablespace mode.

Workaround:

Use EXCLUDE=CONSTRAINT with Data Pump Export and CONSTRAINTS=N with classic exp.
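
For example (the user, tablespace, directory object, and file names are illustrative only):

expdp system DIRECTORY=dpump_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=example_ts EXCLUDE=CONSTRAINT
exp system FILE=tts.dmp TRANSPORT_TABLESPACE=y TABLESPACES=example_ts CONSTRAINTS=n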


Bug 3217435

A nested table is missing its SYS_C* segments after a transportable-mode Data Pump import.

Workaround:

Use classic exp instead of expdp.

44 Documentation Accessibility

Our goal is to make Oracle products, services, and supporting documentation accessible, with good usability, to the disabled community. To that end, our documentation includes features that make information available to users of assistive technology. This documentation is available in HTML format, and contains markup to facilitate access by the disabled community. Standards will continue to evolve over time, and Oracle is actively engaged with other market-leading technology vendors to address technical obstacles so that our documentation can be accessible to all of our customers. For additional information, visit the Oracle Accessibility Program Web site at

http://www.oracle.com/accessibility/

Accessibility of Code Examples in Documentation

JAWS, a Windows screen reader, may not always correctly read the code examples in this document. The conventions for writing code require that closing braces should appear on an otherwise empty line; however, JAWS may not always read a line of text that consists solely of a bracket or brace.

Accessibility of Links to External Web Sites in Documentation

This documentation may contain links to Web sites of other companies or organizations that Oracle does not own or control. Oracle neither evaluates nor makes any representations regarding the accessibility of these Web sites.

45 Legal Notices

License Restrictions & Warranty Disclaimer

The Programs (which include both the software and documentation) contain proprietary information; they are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright, patent, and other intellectual and industrial property laws. Reverse engineering, disassembly, or decompilation of the Programs, except to the extent required to obtain interoperability with other independently created software or as specified by law, is prohibited.

The information contained in this document is subject to change without notice. If you find any problems in the documentation, please report them to us in writing. This document is not warranted to be error-free. Except as may be expressly permitted in your license agreement for these Programs, no part of these Programs may be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose.

Restricted Rights Notice

If the Programs are delivered to the United States Government or anyone licensing or using the Programs on behalf of the United States Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the Programs, including documentation and technical data, shall be subject to the licensing restrictions set forth in the applicable Oracle license agreement, and, to the extent applicable, the additional rights set forth in FAR 52.227-19, Commercial Computer Software--Restricted Rights (June 1987). Oracle Corporation, 500 Oracle Parkway, Redwood City, CA 94065.

Hazardous Applications Notice

The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently dangerous applications. It shall be the licensee's responsibility to take all appropriate fail-safe, backup, redundancy and other measures to ensure the safe use of such applications if the Programs are used for such purposes, and we disclaim liability for any damages caused by such use of the Programs.

Third Party Web Sites, Content, Products, and Services Disclaimer

The Programs may provide links to Web sites and access to content, products, and services from third parties. Oracle is not responsible for the availability of, or any content provided on, third-party Web sites. You bear all risks associated with the use of such content. If you choose to purchase any products or services from a third party, the relationship is directly between you and the third party. Oracle is not responsible for: (a) the quality of third-party products or services; or (b) fulfilling any of the terms of the agreement with the third party, including delivery of products or services and warranty obligations related to purchased products or services. Oracle is not responsible for any loss or damage of any sort that you may incur from dealing with any third party.