Oracle® Fusion Middleware User's Guide for Technology Adapters 11g Release 1 (11.1.1) Part Number E10231-01
This appendix describes Oracle BPEL Process Manager and Oracle Mediator troubleshooting methods.
This appendix includes the following sections:
The following sections describe possible issues and solutions when using the Oracle JCA Adapter for Database (Oracle Database Adapter).
This section includes the following issues:
TIMESTAMP Data Type Is Not Supported for a Microsoft SQL Server Database
Switching from a Development Database to a Production Database
Outbound SELECT on a CHAR(X) or NCHAR Column Returns No Rows
MERGE Sometimes Does UPDATE Instead of INSERT, or Vice Versa
Integrity Violation Occurs with Delete or DeletePollingStrategy
Some Queried Rows Appear Twice or Not at All in the Query Result
Importing a Same-Named Table, with Same Schema Name, but Different Databases
Must Fully Specify Relationships Involving Composite Primary Keys
Oracle Database Adapter Throws an Exception When Using a BFILE
Relationships Not Autogenerated When Tables Are Imported Separately
Update Only Sometimes Performs Inserts/Deletes for Child Records
Issue When Design-Time And Run-Time Connection Users Are Different
Problem
At run time, you may see the "Could not create the TopLink session" exception.
Solution
This common error occurs when the run-time connection is not configured properly. For more information, see Deployment.
Problem
You may see the "Could not find adapter for eis/DB/my_connection/...." exception.
Solution
For more information, see Deployment.
Problem
Changes to Customers_table.xsd are not reflected, or you get an exception.
Solution
You cannot specify the XSD format that the Oracle Database Adapter produces.
Problem
After clicking Finish, or at deployment, you may see the following exception:
Caused by Exception [TOPLINK-0] (OracleAS TopLink - 10g (9.0.4.4) (Build 040705)): oracle.toplink.exceptions.IntegrityException Descriptor Exceptions: ---------------------- Exception [TOPLINK-64] (OracleAS TopLink - 10g (9.0.4.4) (Build 040705)): oracle.toplink.exceptions.DescriptorException Exception Description: No target foreign keys have been specified for this mapping. Mapping: oracle.toplink.mappings.OneToManyMapping[phonesCollection] Descriptor: Descriptor(Test.Customers --> [DatabaseTable(CUSTOMERS)])
This generally means that there was a problem in the wizard.
Solution
The simplest solution is to create all constraints on the database first. Also, depending on the problem, you may only need to fix something in the offline tables and then run the wizard again.
Problem
After clicking Finish, or at deployment, you may see the following exception:
Caused by Exception [TOPLINK-0] (OracleAS TopLink - 10g (9.0.4.4) (Build 040705)): oracle.toplink.exceptions.IntegrityException Descriptor Exceptions: ---------------------- Exception [TOPLINK-46] (OracleAS TopLink - 10g (9.0.4.4) (Build 040705)): oracle.toplink.exceptions.DescriptorException Exception Description: There should be one non-read-only mapping defined for the primary key field [PHONES.ID]. Descriptor: Descriptor(Test.Phones --> [DatabaseTable(PHONES)])
This probably means that no primary key was defined for PHONES.
Solution
If this exception appears in conjunction with the No Target Foreign Keys error, then see No Target Foreign Keys Error and resolve that problem first. Otherwise, do the following:
Add the property usesStringBinding=true to the weblogic-ra.xml file, and also declare it in the ra.xml file. For BLOBs, you must instead set the properties usesStreamsForBinding=true and usesByteArrayBinding=true.
Problem
You may get a conversion exception when you pass in an xs:dateTime value to the Oracle Database Adapter.
Solution
If an attribute is of type xs:dateTime, then the Oracle Database Adapter expects a string in one of the following formats:
1999-12-25T07:05:23-8:00
1999-12-25T07:05:23.000-8:00
1999-12-25T15:05:23.000Z
1999-12-25T15:05:23
The format 1999-12-25 is accepted, although it is not a valid xs:dateTime value. The xs:dateTime format is yyyy-MM-ddTHH:mm:ss.SSSZ, where:
yyyy is the year (2005, for example)
MM is the month (01 through 12)
dd is the day (01 through 31)
HH is the hour (00 through 23)
mm is the minute (00 through 59)
ss is the second (00 through 59)
SSS is the milliseconds (000 through 999), optional
Z is the time zone designator (+hh:mm or -hh:mm), optional
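As an illustration only (this is a hypothetical snippet, not adapter code), the following Java sketch renders a date-time value in the pattern described above using the standard java.time API. Note that standard formatters emit two-digit zone offsets (for example, -08:00).

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class XsDateTimeExample {
    // xs:dateTime pattern: yyyy-MM-ddTHH:mm:ss.SSS plus a zone designator
    static final DateTimeFormatter XS_DATE_TIME =
            DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSXXX");

    static String toXsDateTime(OffsetDateTime value) {
        return value.format(XS_DATE_TIME);
    }

    public static void main(String[] args) {
        // 7:05:23 AM on December 25, 1999, in a UTC-8 time zone
        OffsetDateTime ts = OffsetDateTime.of(1999, 12, 25, 7, 5, 23, 0,
                ZoneOffset.ofHours(-8));
        System.out.println(toXsDateTime(ts)); // 1999-12-25T07:05:23.000-08:00
    }
}
```

A string produced this way conforms to the format the adapter expects for xs:dateTime attributes.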
A DATE column on an Oracle database can accept the 25-DEC-1999 date format. However, this is not a date format that the Oracle Database Adapter can accept. The following workaround applies to TopLink only.
If you want to pass in the 25-DEC-1999 date format, then map the attribute as a plain string; the Oracle Database Adapter passes the value through as a plain string.
To do this, you must edit the offline database table and change the column data type from DATE to VARCHAR2.
Save.
Edit the database partner link.
Click Next to the end in the wizard, and then click Finish and Close.
While not a valid xs:dateTime format, the format yyyy-mm-dd is a valid xs:date format.
Problem
The time portion of DATE fields may be truncated on Oracle9 or later platforms when using oracle.toplink.internal.databaseaccess.DatabasePlatform. For example, 2005-04-28 16:21:56 becomes 2005-04-28T00:00:00.000+08:00.
Or, the millisecond portion of DATE fields may be truncated on Oracle9 or later platforms when using oracle.toplink.internal.databaseaccess.Oracle9Platform. For example, 2005-04-28 16:21:56.789 becomes 2005-04-28T16:21:56.000+08:00.
Or, you may have trouble with TIMESTAMPTZ (time stamp with time zone) or TIMESTAMPLTZ (time stamp with local time zone).
Solution
You must set the platformClassName parameter for Oracle platforms, because these include special workarounds for working with date-time values on Oracle. So, if you are connecting to an Oracle9 platform, you must set the platformClassName parameter accordingly.
Due to an issue with the time portion of DATE being truncated with Oracle9 JDBC drivers, the property oracle.jdbc.V8Compatible was set when using any Oracle platform class name. Therefore, use oracle.toplink.internal.databaseaccess.Oracle9Platform to solve the time truncation problem.
However, starting with Oracle9, dates include millisecond precision. Setting oracle.jdbc.V8Compatible in response had the drawback of returning the milliseconds as 000, as an Oracle8 database did. (This also introduced an issue with null IN/OUT DATE parameters for stored procedure support.) You do not see any truncation (of the time portion or milliseconds) when using the Oracle9Platform class.
You must also use the Oracle9Platform class if you have TIMESTAMPTZ and TIMESTAMPLTZ columns.
If you want DATE to be treated as a date (with no time portion), set the attribute-classification in toplink_mappings.xml to java.sql.Date.
In general, if you have an issue with a particular database, check whether TopLink has a custom platformClassName value for that database and whether you are using it.
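For reference, a platformClassName setting appears as a config-property on the adapter's connector factory in oc4j-ra.xml. The following is a sketch only; the location value is a placeholder for your own connection, and the full set of properties depends on your configuration:

```xml
<connector-factory location="eis/DB/myConnection" connector-name="Database Adapter">
   <config-property name="platformClassName"
                    value="oracle.toplink.internal.databaseaccess.Oracle9Platform"/>
   <!-- other config-property elements (user name, connection string, and so on) omitted -->
</connector-factory>
```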
For more information, see Deployment.
Because the TIMESTAMP data type is not supported, the best approach is to unmap a TIMESTAMP column. Note the following in support of unmapping TIMESTAMP:
TIMESTAMP values can never be used as the primary key.
Oracle JDeveloper offline tables interpret TIMESTAMP as a dateTime type, although it is actually a binary value; therefore, you must change the type anyway.
As a binary value, TIMESTAMP has no meaning or use after it is converted to XML and base64 encoded.
TIMESTAMP values cannot be modified; therefore, at a minimum, you must mark them read-only.
Note that TIMESTAMP is similar to the pseudocolumn ROWID, which is technically a column but is never mapped by default by the Oracle Database Adapter.
To understand how to handle faults, such as a unique constraint violation on insert or when a database or network is temporarily unavailable, see the InsertWithCatch tutorial at Oracle_Home\bpel\samples\tutorials\122.DBAdapter.
Problem
A BPEL process modeled against one database does not run against another database.
The most likely cause for this problem is that you are using a different schema in the second database. For example, if you run the wizard and import the table SCOTT.EMPLOYEE, then, in the toplink_mappings.xml file, you see SCOTT.EMPLOYEE. If you run the sample in the USER schema on another database, you get a "table not found" exception.
Solution
Until qualifying all table names with the schema name is made optional, manually edit toplink_mappings.xml and replace SCOTT. with nothing, as shown in the following example.
Change:
<project>
<project-name>toplink_mappings</project-name>
<descriptors>
<descriptor>
<java-class>BPELProcess1.A</java-class>
<tables>
<table>SCOTT.A</table>
</tables>
To:
<project>
<project-name>toplink_mappings</project-name>
<descriptors>
<descriptor>
<java-class>BPELProcess1.A</java-class>
<tables>
<table>A</table>
</tables>
You must repeat this step every time after running the Adapter Configuration Wizard.
Note:
Having EMPLOYEE on both the SCOTT and USER schemas, and querying against the wrong table, can result in a problem that is difficult to detect. For this reason, the Oracle Database Adapter qualifies the table name with the schema name. For more information, see Deployment.
Problem
Many departments with many employees are read in, but only one employee per department appears.
Solution
You must use a transform with a for-each statement. An Assign activity with an XPath query can result in only the first employee being copied over.
For an example of how to use a transform for database adapter outputs, go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter\MasterDetail
Problem
If you use an outbound SELECT to find all employees where firstName = some_parameter, then you have a problem if firstName on the database is a CHAR column, as opposed to a VARCHAR2 column.
It is a known problem with some databases that if you insert a CHAR value (for example, 'Jane') into a CHAR(8) field, then the database pads the value with extra spaces (for example, 'Jane    ').
If you then run the query
SELECT ... WHERE firstName = 'Jane';
no rows may be returned. Although you are querying for the same value that you inserted, and some tools such as SQL*Plus and SQL Worksheet operate as expected, the query does not work with the Oracle Database Adapter.
Solution
The best practice is to use a CHAR column for fields that have a fixed length, such as SSN, and a VARCHAR2 column for fields that can take a variable length, such as firstName.
Transforming the value to add padding may be difficult, and using SELECT to trim the value on the database (as opposed to padding the other side) requires using SQL statements. For example:
SELECT ... WHERE trim(firstName) = #firstName;
Note that the number sign (#) is a TopLink convention for denoting input parameters.
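To see why the unpadded comparison fails, consider this small standalone Java sketch (hypothetical, not adapter code) that simulates how a CHAR(8) column pads a stored value with trailing spaces:

```java
public class CharPaddingExample {
    public static void main(String[] args) {
        // Simulate a CHAR(8) column padding 'Jane' to eight characters on insert.
        String stored = String.format("%-8s", "Jane");   // "Jane    "
        String queryParam = "Jane";

        // Plain equality fails: the stored value carries trailing spaces.
        System.out.println(stored.equals(queryParam));        // false
        // Trimming the stored value restores the match, as in the trim() query above.
        System.out.println(stored.trim().equals(queryParam)); // true
    }
}
```

This mirrors the database behavior: the query parameter 'Jane' never equals the padded 'Jane    ' unless one side is trimmed or padded.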
Problem
When querying on table A, which has a one-to-one relationship to B, where B contains a CLOB, you may see the following exception:
Exception Description: java.sql.SQLException: ORA-00932: inconsistent datatypes: expected - got CLOB
Solution
A SELECT query returning CLOB values must not use the DISTINCT clause. You can avoid DISTINCT by disabling batch attribute reading from A to B. Batch reading is a performance enhancement that attempts to simultaneously read all Bs of all previously queried As; this query uses a DISTINCT clause. Use joined reading instead, or neither joined reading nor batch attribute reading.
Because both DISTINCT and CLOBs are common, you may see this problem in other scenarios. For example, an expression like the following uses a DISTINCT clause:
SELECT DISTINCT dept.* from Department dept, Employee emp WHERE ((dept.ID = emp.DEPTNO) and (emp.name = 'Bob Smith'));
Problem
When inserting large objects (LOBs), you may get an exception such as the following:
java.sql.SQLException: setString can only process strings of less than 32766 characters Error Code: 17157
Solution
Check the platformClassName property in the oc4j-ra.xml file. For an Oracle database, set the property to Oracle8Platform (for Oracle8) or Oracle9Platform (for Oracle9i and Oracle10g). See Table 9-9, "Database Platform Names" for a list of platformClassName properties for Oracle and third-party databases.
For more information, see "How-To: Map Large Objects (LOBs) to Oracle Databases with OracleAS TopLink" at
http://www.oracle.com/technology/products/ias/toplink/technical/tips/lob/index.html
If you are using Oracle Database 10g and having difficulties with CLOBs, then configure the Oracle Database Adapter to use a data source, and add <property name="SetBigStringTryClob" value="true"/> to the <data-source> element in the OC4J data-sources.xml file.
Also, see "Handling CLOBs - Made Easy with Oracle JDBC 10g" at
http://www.oracle.com/technology/sample_code/tech/java/codesnippet/jdbc/clob10g/handlingclobsinoraclejdbc10g.html
Problem
You may sometimes notice that MERGE performs an UPDATE query when it should do an INSERT, or vice versa.
Solution
MERGE works by first determining, for each element in the XML, whether the corresponding database row exists. For each row, it does an existence check. There are two known limitations with the existence check.
First, you can configure the existence check to either Check cache or Check database. You can configure this for each descriptor (mapped table) in your Mapping Workbench project. The default is Check database, but for performance reasons TopLink's Check database option checks the cache first and then the database. If a row exists in the cache but was deleted from the database (that is, the cache is stale), then you may see an UPDATE when you expect an INSERT. Caching is configurable, and a WeakIdentityMap is used by default, meaning rows are held in memory only while being processed. However, Java garbage collection is not controlled by the adapter. Therefore, if you insert a row, delete it in a separate process, and insert it again, all within a very short time, you may see an INSERT and then an UPDATE. One solution is to use NoIdentityMap. However, performance may suffer, and if you use SELECT statements on a mapped schema with complex cycles (which you should avoid), then the adapter can be trapped in an endless loop when building XML.
Second, there is a timing issue between the initial read and the later INSERT or UPDATE. If the same row is simultaneously inserted by multiple invokes, then each may do an existence check that returns false, and then all attempt an INSERT. This may not seem realistic, but the following scenario did come up:
A polling receive reads 100 employee rows and their departments from database A. With maxRaiseSize set to 1, 100 business process instances were initiated. This led to 100 simultaneous invokes to database B, one for each employee row. No problems were encountered when existence checking on employee, but some employees had the same department. Hence, many of the 100 invokes failed because the existence checks on department were more or less simultaneous.
There are two solutions to this problem. The first is to avoid it: in a data synchronization-style application, setting maxRaiseSize to unlimited boosts performance and eliminates the problem. The second is to retry the MERGE in your BPEL process. Optimistic lock and concurrency exceptions are common, and the best solution is usually to wait and try again a short time later.
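The wait-and-retry approach can be sketched as follows. This is a hypothetical outline in plain Java (in practice you would model the wait and retry in the BPEL process itself), with the failing operation standing in for the MERGE invoke:

```java
import java.util.concurrent.Callable;

public class RetryExample {
    // Retries an operation a fixed number of times, waiting between attempts.
    // The operation and the caught exception type are placeholders for a MERGE
    // invoke that may fail with an optimistic lock or concurrency exception.
    static <T> T retry(Callable<T> operation, int maxAttempts, long waitMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.call();
            } catch (Exception e) {
                last = e;                  // remember the failure
                Thread.sleep(waitMillis);  // wait briefly before trying again
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulated MERGE that fails twice (concurrent insert) and then succeeds.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("concurrent insert");
            return "merged";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The simulated operation succeeds on the third attempt, which is the typical outcome for transient concurrency failures.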
Problem
Using the MERGE invoke operation, 100 rows are passed to the Oracle Database Adapter. However, not all the rows are inserted into the database table as expected.
For more information about this problem, see "MERGE Sometimes Does UPDATE Instead of INSERT, or Vice Versa". A flaw in MERGE existence checking allows cases where a row is not inserted when it should be, thus appearing as message loss.
Solution
Use NoIdentityMap instead of WeakIdentityMap.
In JDeveloper BPEL Designer, open your project.
In the Applications - Navigator, double-click the TopLink Mappings node under Application Sources for your project.
The TopLink project appears in the TopLink Mappings - Structure pane.
Click each descriptor in the TopLink Mappings - Structure pane so that it appears in the main window.
Along the top of the main window, you will see the following tabs: Descriptor Info, Queries, Query Keys, and Identity.
Click the Identity tab.
From the Identity Map list, select NoIdentityMap.
Set Existence Checking to Check database (the default).
The value Check Cache becomes illegal when no caching is used.
From File, select Save All.
Run the Adapter Configuration Wizard again in edit mode to regenerate toplink_mappings.xml.
(Optional) Verify that the solution worked by closing and then reopening toplink_mappings.xml. You will see that NoIdentityMap globally replaced WeakIdentityMap.
Redeploy the process.
Problem
Child records cause an integrity violation with DeletePollingStrategy.
When deleting rows, you must be aware of integrity constraints. For example, if DEPARTMENT has a one-to-many relationship to EMPLOYEE, then DEPTID is a foreign key in EMPLOYEE. If you delete a DEPARTMENT record but not its employees, then DEPTID becomes a broken link, and this can trigger an integrity constraint violation.
This problem occurs because you imported a table by itself and did not import its related tables. For example, if you import only the DEPARTMENT table from the database and not the EMPLOYEE table, which has an integrity constraint on column DEPTID, then the Oracle Database Adapter does not know about EMPLOYEE and cannot delete a record from DEPARTMENT. You receive an exception.
Solution
Ensure that you import the master table and all its privately owned relationships. Alternatively, set the constraint on the database to CASCADE for deletions, or use a nondelete polling strategy.
Ensure that the one-to-many relationship between DEPARTMENT and EMPLOYEE is configured to be privately owned. It is by default, but if the above fails, check the run-time X-R mappings file. For more information, see Relational-to-XML Mapping.
If the problem is not this simple, note that TopLink supports shallow, two-phase inserts (but does not support this for DELETE). For example, if A has a foreign key pointing to B, and B has a foreign key pointing to A, then there is no satisfactory order by which you can delete both A and B. If you delete A first, then you orphan B. If you delete B first, then you orphan A. The safest DELETE is a two-phase DELETE that performs an UPDATE first, as follows:
UPDATE B set A_FK = null;
DELETE from A;
DELETE from B;
Problem
When you run a query, you may get the correct number of rows, but some rows appear multiple times and others do not appear at all.
This behavior is typically because the primary key is configured incorrectly. If the Oracle Database Adapter reads two different rows that it thinks are the same (for example, the same primary key), then it writes both rows into the same instance and the first row's values are overwritten by the second row's values.
Solution
Open Application Sources > TopLink > TopLink Mappings. In the Structure window, double-click PHONES. On the first page, you should see Primary Keys. Make sure that the correct columns are selected to make a unique constraint.
Save and then edit the database partner link.
Click Next to the end, and then click Finish and Close.
Open your toplink_mappings.xml file. For the PHONES descriptor, you should see something like this:
<primary-key-fields>
   <field>PHONES.ID1</field>
   <field>PHONES.ID2</field>
</primary-key-fields>
Problem
Importing a table from a database on one host and also importing a table with the same name, and the same schema name, from a database on another host raises an error.
Solution
Create one project against database #1 and model the adapter service. Next, create a second project against database #2 and model the adapter service. (Because the databases are on different hosts, you use different database connections.) Then, create a third project, but do not run the Adapter Configuration Wizard. Instead, copy the BPEL artifacts (WSDL, XSD, and toplink_mappings.xml) from projects one and two. Deploy only the third project.
If the two tables are identical, or if the data you are interested in is identical, then you need not follow the preceding procedure.
Problem
In the Relationship window of the Adapter Configuration Wizard, all elements of the primary key appear and cannot be removed. Therefore, a foreign key referring to only part of the composite primary key cannot be created.
Solution
Because foreign key constraints must map to every part of the primary key (not a subset), there is no solution. The Oracle Database Adapter allows a foreign key only with a corresponding primary key at the other end.
The wizard does not let you create an ambiguous relationship. For example, assume that PurchaseOrder has a 1-1 billTo relationship to Contact. For uniqueness, the primary key of Contact is name and province. This means PurchaseOrder must have two foreign keys (bill_to_name and bill_to_province). If there is only one foreign key (bill_to_name), then the wizard does not allow you to create that ambiguous 1-1 relationship. Otherwise, the same purchase order could be billed to multiple people.
The BFILE, USER DEFINED, OBJECT, STRUCT, VARRAY, and REF types are not supported.
Problem
If tables are imported one at a time, relationships are not generated even if foreign key constraints exist on the database.
Solution
Relationship mappings can be autogenerated only if all the related tables are imported in one batch. When importing tables, you can select multiple tables to be imported as a group. If you have related tables, then they should all be imported at the same time.
Problem
If you try to create a relationship that has the same name as the primary key field name, then you encounter a problem in which the PK field becomes unmapped.
Solution
To add the PK mapping back manually, follow these instructions:
Open the Java source for the descriptor to which you want to add the mapping (for example, Movies.java).
Add a new Java attribute appropriate to the field to which you are mapping. For example, if the PK of the Movies table is a VARCHAR field named TITLE, then create a new attribute: private String title;
Save the Java file.
Click the TopLink Mappings node in the Applications - Navigator pane; then choose the Descriptor from the TopLink Mappings - Structure pane. You see the newly created attribute in the Descriptor as unmapped (in this example, title).
Right-click the new attribute and select Map As > Direct To Field.
Double-click the new attribute. The TopLink Mappings editor should appear in the main JDeveloper window. Change the database field to match the PK field on the database (in this example, TITLE).
Click the Descriptor in the TopLink Mappings - Structure pane. Ensure that the PK field has a check box next to it in the Primary Keys list.
Run the Adapter Configuration Wizard again and continue with the rest of the wizard.
Problem
If you import a database table that contains a column whose name is a Java keyword, you receive the following error message:
The following error occurred: null
Solution
Perform the following in JDeveloper BPEL Designer:
Click OK in the error dialog.
Click Cancel in the Adapter Configuration Wizard.
Click Cancel in the Create Partner Link dialog.
Open the .java file that was generated during the failed import. In the Application Navigator, click Applications, then WorkspaceName, then ProcessName, then Application Sources, then ProcessName, and then TableName.java.
Rename any Java fields that have errors. For example, you may see
private String class;
If you have syntax error highlighting turned on, this line will be underlined in red, indicating that there is a Java error. Change the line to
private String myClass;
(Or use some other nonreserved word.)
Delete all the methods from the Java class. This step is normally handled automatically by the database adapter but must be done manually here because of the error encountered during import. After you delete the methods, your class looks something like this:
package MyBPELProcess;

public class MyDatabaseTable {
    private String fieldOne;
    private String fieldTwo;
    ...
    private String fieldN;
}
Remap the field that you renamed in Step 5. Click the Mapping tab at the bottom of the Java Class editor. The Structure pane updates to show the Java class attributes. Right-click the field that you renamed in Step 5 (unlike the other fields, it has a single dot icon indicating that it is unmapped) and select Map As -> Direct To Field. In the main editor window, select the database field that this Java field maps to (the field has the same name as the attribute did before you renamed it). Then, close the Java Class editor.
From File, select Save All.
Rerun the Adapter Configuration Wizard. When you get to the Select Table page, your database table will already be in the list. Select it and continue with the wizard. Do not import the table again.
Problem
Extra steps are needed to add fault handling to a BPEL process.
Solution
The steps for catching a database exception are provided in the 122.DBAdapter tutorial. Go to
Oracle_Home\bpel\samples\tutorials\122.DBAdapter\InsertWithCatch
See the readme.txt files in both the Insert and InsertWithCatch directories.
The readme.txt file for the InsertWithCatch tutorial describes two kinds of faults: binding faults (for example, inserting a row that already exists) and remote faults (for example, inserting a row when the network or database is unavailable). It also provides steps for catching an exception and a list of common Oracle database error codes.
See Oracle Database Error Messages for a complete list of error codes.
Problem
When using Oracle Lite, you may see the following error:
java.sql.SQLException: [POL-3261] there are too many transactions
This means that the maximum number of connections from the database has been exceeded.
Oracle BPEL Server uses a data source called BPELServerDataSource, which is configured with a large connection pool size. The connections may all be allocated to the BPEL engine, thus leaving no connections available for the database adapter.
Solutions
Try the following solutions, in order of preference.
Solution 1
Use an Oracle database instead of Oracle Lite. This error occurs with Oracle Lite only.
Solution 2
Try reducing the values for max-connections and, in particular, for min-connections. You may need to experiment to find values that work in your environment. Start with a value of 5 for both max-connections and min-connections and see if the performance is acceptable. You must use higher values for a production database.
To set the values for max-connections and min-connections:
Open data-sources.xml, located in Oracle_Home\bpel\appserver\oc4j\j2ee\home\config
In the Oracle Lite data sources section, set max-connections and min-connections:
<!-- Use these datasources to connect to Oracle Lite -->
<data-source class="com.evermind.sql.DriverManagerDataSource"
    name="BPELServerDataSource"
    location="loc/BPELServerDataSource"
    xa-location="BPELServerDataSource"
    ejb-location="jdbc/BPELServerDataSource"
    connection-driver="oracle.lite.poljdbc.POLJDBCDriver"
    username="system"
    password="any"
    max-connections="5"
    min-connections="5"
    connection-retry-interval="30"
    max-connect-attempts="10"
    url="jdbc:polite4@127.0.0.1:100:orabpel"/>
Solution 3
Reduce the number of applications that access Oracle Lite. Multiple concurrent BPEL processes are one application but can use all the connections.
Ensure that the oc4j-ra.xml file uses the correct value for platformClassName. See Table 9-9, "Database Platform Names" for the values for various databases. If the database you are using is listed in the table, use the value shown. For example, use DB2Platform, not DatabasePlatform, if you are using a DB2 database.
The Update Only operation in the wizard sometimes performs inserts or deletes of child records.
The Insert Only and Update Only operations differ in how they recursively insert or update child records. The Insert Only operation inserts the top-level record and all related child records; it assumes that the top-level record and all related records are new. The Update Only operation is guaranteed to perform only an update of the top-level records; it then checks whether to insert, update, or delete the child records, making an assumption only about the existence of the top-level element. The Merge operation makes no assumptions about the existence of either top-level or child records.
Consider this example: there are two users (table1_owner and table1_user) and one table (table1) in the database.
If the design-time connection user (in Oracle JDeveloper) is table1_owner and the run-time connection user (JDBC data source) is table1_user, then a run-time exception, stating that the table or view does not exist, is thrown.
If you check the generated SQL file, you will notice the following:
If table1_owner is used at design time, then table1 is referred to as table1.
If table1_user is used at design time, then table1 is referred to as table1_owner.table1.
The workaround for this issue is to configure the tableQualifier property in weblogic-ra.xml or ra.xml to clarify ambiguous names at run time.
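As a sketch of the workaround, a tableQualifier property in weblogic-ra.xml would look something like the following. The element names follow the WebLogic JCA descriptor conventions, and the JNDI name and qualifier value are placeholders matching the example above; your own configuration will differ:

```xml
<wls:connection-instance>
   <wls:jndi-name>eis/DB/my_connection</wls:jndi-name>
   <wls:connection-properties>
      <wls:properties>
         <wls:property>
            <wls:name>tableQualifier</wls:name>
            <wls:value>table1_owner</wls:value>
         </wls:property>
      </wls:properties>
   </wls:connection-properties>
</wls:connection-instance>
```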
The following sections describe possible issues and solutions when using the Oracle Database Adapter for stored procedures:
Design Time: Referencing User-Defined Types in Other Schemas
Configuring Multiple Adapters in the Inbound Direction using Correlation Sets
Problem
Using an unsupported or undefined parameter type in the chosen API is a common problem. Consider the following procedure:
PROCEDURE PROC (O OBJ) AS BEGIN … END;
In this example, OBJ refers to a type that is undefined.
After you click Finish on the final page of the wizard, an attempt to generate the XSD file is made, which produces the following error message:
The message indicates that the named parameter, O, which is of type OBJ, is either not defined or is otherwise inaccessible.
To generate XSD for APIs containing parameters whose types are user-defined, those types must first be defined in the database and be accessible through the associated service connection. This error also occurs if the adapter does not support a data type; that is, no type mapping exists for the data type and no type conversion has been implemented.
Solution
Ensure that only supported data types are used as parameter types in the chosen API. If the types are user-defined, ensure that the types are defined in the database and that the database is accessible when the attempt to generate the XSD is made.
Problem
When the type of one or more of the parameters in the chosen API is a user-defined type that belongs to a different schema, a design-time problem can occur.
Assume type OBJ is created in SCHEMA2, as in
CREATE TYPE OBJ AS OBJECT (X NUMBER, Y VARCHAR2 (10));
And, a procedure is created in SCHEMA1 that has a parameter whose type is SCHEMA2.OBJ, as in
CREATE PROCEDURE PROC (O SCHEMA2.OBJ) AS BEGIN … END;
If permission to access type OBJ in SCHEMA2 has not been granted to SCHEMA1, then a PL/SQL error occurs when the attempt to create the stored procedure in SCHEMA1 is made, as shown in the following example:
PLS-00201: identifier SCHEMA2.OBJ must be declared
Solution
SCHEMA2 must grant permission to SCHEMA1 so that SCHEMA1 can refer to type OBJ from SCHEMA2, as in
SQL> GRANT EXECUTE ON OBJ TO SCHEMA1;
See Referencing Types in Other Schemas for more information.
Problem
A mismatch between the formal parameters provided by the instance XML and the actual parameters that are defined in the signature of the stored procedure is a common run-time problem. When this type of error occurs, the invoke activity that tried to execute the stored procedure fails due to "wrong number or types of arguments" passed into the API. Possible causes for this problem include:
An element corresponding to one of the required parameters was not provided in the instance XML file.
Solution: Add the necessary element to resolve the issue.
More elements than were specified in the XSD were included in the instance XML file.
Solution: Remove the extra elements from the XML file.
The XSD file does not accurately describe the signature of the stored procedure. For example, if the type of one of the parameters were to change and the XSD was not regenerated to reflect that change, then a type mismatch can occur between the db:type
of the element and the new type of the modified parameter.
Solution: Ensure that the parameters match the signature of the API.
Figure B-1 Example of a Faulted Invoke Due to Mismatched Parameters
Problem
A failure can also occur if the stored procedure is not defined in the database when an attempt to execute it is made. For example, if an ADDEMPLOYEES
stored procedure is invoked, but is not defined in the database, then the invoke activity will fail.
Figure B-2 Example of a Faulted Stored Procedure
An error such as "... identifier ADDEMPLOYEES must be declared" will occur, which is an indication that the stored procedure may not be defined in the database. This can happen, for example, if the procedure was dropped some time between when the process was deployed and when the procedure was invoked. This can also occur if the required privileges to execute the stored procedure have not been granted.
Solution
Ensure that the API is defined in the database and that the appropriate privileges to execute that procedure have been granted.
Some run-time errors can occur if the instance XML does not conform to the XSD that was generated for the chosen API.
Ensure that each value in the instance XML file conforms to the definition of its corresponding element in the InputParameters root element of the generated XSD.
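For example, assuming a hypothetical procedure with two parameters, EMPNO (NUMBER) and ENAME (VARCHAR2), a conforming instance would look like the following. The element names and namespace shown here are illustrative; the actual names and db:type values come from the generated XSD:

```xml
<InputParameters xmlns="http://xmlns.oracle.com/pcbpel/adapter/db/SCHEMA1/ADDEMPLOYEE/">
  <!-- one element per formal parameter, matching its name and db:type -->
  <EMPNO>100</EMPNO>
  <ENAME>Smith</ENAME>
</InputParameters>
```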
Problem
When multiple adapter-based receive activities in the inbound direction use correlation sets in a process, the wrong property alias query is evaluated and the process fails at run time with the error:
Failed to evaluate correlation query
Workaround
As a workaround, ensure that the port type and operation values are unique between the two adapter WSDL files. For example, ensure that each adapter WSDL file has a unique operation name.
The following sections describe possible issues and solutions when using the Oracle JCA Adapter for Files/FTP (Oracle File and FTP Adapters):
Changing Logical Names with the Adapter Configuration Wizard
Creating File Names with Spaces with the Native Format Builder Wizard
Creating Schema Definition Files using Native Format Builder Constructs
Setting AUTOEXTEND for Tablespaces for Processing More Than 30000 Files
Setting MinimumAge to Ensure Processing of All Files During Multiple Processing
If you change a previously specified logical name to a different name, both the old and the new logical names appear in the bpel.xml
file. You must manually edit the bpel.xml
file to remove the old logical name.
While the Native Format Builder Wizard does not restrict you from creating native schema file names with spaces, it is recommended that your file names do not have spaces in them.
The Native Format Builder cannot edit an existing XSD that was neither generated by the Native Format Builder nor hand-coded using Native Format Builder constructs. Also, an XSD created using NXSD constructs must specify a valid sample file name in its annotations if it is to be edited by the Native Format Builder Wizard.
After processing more than 30,000 files, the tablespace stops extending unless AUTOEXTEND
is enabled. You must enable AUTOEXTEND
for the tablespace before it reaches its limit. The best practice is to set AUTOEXTEND
for the tablespace to the recommended value.
For more information about autoextending tablespaces, see Oracle Database Administrators Guide.
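AUTOEXTEND is enabled on the data files that back the tablespace. The following is a minimal sketch, assuming a hypothetical data file path; choose NEXT and MAXSIZE values appropriate for your environment:

```sql
-- Enable automatic extension on the data file backing the tablespace
ALTER DATABASE DATAFILE '/u01/oradata/orcl/soainfra01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED;
```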
When an Oracle File Adapter processes and copies multiple files into an input directory, and another Oracle File Adapter polls this inbound directory to retrieve new files, some files may not get processed, yet may still be copied to the archive folder. The following is a sample error message:
oracle.integration.platform.blocks.adapter.fw.log.LogManagerImpl log INFO: File Adapter FlatStructure ORABPEL-11117 minOccurs not satisfied. minOccurs="1" not satisfied for the node "<element name="Root-Element">". Loop terminated with cardinality "0". Insufficient or invalid data. Please correct the NXSD schema. at oracle.tip.pc.services.translation.xlators.nxsd.NXSDTranslatorImpl.parseNXSD(NXSDTranslatorImpl.java:1269)
The error occurs because the Oracle File Adapter may have picked up a file that was still being written to. To work around this problem, set the adapter configuration property MinimumAge
to a higher value.
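The MinimumAge property is set in the inbound adapter's activation specification. The following is a sketch of an 11g .jca binding file; the className, directory path, and the 60-second threshold are illustrative:

```xml
<!-- Inbound .jca binding file: skip files modified less than 60 seconds ago -->
<activation-spec className="oracle.tip.adapter.file.inbound.FileActivationSpec">
  <property name="PhysicalDirectory" value="/tmp/input"/>
  <property name="MinimumAge" value="60"/>
</activation-spec>
```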
This section describes common user errors.
On the Adapter Configuration Wizard - Messages window (Figure 4-19), you can select the Native format translation is not required (Schema is Opaque) check box. Opaque cannot be selected in only one direction; it must be selected in both the inbound and outbound directions.
Messages have a different meaning based on whether they are inbound or outbound. For example, assume you make the following selections:
Select 2 from the Publish Messages in Batches of list (Figure 4-17) in the inbound direction.
Select 3 from the Number of Messages Equal list (Figure 4-22) in the outbound direction.
If an inbound file contains two records, it is split (debatched) into two messages. However, because 3 was specified in the outbound direction, a file is not created. This is because there are not three outbound messages available. Ensure that you understand the meaning of inbound and outbound messages before selecting these options.
If the Oracle File Adapter or the Oracle FTP Adapter is not able to read or get your file, respectively, it may be because you chose to match file names using the regular expression (regex) but are not correctly specifying the name (Figure 4-17). For more information, see Table 4-3.
You may have content that does not require translation (for example, a JPG or GIF image) that you just want to send "as is." The file is passed through in base-64 encoding. This content is known as opaque. To do this, select the Native format translation is not required (Schema is Opaque) check box on the Adapter Configuration Wizard - Messages window (Figure 4-19). If you select this check box, you do not need to specify an XSD file for translation.
The inbound directory must exist for the Oracle File Adapter or the Oracle FTP Adapter to read or get your file, respectively.
If the Oracle FTP Adapter cannot connect to a remote host, ensure that you have configured the Oracle_Home
\bpel\system\appserver\oc4j\j2ee\home\application-deployments\default\FtpAdapter\
oc4j-ra.xml
deployment descriptor file for adapter instance JNDI name and FTP server connection information. For more information, see Oracle FTP Adapter Get File Concepts.
You cannot use completely static names such as po.txt
for outbound files. Instead, outgoing file names must be a combination of static and dynamic portions. This is to ensure the uniqueness of outgoing file names, which prevents files from being inadvertently overwritten. For information about creating correct outbound file names, see Specifying the Outbound File Naming Convention.
Two header files are created in the Applications Navigator after you finish running the Adapter Configuration Wizard in both directions:
type
AdapterInboundHeader.wsdl
Provides information such as the name of the file being processed and its directory path, as well as data about which message and part define the header operations
type
AdapterOutboundHeader.wsdl
Provides information about the outbound file name
where type
is either ftp
or file
.
You can define properties in these header files. For example, you can specify dynamic inbound and outbound file names through use of the InboundHeader_msg
and OutboundHeader_msg
message names in the type
AdapterInboundHeader.wsdl
and type
AdapterOutboundHeader.wsdl
files, respectively.
You can also set header variables that appear in the BPEL process file. Header variables are useful for certain scenarios. For example, in a file propagation scenario, files are being moved from one file system to another using the Oracle File Adapter. In this case, it is imperative that you maintain file names across the two systems. Use file headers in both directions, and set the file name in the outbound file header to use the file name in the inbound file header.
See the online help available with the Properties tab of invoke, receive, reply, and pick - OnMessage branch activities for more information.
The Adapter Configuration Wizard - File Modification Time window (Figure 4-35) prompts you to select a method for obtaining the modification times of files on the FTP server.
You must perform the following procedure to obtain this information:
Determine the modification time format supported by the FTP Server by running the command mdtm
or ls -al
(whichever is supported by the operating system).
Determine the time difference between the system time (time on which Oracle BPEL Server is running) and the file modification time. Obtain the file modification time by running either mdtm
or ls -al
on the FTP server.
Manually add the time difference to the bpel.xml
as a property:
<activationAgents>
  <activationAgent ...>
    <property name="timestampOffset">2592000000</property>
  </activationAgent>
</activationAgents>
Specify the Substring Begin Index and End Index field values that you determine by running the mdtm
or ls -al
command on the FTP server.
The following sections describe possible issues and solutions when using the Oracle JCA Adapter for AQ (Oracle AQ Adapter):
The following sections describe possible issues and solutions for inbound errors when using the Oracle AQ Adapter.
Sample error:
Unable to locate the JCA Resource Adapter via .jca binding file element <connection-factory/> The JCA Binding Component is unable to startup the Resource Adapter specified in the <connection-factory/> element: location='eis/AQ/aqSample2'.
Problem
The reason for this error is most likely that either:
The resource adapter's RAR file has not been deployed successfully to the WebLogic J2EE Application server.
The JNDI <jndi-name> setting in the WebLogic JCA deployment descriptor has not been set to eis/AQ/aqSample2
.
In the latter case, you might have to add a new connection-factory entry (connection) to the deployment descriptor. Correct this and then restart the Oracle WebLogic Server.
Solution
Ensure that the Oracle AQ Adapter is deployed and running. This can be verified using the Oracle WebLogic Server Administration Console.
Ensure that the connection instance with the above JNDI name is defined in the weblogic-ra.xml file. This can be verified by using the Oracle WebLogic Server Administration Console, as shown in the following example:
<connection-instance>
  <jndi-name>eis/AQ/aqSample2</jndi-name>
  <connection-properties>
    <properties>
      <property>
        <name>XADataSourceName</name>
        <value>jdbc/aqSample2</value>
      </property>
      <property>
        <name>DataSourceName</name>
        <value></value>
      </property>
      <property>
        <name>ConnectionString</name>
        <value></value>
      </property>
      <property>
        <name>UserName</name>
        <value></value>
      </property>
      <property>
        <name>Password</name>
        <value></value>
      </property>
      <property>
        <name>DefaultNChar</name>
        <value>false</value>
      </property>
      <property>
        <name>UseDefaultConnectionManager</name>
        <value>false</value>
      </property>
    </properties>
  </connection-properties>
</connection-instance>
Sample error:
Apr 19, 2009 11:52:23 PM oracle.integration.platform.blocks.adapter.fw.log.LogManagerImpl log SEVERE: JCABinding=> Raw Error while performing endpoint Activation: BINDING.JCA-11975 AQ_INVALID_QUEUE. Unable to obtain queue table name. Queue does not exist or not defined correctly. Drop and re-create queue.
Solution
Create the queue and redeploy the process. If this process is deployed from the samples, all queue creation scripts are located in sql\create_queues.sql
under each project.
As a general note, problems in the outbound direction are often not caught at deployment time, because an outbound thread is activated only when a message is sent outbound.
Sample error:
NOTIFICATION: Purge-Repos Diagnostics [auto-purge, partition-name, partition-id, #versions purged]: [true, soa-infra, 1, 2]. Apr 20, 2009 12:03:00 AM oracle.integration.platform.blocks.adapter.fw.log.LogManagerImpl log WARNING: JCABinding=> JCABinding=> Raw:Outbound [ Enqueue_ptt::Enqueue(opaque) ] JNDI lookup of 'eis/AQ/aqSample3' failed due to: Unable to resolve 'eis.AQ.aqSample3'. Resolved 'eis.AQ' Apr 20, 2009 12:03:00 AM oracle.integration.platform.blocks.adapter.fw.log.LogManagerImpl log SEVERE: JCABinding=> Raw:Outbound [ Enqueue_ptt::Enqueue(opaque) ] Could not invoke operation 'Enqueue' against the 'null' due to: BINDING.JCA-12511 JCA Binding Component connection issue. JCA Binding Component is unable to create an outbound JCA (CCI) connection. Raw:Outbound [ Enqueue_ptt::Enqueue(opaque) ] : The JCA Binding Component was unable to establish an outbound JCA CCI connection due to the following issue: BINDING.JCA-12510 JCA Resource Adapter location error. Unable to locate the JCA Resource Adapter via .jca binding file element <connection-factory/> The JCA Binding Component is unable to startup the Resource Adapter specified in the <connection-factory/> element: location='eis/AQ/aqSample3'.
Problem
The reason for this error could be either of the following:
The resource adapter's RAR file has not been deployed successfully to the Oracle WebLogic Server.
The <jndi-name> element in weblogic-ra.xml
is not set to eis/AQ/aqSample3
.
In the latter case, you must add a new WebLogic JCA connection factory (deploy a RAR). Correct this and then restart the Oracle WebLogic Server.
Examine the log file for any other reasons. Enable FINEST
adapter logging by using the Oracle Enterprise Manager Console.
Solution
See the solution section for the same problem in the inbound section, as described in Inbound Errors.
Sample error:
INFO: AQ Adapter Raw:Outbound [ Enqueue_ptt::Enqueue(opaque) ] begin() ignored... Apr 20, 2009 12:08:02 AM oracle.integration.platform.blocks.adapter.fw.log.LogManagerImpl log SEVERE: JCABinding=> Raw:Outbound [ Enqueue_ptt::Enqueue(opaque) ] Could not invoke operation 'Enqueue' against the 'AQ Adapter' due to: BINDING.JCA-11975 AQ_INVALID_QUEUE. Unable to obtain queue table name. Queue does not exist or not defined correctly. Drop and re-create queue.
Solution
Same solution as the inbound Queue not found problem. Create the queue and redeploy the process. If this process is deployed from the samples, all queue creation scripts are located in sql\create_queues.sql
under each project.
JMS Provider Error
ORABPEL-12165 ERRJMS_PROVIDER_ERR. Could not produce message due to JMS provider error. Please examine the log file to determine the problem. ....... ...... ....... Caused by: javax.jms.JMSException: MQJMS1013: operation invalid whilst session is using asynchronous delivery.
Solution:
This exception occurs when the inbound JMS adapter and the outbound JMS adapter for MQ provider use the same JNDI name. To avoid this exception, specify different JNDI names in the WSDL files for the inbound and outbound JMS adapters for MQ provider.
Another workaround is to specify the UseMessageListener
property as false
in the inbound WSDL file. For example:
UseMessageListener="false"
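For example, the property can be set as an attribute in the inbound adapter's jca:operation definition. The following sketch is illustrative only; the ActivationSpec class and destination name shown here are assumptions and should be taken from your own adapter configuration:

```xml
<jca:operation
    ActivationSpec="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec"
    DestinationName="jms/MyMQQueue"
    UseMessageListener="false"/>
```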