18 Oracle Hyperion Essbase

This chapter describes how to work with Oracle Hyperion Essbase in Oracle Data Integrator.

This chapter includes the following sections:

  • Section 18.1, "Introduction"

  • Section 18.2, "Installation and Configuration"

  • Section 18.3, "Setting up the Topology"

  • Section 18.4, "Creating and Reverse-Engineering an Essbase Model"

  • Section 18.5, "Designing an Interface"

18.1 Introduction

Oracle Data Integrator Adapter for Oracle's Hyperion Essbase enables you to connect and integrate Essbase with virtually any source or target using Oracle Data Integrator. The adapter provides a set of Oracle Data Integrator Knowledge Modules (KMs) for loading and extracting metadata and data and calculating data in Essbase applications.

18.1.1 Integration Process

You can use Oracle Data Integrator Adapter for Essbase to perform these data integration tasks on an Essbase application:

  • Load metadata and data

  • Extract metadata and data

Using the adapter to load or extract metadata or data involves the following tasks:

  • Setting up the topology by defining a Hyperion Essbase data server and physical schema (see Section 18.3)

  • Creating and reverse-engineering an Essbase model (see Section 18.4)

  • Designing interfaces that use the Hyperion Essbase knowledge modules to load or extract metadata and data (see Section 18.5)

18.1.2 Knowledge Modules

Oracle Data Integrator provides the Knowledge Modules (KMs) listed in Table 18-1 for handling Hyperion Essbase data. These KMs use Hyperion Essbase-specific features. It is also possible to use the generic SQL KMs with the Hyperion Essbase database. See Chapter 4, "Generic SQL" for more information.

Table 18-1 Hyperion Essbase Knowledge Modules

Knowledge Module Description

RKM Hyperion Essbase

Reverse-engineers Essbase applications and creates data models to use as targets or sources in Oracle Data Integrator interfaces.

IKM SQL to Hyperion Essbase (DATA)

Integrates data into Essbase applications.

IKM SQL to Hyperion Essbase (METADATA)

Integrates metadata into Essbase applications.

LKM Hyperion Essbase DATA to SQL

Loads data from an Essbase application to any SQL compliant database used as a staging area.

LKM Hyperion Essbase METADATA to SQL

Loads metadata from an Essbase application to any SQL compliant database used as a staging area.


18.2 Installation and Configuration

Make sure you have read the information in this section before you start using the Oracle Data Integrator Adapter for Essbase:

  • Section 18.2.1, "System Requirements and Certifications"

  • Section 18.2.2, "Technology Specific Requirements"

  • Section 18.2.3, "Connectivity Requirements"

18.2.1 System Requirements and Certifications

Before performing any installation you should read the system requirements and certification documentation to ensure that your environment meets the minimum installation requirements for the products you are installing.

The list of supported platforms and versions is available on Oracle Technology Network (OTN):

http://www.oracle.com/technology/products/oracle-data-integrator/index.html.

18.2.2 Technology Specific Requirements

There are no technology-specific requirements for using the Oracle Data Integrator Adapter for Essbase.

18.2.3 Connectivity Requirements

There are no connectivity-specific requirements for using the Oracle Data Integrator Adapter for Essbase.

18.3 Setting up the Topology

Setting up the Topology consists of:

  1. Creating a Hyperion Essbase Data Server

  2. Creating a Hyperion Essbase Physical Schema

18.3.1 Creating a Hyperion Essbase Data Server

Create a data server for the Hyperion Essbase technology using the standard procedure, as described in "Creating a Data Server" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section details only the fields required or specific for defining a Hyperion Essbase data server:

  1. In the Definition tab:

    • Name: Enter a name for the data server definition.

    • Server (Data Server): Enter the Essbase server name.

    Note:

    If the Essbase server is running on a port other than the default port (1423), then provide the Essbase server details in this format: <Essbase Server hostname>:<port>.
  2. Under Connection, enter a user name and password for connecting to the Essbase server.

Note:

The Test button does not work for an Essbase data server connection. This button works only for relational technologies that have a JDBC Driver.

18.3.2 Creating a Hyperion Essbase Physical Schema

Create a Hyperion Essbase physical schema using the standard procedure, as described in "Creating a Physical Schema" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

Under Application (Catalog) and Application (Work Catalog), specify an Essbase application. Under Database (Schema) and Database (Work Schema), specify an Essbase database associated with the application you selected.

Create a logical schema for this physical schema using the standard procedure, as described in "Creating a Logical Schema" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator, and associate it in a given context.

18.4 Creating and Reverse-Engineering an Essbase Model

This section contains the following topics:

  • Section 18.4.1, "Create an Essbase Model"

  • Section 18.4.2, "Reverse-engineer an Essbase Model"

18.4.1 Create an Essbase Model

Create an Essbase Model using the standard procedure, as described in "Creating a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator.

18.4.2 Reverse-engineer an Essbase Model

Reverse-engineering an Essbase application creates an Oracle Data Integrator model that includes a datastore for each dimension in the application and a datastore for data.

To perform a Customized Reverse-Engineering on Hyperion Essbase with the RKM, use the usual procedure, as described in "Reverse-engineering a Model" of the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator. This section details only the fields specific to the Hyperion Essbase technology.

  1. In the Reverse tab of the Essbase Model, select the RKM Hyperion Essbase.

  2. Set the KM options as indicated in Table 18-2.

    Table 18-2 RKM Hyperion Essbase Options

    Option Possible Values Description

    MULTIPLE_DATA_COLUMNS

    • No (Default)

    • Yes

    If this option is set to No, then the datastore created for the data extract / load model contains one column for each of the standard dimensions and a single data column. If this option is set to Yes, then the datastore created for the data extract / load model contains one column for each of the standard dimensions excluding the dimension specified by the DATA_COLUMN_DIMENSION option, and as many data columns as specified by the comma-separated list for the DATA_COLUMN_MEMBERS option.

    DATA_COLUMN_DIMENSION

    Account

    This option is only applicable if MULTIPLE_DATA_COLUMNS is set to Yes.

    Specify the data column dimension name. For example, data columns can be spread across a dimension such as Account or Time.

    DATA_COLUMN_MEMBERS

    Account

    This option is only applicable if MULTIPLE_DATA_COLUMNS is set to Yes.

    Separate the required data column members with a comma (,).

    For example, if the data column dimension is set to Account and the members are set to Sales,COGS, then the datastore for data extract/load contains one column for each of the dimensions except the data column dimension, and one column for each data column member specified in the comma-separated list. Assuming that the dimensions in the Essbase application are Account, Scenario, Product, Market, and Year, and that the data column dimension is specified as Account with data column members Sales,COGS, the datastore will have the following columns:

    • Scenario (String)

    • Product (String)

    • Market (String)

    • Year (String)

    • Sales (Numeric)

    • COGS (Numeric)

    EXTRACT_ATTRIBUTE_MEMBERS

    • No (Default)

    • Yes

    If this option is set to No, then the datastore created for the data extract / load model contains one column for each of the standard dimensions and a single data column. Attribute dimensions are not included.

    If this option is set to Yes, then the data model contains the following columns:

    • One column for each of the standard dimensions

    • One or more data columns, depending on the value of the MULTIPLE_DATA_COLUMNS option

    • One column for each associated attribute dimension


The RKM connects to the application (which is determined by the logical schema and the context) and imports some or all of these datastores, according to the dimensions in the application.

18.5 Designing an Interface

After reverse-engineering an Essbase application as a model, you can use the datastores in this model in these ways:

  • As targets of interfaces for loading metadata and data into the application

  • As sources of interfaces for extracting metadata and data from the application

The KM choice for an interface determines the abilities and performance of this interface. The recommendations in this section help in the selection of the KM for different situations concerning Hyperion Essbase.

This section contains the following topics:

  • Section 18.5.1, "Loading Metadata"

  • Section 18.5.2, "Loading Data"

  • Section 18.5.3, "Extracting Data"

18.5.1 Loading Metadata

Oracle Data Integrator provides the IKM SQL to Hyperion Essbase (METADATA) for loading metadata into an Essbase application.

Metadata consists of dimension members. You must load members, or metadata, before you load data values for the members.

You can load members only to dimensions that exist in Essbase. You must use a separate interface for each dimension that you load. You can chain interfaces to load metadata into several dimensions at once.

Note:

The metadata datastore can also be modified by adding or deleting columns to match the dimension build rule that will be used to perform the metadata load. For example, the default datastore has columns for ParentName and ChildName; if the rule is a generational dimension build rule, you can modify the metadata datastore to match the columns within your generational dimension build rule. The loadMarkets interface within the samples is an example of performing a metadata load using a generational dimension build rule.

Table 18-3 lists the options of the IKM SQL to Hyperion Essbase (METADATA). These options define how the adapter loads metadata into an Essbase application.

Table 18-3 IKM SQL to Hyperion Essbase (METADATA) Options

Option Values Description

RULES_FILE

Blank (Default)

Specify the rules file for loading or building metadata. If the rules file is present on the Essbase server, then specify only the file name; otherwise, specify the fully qualified file name with respect to the Oracle Data Integrator Agent.

RULE_SEPARATOR

, (Default)

(Optional) Specify a rule separator in the rules file.

These are the valid values:

  • Comma

  • Tab

  • Space

  • Custom character; for example, @, #, ^

RESTRUCTURE_DATABASE

  • KEEP_ALL_DATA (Default)

  • KEEP_INPUT_DATA

  • KEEP_LEVEL0_DATA

  • DISCARD_ALL_DATA

Restructure the database after loading metadata into the Essbase cube.

These are the valid values:

  • KEEP_ALL_DATA—Keep all data

  • KEEP_INPUT_DATA—Keep only input data

  • KEEP_LEVEL0_DATA—Keep only level 0 data

  • DISCARD_ALL_DATA—Discard all data

Note: This option is applicable to Essbase Release 9.3 and later; for releases prior to 9.3, this option is ignored.

PRE_LOAD_MAXL_SCRIPT

Blank (Default)

Enable this option to execute a MAXL script before loading metadata to the Essbase cube.

Specify a fully qualified path name (without blank spaces) for the MAXL script file.

Note: To successfully execute this option, the Essbase client must be installed and configured on the machine where the Oracle Data Integrator Agent is running.

POST_LOAD_MAXL_SCRIPT

Blank (Default)

Enable this option to execute a MAXL script after loading metadata to the Essbase cube.

Specify a fully qualified path name (without blank spaces) for the MAXL script file.

Note: To successfully execute this option, the Essbase client must be installed and configured on the machine where the Oracle Data Integrator Agent is running.

ABORT_ON_PRE_MAXL_ERROR

  • No (Default)

  • Yes

This option is only applicable if you are enabling the PRE_LOAD_MAXL_SCRIPT option.

If you set the ABORT_ON_PRE_MAXL_ERROR option to Yes, then the load process is aborted on encountering any error while executing the pre-MAXL script.

LOG_ENABLED

  • No (Default)

  • Yes

If this option is set to Yes, during the IKM process, logging is done to the file specified in the LOG_FILE_NAME option.

LOG_FILE_NAME

<?=java.lang.System.getProperty("java.io.tmpdir")?>/Extract_<%=snpRef.getFrom()%>.log (Default)

Specify a file name to log events of the IKM process.

ERROR_LOG_FILENAME

<?=java.lang.System.getProperty("java.io.tmpdir")?>/Extract_<%=snpRef.getFrom()%>.log (Default)

Specify a file name to log the error records of the IKM process.
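
The PRE_LOAD_MAXL_SCRIPT and POST_LOAD_MAXL_SCRIPT options each point to a MaxL script file that is executed around the load. As a minimal, hypothetical sketch (the server name, credentials, and the application name Sample are illustrative assumptions, not values from this guide), a pre-load script might block user connections while the load runs, and a post-load script might re-enable them:

    /* Hypothetical pre-load MaxL script: block new connections
       while the metadata load runs. */
    login admin password on localhost;
    alter application Sample disable connects;
    logout;

    /* Hypothetical post-load MaxL script: re-enable connections. */
    login admin password on localhost;
    alter application Sample enable connects;
    logout;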


18.5.2 Loading Data

Oracle Data Integrator provides the IKM SQL to Hyperion Essbase (DATA) for loading data into an Essbase application.

You can load data into selected dimension members that are already created in Essbase. For a successful data load, all the standard dimension members are required and they should be valid members. You must set up the Essbase application before you can load data into it.

You can also create a custom target to match a load rule.

Before loading data, ensure that the members (metadata) exist in the Essbase dimension. The data load fails for records that have missing members; this information is logged (if logging is enabled) as an error record, and the data load process continues until the maximum error threshold is reached.

Note:

The data datastore can also be modified by adding or deleting columns to match the data load rule that will be used to perform the data load.
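
For illustration, the source of an interface that uses the IKM SQL to Hyperion Essbase (DATA) could be a staging table whose columns mirror the reverse-engineered data datastore from the Table 18-2 example. This is a hypothetical sketch; the table name, data types, and lengths are assumptions:

    -- Hypothetical staging table mirroring the data datastore
    -- (Scenario, Product, Market, Year, plus the Sales and COGS
    -- data columns from the Table 18-2 example).
    CREATE TABLE ESSBASE_SALES_STG (
      Scenario VARCHAR(80),
      Product  VARCHAR(80),
      Market   VARCHAR(80),
      Year     VARCHAR(80),
      Sales    NUMERIC(18,2),
      COGS     NUMERIC(18,2)
    );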

Table 18-4 lists the options of the IKM SQL to Hyperion Essbase (DATA). These options define how the adapter loads and consolidates data in an Essbase application.

Table 18-4 IKM SQL to Hyperion Essbase (DATA)

Option Values Description

RULES_FILE

Blank (Default)

(Optional) Specify a rules file to enhance the performance of data loading.

Specify a fully qualified file name if the rules file is not present on the Essbase server.

If the rules file option is not specified, then the API-based data load is used; the API cannot be specified explicitly.

RULE_SEPARATOR

, (Default)

(Optional) Specify a rule separator in the rules file.

These are the valid values:

  • Comma

  • Tab

  • Space

  • Custom character; for example, @, #, ^

GROUP_ID

Integer

When performing multiple data loads in parallel, several interfaces can be set to use the same GROUP_ID. This GROUP_ID is used to manage parallel loads, allowing the data load to be committed when the final interface for the GROUP_ID is complete. For more information on loading to parallel ASO cubes, refer to the Oracle Essbase Database Administrator's Guide.

BUFFER_ID

1–1000000

Multiple data load buffers can exist on an aggregate storage database. To save time, you can load data into multiple data load buffers at the same time. Although only one data load commit operation on a database can be active at any time, you can commit multiple data load buffers in the same commit operation, which is faster than committing buffers individually. For more information on loading to parallel ASO cubes, refer to the Oracle Essbase Database Administrator's Guide.

BUFFER_SIZE

0-100

When performing an incremental data load, Essbase uses the aggregate storage cache for sorting data. You can control how much of the cache a data load buffer can use by specifying a percentage (between 0 and 100, inclusive). By default, the resource usage of a data load buffer is set to 100, and the total resource usage of all data load buffers created on a database cannot exceed 100. For example, if a buffer of 90 exists, you cannot create another buffer of a size greater than 10. A value of 0 tells Essbase to use a self-determined, default load buffer size.

CLEAR_DATABASE

  • None (Default)

  • All

  • Upper Blocks

  • Non-input Blocks

Enable this option to clear data from the Essbase cube before loading data into it.

These are the valid values:

  • None—The database is not cleared

  • All—Clears all data blocks

  • Upper Blocks—Clears all consolidated level blocks

  • Non-Input Blocks—Clears blocks containing values derived from calculations

Note: For ASO applications, the Upper Blocks and Non-Input Blocks options are not applicable.

CALCULATION_SCRIPT

Blank (Default)

(Optional) Specify the calculation script that you want to run after loading data in the Essbase cube.

Provide a fully qualified file name if the calculation script is not present on the Essbase server.

RUN_CALC_SCRIPT_ONLY

  • No (Default)

  • Yes

This option is only applicable if you have specified a calculation script in the CALCULATION_SCRIPT option.

If you set the RUN_CALC_SCRIPT_ONLY option to Yes, then only the calculation script is executed without loading the data into the target Essbase cube.

PRE_LOAD_MAXL_SCRIPT

Blank (Default)

Enable this option to execute a MAXL script before loading data to the Essbase cube.

Specify a fully qualified path name (without blank spaces) for the MAXL script file.

Note: To successfully execute this option, the Essbase client must be installed and configured on the machine where the Oracle Data Integrator Agent is running.

POST_LOAD_MAXL_SCRIPT

Blank (Default)

Enable this option to execute a MAXL script after loading data to the Essbase cube.

Specify a fully qualified path name (without blank spaces) for the MAXL script file.

Note: To successfully execute this option, the Essbase client must be installed and configured on the machine where the Oracle Data Integrator Agent is running.

ABORT_ON_PRE_MAXL_ERROR

  • No (Default)

  • Yes

This option is only applicable if you are enabling the PRE_LOAD_MAXL_SCRIPT option.

If you set the ABORT_ON_PRE_MAXL_ERROR option to Yes, then the load process is aborted on encountering any error while executing the pre-MAXL script.

MAXIMUM_ERRORS_ALLOWED

1 (Default)

Enable this option to set the maximum number of errors to be ignored before stopping a data load.

The value that you specify here is the threshold limit for error records encountered during a data load process. If the threshold limit is reached, then the data load process is aborted. For example, the default value 1 means that the data load process stops on encountering a single error record. If the value 5 is specified, then the data load process stops on encountering the fifth error record. If the value 0 is specified, there is no limit and the data load process continues even when error records are encountered.

COMMIT_INTERVAL

1000 (Default)

Commit Interval is the chunk size of records that are loaded in the Essbase cube in a complete batch.

Enable this option to set the Commit Interval for the records in the Essbase cube.

Changing the Commit Interval can improve data load performance, depending on the design of the Essbase database.

LOG_ENABLED

  • No (Default)

  • Yes

If this option is set to Yes, during the IKM process, logging is done to the file specified in the LOG_FILENAME option.

LOG_FILENAME

<?=java.lang.System.getProperty("java.io.tmpdir")?>/<%=snpRef.getTargetTable("RES_NAME")%>.log (Default)

Specify a file name to log events of the IKM process.

LOG_ERRORS

  • No (Default)

  • Yes

If this option is set to Yes, during the IKM process, details of error records are logged to the file specified in the ERROR_LOG_FILENAME option.

ERROR_LOG_FILENAME

<?=java.lang.System.getProperty("java.io.tmpdir")?>/<%=snpRef.getTargetTable("RES_NAME")%>.err (Default)

Specify a file name to log error record details of the IKM process.

ERR_LOG_HEADER_ROW

  • No (Default)

  • Yes

If this option is set to Yes, then the header row containing the column names is logged to the error records file.

ERR_COL_DELIMITER

, (Default)

Specify the column delimiter to be used for the error records file.

ERR_ROW_DELIMITER

\r\n (Default)

Specify the row delimiter to be used for the error records file.

ERR_TEXT_DELIMITER

' (Default)

Specify the text delimiter to be used for the column data in the error records file.

For example, if the text delimiter is set as ' " ' (double quote), then all the columns in the error records file are delimited by double quotes.


18.5.3 Extracting Data

This section includes the following topics:

  • Section 18.5.3.1, "Data Extraction Methods for Essbase"

  • Section 18.5.3.2, "Extracting Essbase Data"

  • Section 18.5.3.3, "Extracting Members from Metadata"

18.5.3.1 Data Extraction Methods for Essbase

The Oracle Data Integrator Adapter for Essbase supports querying and scripting for data extraction. The general process is to create an extraction query and provide it to the adapter. Before the adapter parses the output of the extraction query and populates the staging area, it performs a column validation: during the validation, the adapter executes the extraction query based on the results of the metadata output query. The adapter parses the output of the extraction query only when the column validation is successful.

After the extraction is complete, validate the results—make sure that the extraction query has extracted data for all the output columns.

You can extract data with these Essbase-supported queries and scripts:

Data Extraction Using Report Scripts

Data can be extracted by parsing the reports generated by report scripts. The report scripts can exist on the client computer (where Oracle Data Integrator is running) as well as on the server where Essbase is running. Column validation is not performed when extracting data using report scripts, so the output columns of a report script are mapped directly to the corresponding connected columns in the source model. However, before you extract data using report scripts, you must complete these tasks (a sample report script follows the list):

  • Suppress all formatting in the report script. Include this line as the first line in the report script—{ROWREPEAT SUPHEADING SUPFORMAT SUPBRACKETS SUPFEED SUPCOMMAS NOINDENTGEN TABDELIMIT DECIMAL 15}.

  • The number of columns produced by a report script must be greater than or equal to the connected columns from the source model.

  • The column delimiter value must be set in the LKM option.
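
For illustration, a minimal report script meeting these requirements might look like the following. This is only a sketch: the Market, Product, and Year member selections assume a Sample.Basic-style outline like the one in the Table 18-2 example, not names required by the adapter.

    {ROWREPEAT SUPHEADING SUPFORMAT SUPBRACKETS SUPFEED SUPCOMMAS NOINDENTGEN TABDELIMIT DECIMAL 15}
    <ROW ("Market", "Product")
    <COLUMN ("Year")
    <ICHILDREN Market
    <ICHILDREN Product
    <CHILDREN Year
    !

The mandatory suppression line comes first, and TABDELIMIT produces tab-separated columns, which matches the default column delimiter of the extraction LKM.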

Data Extraction Using MDX Queries

An MDX (MultiDimensional Expressions) query is another data-extraction mechanism: you can specify an MDX query to extract data from an Essbase application. However, before you extract data using MDX queries, you must complete these tasks (a sample query follows the list):

  • The names of the dimension columns must match the dimension names in the Essbase cube.

  • For Type 1 data extraction, all the names of data columns must be valid members of a single standard dimension.

  • For Type 1 data extraction, it is recommended that the data dimension be in the lowest-level axis, that is, axis (0) of columns. If it is not in the lowest-level axis, memory consumption is higher.

  • If columns are connected with the associated attribute dimension from the source model, then the same attribute dimension must be selected in the MDX query.

  • The script of the MDX query can be present on the client computer or the server.
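
For illustration, a Type 1 MDX query that places the data columns from the Account dimension on axis (0), assuming the same Sample.Basic-style outline (Account members Sales and COGS; Product, Market, and Scenario dimensions) as the earlier examples:

    SELECT
      {[Account].[Sales], [Account].[COGS]} ON COLUMNS,
      CrossJoin([Product].Children, [Market].Children) ON ROWS
    FROM [Sample].[Basic]
    WHERE ([Scenario].[Actual])

In Essbase MDX, COLUMNS is axis (0), so the data columns sit in the lowest-level axis, as recommended above.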

Data Extraction Using Calculation Scripts

Calculation scripts provide a faster option to extract data from an Essbase application. However, before you extract data using calculation scripts, take note of these restrictions (a sample export script follows the list):

  • Data extraction using calculation scripts is supported ONLY for BSO applications.

  • Data extraction using calculation scripts is supported ONLY for Essbase Release 9.3 and later.

  • Set the DataExportDimHeader option to ON.

  • (If used) Match the DataExportColHeader setting to the data column dimension (in case of multiple data columns extraction).

  • The Oracle Data Integrator Agent, which is used to extract data, must be running on the same machine as the Essbase server.

  • When accessing calculation scripts present on the client computer, a fully qualified path to the file must be provided, for example, C:\Essbase_Samples\Calc_Scripts\calcall.csc, whereas for calculation scripts present on the server, only the file name is sufficient.
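
A minimal sketch of such an export calculation script is shown below, assuming an Essbase 9.3+ BSO cube and the outline used earlier in this chapter; the FIX member, export path, and level option are placeholder assumptions:

    SET DATAEXPORTOPTIONS
    {
      DataExportLevel "ALL";
      DataExportDimHeader ON;
      DataExportColHeader "Year";
    };
    FIX ("Actual")
      DATAEXPORT "File" "\t" "C:\Essbase_Samples\extract.txt";
    ENDFIX

DataExportDimHeader is set to ON as required above, and the tab delimiter matches the default EXT_COL_DELIMITER value of the LKM.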

18.5.3.2 Extracting Essbase Data

Oracle Data Integrator provides the LKM Hyperion Essbase DATA to SQL for extracting data from an Essbase application.

You can extract data for selected dimension members that exist in Essbase. You must set up the Essbase application before you can extract data from it.

Table 18-5 lists the options of the LKM Hyperion Essbase DATA to SQL. These options define how Oracle Data Integrator Adapter for Essbase extracts data.

Table 18-5 LKM Hyperion Essbase DATA to SQL Options

Option Values Description

PRE_CALCULATION_SCRIPT

Blank (Default)

(Optional) Specify the calculation script that you want to run before extracting data from the Essbase cube.

EXTRACTION_QUERY_TYPE

  • ReportScript (Default)

  • MDXQuery

  • CalcScript

Specify an extraction query type—report script, MDX query, or calculation script.

Provide a valid extraction query, which fetches all the data to fill the output columns.

The first record (the first two records in the case of a calculation script) contains the meta information of the extracted data.

EXTRACTION_QUERY_FILE

Blank (Default)

Specify a fully qualified file name of the extraction query.

EXT_COL_DELIMITER

\t (Default)

Specify the column delimiter for the extraction query.

If no value is specified for this option, then a space (" ") is used as the column delimiter.

EXTRACT_DATA_FILE_IN_CALC_SCRIPT

Blank (Default)

This option is only applicable if the query type in the EXTRACTION_QUERY_TYPE option is specified as CalcScript.

Specify a fully qualified file location where the data is extracted through the calculation script.

PRE_EXTRACT_MAXL

Blank (Default)

Enable this option to execute a MAXL script before extracting data from the Essbase cube.

POST_EXTRACT_MAXL

Blank (Default)

Enable this option to execute a MAXL script after extracting data from the Essbase cube.

ABORT_ON_PRE_MAXL_ERROR

  • No (Default)

  • Yes

This option is only applicable if the PRE_EXTRACT_MAXL option is enabled.

If the ABORT_ON_PRE_MAXL_ERROR option is set to Yes, then the load process is aborted on encountering any error while executing the pre-MAXL script.

LOG_ENABLED

  • No (Default)

  • Yes

If this option is set to Yes, during the LKM process, logging is done to the file specified in the LOG_FILENAME option.

LOG_FILENAME

<?=java.lang.System.getProperty("java.io.tmpdir")?>/<%=snpRef.getTargetTable("RES_NAME")%>.log (Default)

Specify a file name to log events of the LKM process.

MAXIMUM_ERRORS_ALLOWED

1 (Default)

Enable this option to set the maximum number of errors to be ignored before stopping the extract.

LOG_ERRORS

  • No (Default)

  • Yes

If this option is set to Yes, during the LKM process, details of error records are logged to the file specified in the ERROR_LOG_FILENAME option.

ERROR_LOG_FILENAME

<?=java.lang.System.getProperty("java.io.tmpdir")?>/<%=snpRef.getTargetTable("RES_NAME")%>.err (Default)

Specify a file name to log error record details of the LKM process.

ERR_LOG_HEADER_ROW

  • No (Default)

  • Yes

If this option is set to Yes, then the header row containing the column names is logged to the error records file.

ERR_COL_DELIMITER

, (Default)

Specify the column delimiter to be used for the error records file.

ERR_ROW_DELIMITER

\r\n (Default)

Specify the row delimiter to be used for the error records file.

ERR_TEXT_DELIMITER

' (Default)

Specify the text delimiter to be used for the column data in the error records file.

For example, if the text delimiter is set as ' " ' (double quote), then all the columns in the error records file are delimited by double quotes.

DELETE_TEMPORARY_OBJECTS

  • No (Default)

  • Yes

Set this option to No in order to retain temporary objects (tables, files, and scripts) after integration.

This option is useful for debugging.


18.5.3.3 Extracting Members from Metadata

Oracle Data Integrator provides the LKM Hyperion Essbase METADATA to SQL for extracting members from a dimension in an Essbase application.

To extract members from selected dimensions in an Essbase application, you must set up the Essbase application and load metadata into it before you can extract members from a dimension. Before extracting members from a dimension, ensure that the dimension exists in the Essbase database. No records are extracted if the top member does not exist in the dimension.

Table 18-6 lists the options of the LKM Hyperion Essbase METADATA to SQL. These options define how Oracle Data Integrator Adapter for Oracle's Hyperion Essbase extracts dimension members.

Table 18-6 LKM Hyperion Essbase METADATA to SQL

Option Values Description

MEMBER_FILTER_CRITERIA

IDescendants (Default)

Enable this option to select members from the dimension hierarchy for extraction. You can specify these selection criteria:

  • IDescendants

  • Descendants

  • IChildren

  • Children

  • Member_Only

  • Level0

  • UDA

MEMBER_FILTER_VALUE

Blank (Default)

Enable this option to provide the member name for applying the specified filter criteria. If no member is specified, then the filter criteria are applied on the root dimension member. If the MEMBER_FILTER_CRITERIA value is MEMBER_ONLY or UDA, then the MEMBER_FILTER_VALUE option is mandatory and cannot be an empty string.

LOG_ENABLED

  • No (Default)

  • Yes

If this option is set to Yes, during the LKM process, logging is done to the file specified by the LOG_FILE_NAME option.

LOG_FILE_NAME

<?=java.lang.System.getProperty("java.io.tmpdir")?>/Extract_<%=snpRef.getFrom()%>.log (Default)

Specify a file name to log events of the LKM process.

MAXIMUM_ERRORS_ALLOWED

1 (Default)

Enable this option to set the maximum number of errors to be ignored before stopping the extract.

LOG_ERRORS

  • No (Default)

  • Yes

If this option is set to Yes, during the LKM process, details of error records are logged to the file specified in the ERROR_LOG_FILENAME option.

ERROR_LOG_FILENAME

<?=java.lang.System.getProperty("java.io.tmpdir")?>/Extract_<%=snpRef.getFrom()%>.err (Default)

Specify a file name to log error record details of the LKM process.

ERR_LOG_HEADER_ROW

  • No (Default)

  • Yes

If this option is set to Yes, then the header row containing the column names is logged to the error records file.

ERR_COL_DELIMITER

, (Default)

Specify the column delimiter to be used for the error records file.

ERR_ROW_DELIMITER

\r\n (Default)

Specify the row delimiter to be used for the error records file.

ERR_TEXT_DELIMITER

  • Blank (Default)

  • \"

  • \"

Specify the text delimiter to be used for the data column in the error records file. For example, if the text delimiter is set as ' " ' (double quote), then all the columns in the error records file are delimited by double quotes.

DELETE_TEMPORARY_OBJECTS

  • No (Default)

  • Yes

Set this option to No in order to retain temporary objects (tables, files, and scripts) after integration.

This option is useful for debugging.