By defining new job types, you can extend the utility and flexibility of the Enterprise Manager job system. Adding new job types also allows you to enhance corrective actions. This chapter assumes that you are already familiar with the Enterprise Manager job system.
This chapter includes the following topics:
As a plug-in developer, you are responsible for the following steps with regard to adding job types:
Defining Job Types
You define a job type by using an XML specification that defines the steps in a job, the work (command) that each step performs, and the relationships between the steps.
For more information, see "About Job Types".
Executing long-running commands
The job system allows plug-in developers to write commands that perform their work at the Management Service level.
For more information, see "Executing Long-Running Commands at the Oracle Management Service".
Specifying parameter sources
By default, the job system expects plug-in developers to provide values for all job parameters, either when the job is submitted or at execution time (by adding/updating parameters dynamically).
For more information, see "Specifying Parameter Sources".
Specifying credential information
For more information, see "Specifying Credential Information".
Specifying security information
For more information, see "Specifying Security Information".
Specifying lock information
For more information, see"Specifying Lock Information".
Suspending a job or step
For more information, see "Suspending a Job or Step".
Restarting a job
For more information, see "Restarting a Job".
Enterprise Manager allows you to define jobs of different types that can be executed using the Enterprise Manager job system, thereby extending the number and complexity of the tasks you can automate.
By definition, a job type is a specific category of job that carries out a well-defined unit of work. A job type is uniquely identified by a string. For example, OSCommand may be a job type that runs a remote command. You define a job type by using an XML specification that defines the steps in a job, the work (command) that each step performs, and the relationships between the steps.
Table 7-1 shows some of the Enterprise Manager job types and functions.
Table 7-1 Example of Job Types
Job Type | Purpose
---|---
Backup | Backs up a database.
Backup Management | Performs management functions such as crosschecks and deletions on selected backup copies, backup sets, or files.
CloneHome | Clones an Oracle Home directory.
DBClone | Clones an Oracle Database instance.
DBConfig | Configures monitoring for database releases earlier than release 10g.
Export | Exports database contents or objects within an Enterprise Manager user's schemas and tables.
GatherStats | Generates and modifies optimizer statistics.
OSCommand | Runs an operating system command or script.
HostComparison | Compares the configurations of multiple hosts.
Import | Imports the content of objects and tables.
Load | Loads data from a non-Oracle Database into an Oracle Database.
Move Occupant | Moves occupants of the SYSAUX tablespace to another tablespace.
Patch | Patches an Oracle product.
Recovery | Restores or recovers a database, tablespaces, data files, or archived logs.
RefreshFromMetalink | Allows Enterprise Manager to download patches and critical patch advisory information from My Oracle Support.
Reorganize | Rebuilds fragmented database indexes or tables, moves objects to a different tablespace, or optimizes the storage attributes of specified objects.
Multi-Task | Runs a composite job consisting of multiple tasks.
SQLScript | Runs a SQL or PL/SQL script using SQL*Plus.
An Enterprise Manager job consists of a set of steps, and each step runs a command or script. The job type defines how the steps are assembled: for example, which steps run serially, which execute in parallel, the step order, and the dependencies between steps. You can express a job type, the steps, and commands in XML (for more information, see "Specifying a New Job Type in XML"). The job system then constructs an execution plan from the XML specification that allows it to run the steps in the specified order.
A new job type is specified in XML. The job type specification provides information to the job system on the following:
Steps that make up the job.
Commands or scripts to run in each step.
How steps relate to each other. For example, whether steps run in parallel or serially, or whether one step depends on another step.
User credentials to authenticate the job (typically, the owner of the job must provide these). The job type author must also declare these credentials in the job type XML.
How specific job parameters should be computed (optional).
What locks, if any, a running job execution should attempt to acquire and what happens if the locks are not available.
What privileges users must have in order to submit a job.
The XML job type specification is then added to a metadata plug-in archive. After the metadata plug-in is added to Enterprise Manager, the job system has enough information to schedule the steps of the job, as well as what to run in each step.
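To make the structure concrete, the following is a minimal sketch of a job type specification. It follows the XML conventions shown in the examples later in this chapter; the job type name, step ID, and parameter names are hypothetical, and credential declarations are omitted (see "Specifying Credential Information").

<jobType version="1.0" name="MySampleJobType">
  <paramInfo>
    <!-- The "script" parameter must be supplied when a job of this type is submitted -->
    <paramSource paramNames="script" sourceType="user" required="true" />
  </paramInfo>
  <stepset ID="main" type="serial">
    <!-- Run the supplied script on the first target in the job target list -->
    <step ID="runScript" command="remoteOp">
      <paramList>
        <param name="targetName">%job_target_names%[1]</param>
        <param name="targetType">%job_target_types%[1]</param>
        <param name="remoteCommand">%script%</param>
        <param name="executeSynchronous">false</param>
      </paramList>
    </step>
  </stepset>
</jobType>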
A job type can have one of the following categories depending on how it performs tasks on the targets to which it is applied:
Single-Node
A single-node job type is a job type that runs the same set of steps in parallel on every target on which the job is run. Typically, the target list for these job types is not fixed: they can take any number of targets. The following are examples of single-node job types:
OSCommand
Runs an OS command or script on all of its targets.
SQL
Runs a specified SQL script on all of its targets.
Multi-Node/Combination
A multi-node job type is a job type that performs different, possibly inter-related tasks on multiple targets. Such job types typically operate on a fixed set of targets. For example, a Clone job that clones an application schema might require two targets, a source database and a target database.
Note:
Iterative stepsets may be used for multi-node and combination job types to repeat the same activity over multiple targets.

An Agent-bound job type is one whose jobs cannot be run unless the Agent of one or more targets in the target list is functioning and responding. A job type that fits this category must declare itself to be Agent-bound by setting the agentBound attribute of the jobType XML tag to true.
If a job type is Agent-bound, then the job system does not schedule any executions if one or more of the Agents corresponding to the targets in the target list of the job execution are down or not responding. The job (and all its scheduled steps) is set to a special state called Suspended/Agent down. The job is kept in this state until the Enterprise Manager repository tier detects that the emd has come back up.
At this point, the job and its steps are set to scheduled status again and the job can now execute. By declaring their job types to be Agent-bound, a job-type writer can ensure that the job system will not schedule the job when it has detected that the Agent is down.
Note:
Single-node job types are Agent-bound by default, while multi-node job types are not. If an Agent-bound job has multiple targets in its target list, it is marked as Suspended if even one of the Agents goes down.
A good example of an Agent-bound job type is the OSCommand job type, which executes an OSCommand using the Agent of a specified target. However, not all job types are Agent-bound. For example, a job type that executes SQL in the Management Repository is not Agent-bound.
Enterprise Manager has a heartbeat mechanism that enables the repository tier to quickly determine when a remote emd goes down. After an emd is marked as Down, all Agent-bound job executions that have this emd in their target list are marked Suspended/Agent Down. However, there is still a possibility that the job system will try to dispatch some remote operations in the interval between the emd going down and the Management Repository detecting it. In cases where the Agent cannot be contacted and the step executes, the step is set back to a SCHEDULED state and is retried by the job system. The series of retries continues until the heartbeat mechanism marks the node as down, at which point the job is suspended.
When a job is marked as Suspended/Agent Down, by default the job system keeps the job in that state until the emd comes back up. However, there is a parameter called the grace period which, if defined, can override this behavior. The grace period is the maximum amount of time (in minutes) within which a job execution must start executing. If the job cannot start within this grace period, the job execution is skipped for that schedule.
The only way that a job execution in a Suspended/Agent Down state can resume is for the Agents to come back up. The resume_execution() APIs cannot be used to resume the job.
The unit of execution in a job is called a step. A step has a command, which determines what work the step will be doing. Each command has a Java class, called a command executor, that implements the command. A command also has a set of parameters, which will be interpreted by the command executor.
The job system offers a fixed set of pre-built commands, such as the remote operation command (which executes a command remotely), the file transfer command (which transfers a file between two Agents), and the get file command (which streams a log file produced on the Agent tier into the Management Repository).
Steps are grouped into sets called stepsets. Stepsets can contain steps or other stepsets and can be categorized into the following types:
Serial Stepsets
Serial stepsets are stepsets whose steps execute serially. Steps in a serial stepset can have dependencies on their execution. For example, a job can specify that step S2 executes only if step S1 completes successfully, or that step S3 executes only if S1 fails.
Steps in a serial stepset can have dependencies only on other steps or stepsets within the same stepset. By default, a serial stepset is considered to complete successfully if the last step in the stepset completed successfully, and to have failed or aborted if the last step in the stepset failed or was aborted. This behavior can be overridden by using the stepsetStatus attribute. Overriding is allowed only when the named step is not dependent on another step (that is, it has no successOf, failureOf, or abortOf attribute).
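As an illustrative sketch (with hypothetical step IDs and omitted parameter lists), a serial stepset in which S2 runs only if S1 succeeds and S3 runs only if S1 fails might look like this:

<stepset ID="main" type="serial">
  <step ID="S1" command="remoteOp">
    <!-- paramList omitted -->
  </step>
  <!-- S2 executes only if S1 completes successfully -->
  <step ID="S2" command="remoteOp" successOf="S1">
    <!-- paramList omitted -->
  </step>
  <!-- S3 executes only if S1 fails -->
  <step ID="S3" command="remoteOp" failureOf="S1">
    <!-- paramList omitted -->
  </step>
</stepset>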
Parallel Stepsets
Parallel stepsets are stepsets whose steps execute in parallel (execute simultaneously). Steps in a parallel stepset cannot have dependencies. A parallel stepset is considered to have succeeded if all the parallel steps have completed successfully. It is considered to have aborted if any step within it was aborted. By default, a parallel stepset is considered to have failed if one or more of its constituent steps failed, and no steps were aborted. This behavior can be overridden by using the stepsetStatus attribute.
Iterative Stepsets
Iterative stepsets are special stepsets that iterate over a vector parameter. The target list of a job is available using special, implicit parameters named job_target_names and job_target_types. An iterative stepset iterates over the target list or vector parameter and essentially executes the stepset N times; once for each value of the target list or vector parameter.
Iterative stepsets can execute in parallel (N stepset instances execute simultaneously) or serially (N stepset instances are scheduled serially, one after another). An iterative stepset is said to have succeeded if all its N instances have succeeded. It is said to have aborted if at least one of the N instances aborted, and to have failed if at least one of the N instances failed and none were aborted. An abort always causes an iterative stepset to stop processing further.
Steps within each iterative stepset instance execute serially and can have serial dependencies similar to those within serial stepsets. Iterative serial stepsets have an attribute called iterateHaltOnFailure (not applicable for iterativeParallel stepsets). If this is set to true, the stepset halts at the first failed or aborted child iteration. By default, all iterations of an iterative serial stepset execute, even if some of them fail (iterateHaltOnFailure=false).
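The following sketch shows an iterative stepset that repeats a single step for each target in the job target list. The stepset type value (iterativeSerial) and the iterateParam attribute are assumptions based on the attributes named in this section; the step command and parameters are illustrative.

<stepset ID="iterate_targets" type="iterativeSerial" iterateParam="job_target_names"
         iterateHaltOnFailure="true">
  <step ID="doWork" command="remoteOp">
    <paramList>
      <!-- %job_iterate_index% selects the current target in the iteration -->
      <param name="targetName">%job_target_names%[%job_iterate_index%]</param>
      <param name="targetType">%job_target_types%[%job_iterate_index%]</param>
      <param name="remoteCommand">%remoteCommand%</param>
    </paramList>
  </step>
</stepset>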
Switch Stepsets
Switch stepsets are stepsets where only one of the steps in the stepset is executed based on the value of a specified job parameter. A switch stepset has an attribute called switchVarName, which is a job (scalar) parameter whose value will be examined by the job system to determine which of the steps in the stepset should be executed. Each step in a switch stepset has an attribute called switchCaseVal, which is one of the possible values the parameter specified by switchVarName can have.
The step in the switch stepset that is executed is the one whose switchCaseVal parameter value matches the value of the switchVarName parameter of the switch stepset. Only the selected step in the switch stepset is executed. Steps in a switch stepset cannot have dependencies with other steps or stepsets within the same stepset or outside.
By default, a switch stepset is considered to complete successfully if the selected step in the stepset completed successfully. It is considered to have aborted/failed if the selected step in the stepset was aborted/failed. Also, a switch stepset will succeed if no step in the stepset was selected.
For example, suppose a switch stepset has two steps, S1 and S2, switchVarName is sendEmail, switchCaseVal for S1 is true, and switchCaseVal for S2 is false. If the job is submitted with the job parameter sendEmail set to true, then S1 is executed. If the job is submitted with sendEmail set to false, then S2 is executed. If the value of sendEmail is anything else, the stepset still succeeds but does nothing.
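Expressed as a sketch (the stepset type value and step commands are illustrative; switchVarName and switchCaseVal are the attributes described above), the sendEmail example might look like this:

<stepset ID="emailSwitch" type="switch" switchVarName="sendEmail">
  <!-- Selected when the job parameter sendEmail is "true" -->
  <step ID="S1" command="remoteOp" switchCaseVal="true">
    <!-- paramList omitted -->
  </step>
  <!-- Selected when the job parameter sendEmail is "false" -->
  <step ID="S2" command="remoteOp" switchCaseVal="false">
    <!-- paramList omitted -->
  </step>
</stepset>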
Nested Jobs
One of the steps in a stepset may itself be a reference to another job type. A job type can therefore include other job types within itself. However, a job type cannot reference itself.
Nested jobs are a convenient way to reuse blocks of functionality. For example, performing a database backup could be a job in its own right, with a complicated sequence of steps. However, other job types (such as patch and clone) might use the backup facility as a nested job. With nested jobs, the job type writer can choose to pass all the targets of the containing job to the nested job, or only a subset of the targets. Likewise, the job type can specify whether the containing job should pass all its parameters to the nested job or whether the nested job has its own set of parameters (derived from the parent job's parameters). The status of a nested job is determined by the status of the individual steps and stepsets (and possibly other nested jobs) within the nested job.
The default algorithm by which the status of a stepset is computed from the status of its steps can be altered by the job type, using the stepsetStatus attribute of a stepset. By setting stepsetStatus to the name (ID) of a step, stepset, or job contained within it, a stepset can indicate that its status depends on the status of the specific step, stepset, or job named in the stepsetStatus attribute. This feature is useful if the author of a job type wishes a stepset to succeed even if certain steps within it fail.
A good example is a job whose final step sends e-mail about the status of the job to a list of administrators. The actual status of the job should be set to the status of the step (or steps) that performs the work, not the status of the step that sends the e-mail. Only steps that are unconditionally executed can be named in the stepsetStatus attribute. A step, stepset, or job that is executed as a successOf or failureOf dependency cannot be named in the stepsetStatus attribute.
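A sketch of that pattern, with hypothetical step IDs and commands, is shown below. The stepset takes its status from the step that performs the work, so a failure in the final e-mail step does not change the status of the stepset.

<stepset ID="main" type="serial" stepsetStatus="doWork">
  <!-- The status of the stepset is taken from this step -->
  <step ID="doWork" command="remoteOp">
    <!-- paramList omitted -->
  </step>
  <!-- Runs unconditionally as the last step; its status does not affect the stepset -->
  <step ID="sendStatusEmail" command="remoteOp">
    <!-- paramList omitted -->
  </step>
</stepset>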
The parameters of the job can be passed to steps by enclosing the parameter name in a placeholder (contained within two % symbols). For example, %patchNo% would represent the value of a parameter named patchNo. The job system will substitute the value of this parameter when it is passed to the command executor of a step.
Placeholders can also be defined for vector parameters by using the [] notation. For example, the first value of a vector parameter called patchList is referenced as %patchList%[1], the second is %patchList%[2].
The job system provides a predefined set of placeholders that can be used. These are always prefixed by job_. The following placeholders are provided:
job_iterate_index
The index of the current value of the parameter in an iterative stepset, when iterating over any vector parameter. The index refers to the closest enclosing stepset only. In the case of nested iterative stepsets, the outer iterate index cannot be accessed.
job_iterate_param
The name of the parameter being iterated over, in an iterative stepset.
job_target_names[n]
The job target name at position n. For single-node jobs, the array is always of size one and refers only to the current node the job is executing on, even if the job was submitted against multiple nodes.
job_target_types[n]
The type of the job target at position n. For single-node jobs, the array would always only be of size one and refer only to the current node the job is executing on, even if the job was submitted against multiple nodes.
job_name
The name of the job.
job_type
The type of the job.
job_owner
The Enterprise Manager user that submitted the job.
job_id
The job id. This is a string representing a globally unique identifier (GUID).
job_execution_id
The execution id. This is a string representing a GUID.
job_step_id
The step id. This is an integer.
In addition to the above placeholders, the following target-related placeholders are also supported:
emd_root: The root location of the emd install
perlbin: The location of the (Enterprise Manager) Perl install
scriptsdir: The location of emd-specific scripts
The above placeholders are not interpreted by the job system, but by the Management Agent. For example, when %emd_root% is used in the remoteCommand or args parameters of the remoteOp command, or in any of the file names in the putFile, getFile, and fileTransfer commands, the Management Agent substitutes the actual value of the Management Agent root location for this placeholder.
A step consists of a status (indicating whether it succeeded, failed, or was terminated), some output (the log of the step), and an error message. If a step fails, the command executed by the step can indicate the error in the error message column. By default, the standard output and standard error of an asynchronous remote operation is set to be the output of the step that requested the remote operation.
A step can insert error messages either by using the getErrorWriter() method in CommandManager (synchronous), or by using the insert_step_error_message API in the mgmt_jobs package (typically, this is called by a remotely executing script in a command channel).
This section describes available commands and associated parameters. Targets of any type can be provided for the target names and target type parameters described in the following sections. The job system will automatically identify and contact the Agent that is monitoring the specified targets.
The remote operation command has the identifier remoteOp. The command accepts a credential usage named defaultHostCred. These credentials are required to perform the operation on the host of the target. The binding can be performed as follows:
<step ID="Step_2" command="remoteOp"> <credList> <cred usage="defaultHostCred" reference="osCreds"/> </credList> <paramList> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> <param name="remoteCommand">%remoteCommand%</param> <param name="args">%args%</param> <param name="executeSynchronous">false</param> </paramList> </step>
Here, defaultHostCred is the credential usage understood by the command. For example, the Java code in the command requests the credential using this string, whereas osCreds is the credential usage declared at the top level of the job type.
The remoteOp command takes the following parameters:
remoteCommand: The path name to the executable or script (for example, /usr/local/bin/perl).
args: A comma-separated list of arguments to the remoteCommand.
targetName: The name of the target on which the command is executed. Note that placeholders can be used to represent targets.
targetType: The type of the target on which the command is executed.
executeSynchronous: Defaults to false, meaning that the remote command executes asynchronously on the Agent and the status of the step is updated after the command finishes executing. If set to true, the command executes synchronously, waiting until the Agent completes the process. Typically, this parameter is set to true for quick, short-lived remote operations (such as starting up a listener). For remote operations that take a long time to execute, this parameter should always be set to false.
successStatus: A comma-separated list of integer values that determines the success of the step. If the remote command returns any of these numbers as the exit status, the step is considered successful. The default is zero. These values apply only when executeSynchronous is set to true.
failureStatus: A comma-separated list of integer values that determines the failure of the step. If the remote command returns any of these numbers as the exit status, the step is considered to have failed. The default is all non-zero values. These values apply only when executeSynchronous is set to true.
input: If specified, this is passed as standard input to the remote program.
outputType: Specifies the type of output the remote command is expected to generate. This can have two values, normal (the default) or command. Normal output is output that is stored in the log corresponding to this step and is not interpreted in any way. Command output is output that could contain one or more command blocks, which are XML sequences that map to pre-registered SQL procedure calls. By using the command output option, a remote command can generate command blocks that can be directly loaded into a schema in the Enterprise Manager repository database.
The standard output generated by the executed command is stored by the job system as the output corresponding to this step.
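As a sketch, a short synchronous remote operation that treats exit codes 0 and 2 as success might be configured as follows. The parameter values (such as %lsnrctl_path%) are hypothetical; the parameter names are those described above.

<step ID="startListener" command="remoteOp">
  <credList>
    <cred usage="defaultHostCred" reference="osCreds"/>
  </credList>
  <paramList>
    <param name="targetName">%job_target_names%[1]</param>
    <param name="targetType">%job_target_types%[1]</param>
    <param name="remoteCommand">%lsnrctl_path%</param>
    <param name="args">start</param>
    <!-- Wait for the Agent to finish; exit codes 0 and 2 count as success -->
    <param name="executeSynchronous">true</param>
    <param name="successStatus">0,2</param>
  </paramList>
</step>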
The fileTransfer command transfers a file from one Agent to another. It can also execute a command on the source Agent and transfer its standard output as a file to the destination Agent, or as standard input to a command on the destination Agent. The fileTransfer command is always asynchronous.
<step ID="S1" command="fileTransfer"> <credList> <cred usage=”srcReadCreds” reference=”mySourceReadCreds”/> <cred usage=”dstWriteCreds” reference=”myDestWriteCreds”/> </credList> <paramList> <param name="sourceTargetName">%job_target_names%[1]</param> <param name="sourceTargetType">%job_target_types%[1]</param> <param name="destTargetName">%job_target_names%[2]</param> <param name="destTargetType">%job_target_types%[2]</param> <param name="sourceFile">%sourceFile%</param> <param name="sourceCommand">%sourceCommand%</param> <param name="sourceArgs">%sourceArgs%</param> <param name="sourceInput">%sourceInput%</param> <param name="destFile">%destFile%</param> <param name="destCommand">%destCommand%</param> <param name="destArgs">%destArgs%</param> </paramList> </step>
The command uses two credentials. The srcReadCreds credential is used to read the file from the source, and the dstWriteCreds credential is used to write the file to the destination. The binding can be performed as follows:
<step ID="S1" command="fileTransfer"> <credList> <cred usage=”srcReadCreds” reference=”mySourceReadCreds”/> <cred usage=”dstWriteCreds” reference=”myDestWriteCreds”/> </credList> <paramList> <param name="sourceTargetName">%job_target_names%[1]</param> <param name="sourceTargetType">%job_target_types%[1]</param> <param name="destTargetName">%job_target_names%[2]</param> <param name="destTargetType">%job_target_types%[2]</param> <param name="sourceFile">%sourceFile%</param> <param name="sourceCommand">%sourceCommand%</param> <param name="sourceArgs">%sourceArgs%</param> <param name="sourceInput">%sourceInput%</param> <param name="destFile">%destFile%</param> <param name="destCommand">%destCommand%</param> <param name="destArgs">%destArgs%</param> </paramList> </step>
The fileTransfer command takes the following parameters:
sourceTargetName: The target name corresponding to the source Agent.
sourceTargetType: The target type corresponding to the source Agent.
destTargetName: The target name corresponding to the destination Agent.
destTargetType: The target type corresponding to the destination Agent.
sourceFile: The file to be transferred from the source Agent.
sourceCommand: The command to be executed on the source Agent. If this is specified, then the standard output of this command is streamed to the destination Agent. The sourceFile and sourceCommand parameters cannot both be specified.
sourceArgs: A comma-separated set of command-line parameters for the sourceCommand.
destFile: The location or file name where the file is to be stored on the destination Agent.
destCommand: The command to be executed on the destination Agent. If this is specified, then the stream generated from the source Agent (whether from a file or from a command) is sent to the standard input of this command. The destFile and destCommand parameters cannot both be specified.
destArgs: A comma-separated set of command-line parameters for the destCommand.
The fileTransfer command succeeds (and returns a status code of 0) if the file was successfully transferred between the Agents. If there was an error, it returns error codes appropriate to the reason for failure.
The putFile command affords the capability to transfer large amounts of data from the Management Repository to a file on the Management Agent. The data transferred can come from a blob in the Management Repository, from a file on the file system, or be embedded in the specification (inline).
If a file is being transferred, the location of the file must be accessible from the Management Repository installation. If a blob in a database is being transferred, it must be in a table in the Management Repository database that is accessible to the Management Repository schema user (typically mgmt_rep).
The command accepts a credential usage named defaultHostCred. These credentials are required to write the file on the host of the target. The binding can be performed as follows:
<step ID="S1" command="putFile"> <credList> <cred usage="defaultHostCred" reference="osCreds"/> </credList> <paramList> <param name="sourceType">file</param> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> <param name="sourceFile">%oms_root%/myfile</param> <param name="destFile">%emd_root%/yourfle</param> </paramList> </step>
The putFile command requires the following parameters:
sourceType: The type of the source data. This can be sql, file, or inline.
targetName: The name of the target where the file is to be transferred (destination Agent).
targetType: The type of the destination target.
sourceFile: The file to be transferred from the Management Repository, if the sourceType is set to file. This must be a file that is accessible to the Management Repository installation.
sqlType: The type of SQL data (if the sourceType is set to sql). Valid values are CLOB and BLOB.
accessSql: A SQL statement that is used to retrieve the blob data (if the sourceType is set to sql). For example, "select output from my_output_table where blob_id=%blobid%".
destFile: The location or file name where the file is to be stored on the destination Agent.
contents: If the sourceType is set to inline, this parameter contains the contents of the file. Note that the text could include placeholders for parameters in the form %param%.
The putFile command succeeds if the file was transferred successfully and the status code is set to 0. On failure, the status code is set to an integer appropriate to the reason for failure.
The getFile command transfers a file from a Management Agent to the Management Repository. The file is stored as the output of the step that executed this command.
The command accepts a credential usage named defaultHostCred. These credentials are required to read the file on the host of the target. The binding can be performed as follows:
<step ID="S1" command="getFile"> <credList> <cred usage="defaultHostCred" reference="osCreds"/> </credList> <paramList> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> <param name="sourceFile">%sourceFile%</param> <param name="destType">%destType%</param> <param name="destFile">%destFile%</param> <param name="destParam">%destParam%</param> </paramList> </step>
The getFile command has the following parameters:
sourceFile: The location of the file to be transferred on the Agent.
targetName: The name of the target whose Agent will be contacted to get the file.
targetType: The type of the target.
The getFile command succeeds if the file was transferred successfully and the status code is set to 0. On failure, the status code is set to an integer appropriate to the reason for failure.
The execAndSuspend command accepts a credential usage named defaultHostCred. These credentials are required to perform the operation on the host of the target. The binding can be performed as follows:
<step ID="Ta_S1_suspend" command="execAndSuspend"> <credList> <cred usage="defaultHostCred" reference="osCreds"/> </credList> <paramList> <param name="remoteCommand">%command%</param> <param name="args">%args%</param> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> <param name="suspendTimeout">2</param> </paramList> </step>
Here, defaultHostCred is the credential usage understood by the command. For example, the Java code in the command requests the credential using this string, whereas osCreds is the credential usage declared at the top level of the job type.
The remoteOp, putFile, fileTransfer, and getFile commands return the error codes listed in Table 7-2, "Command Error Codes". In the messages below, "command process" refers to a process that the Agent executes that actually executes the specified remote command and grabs the standard output and standard error of the executed command.
On a UNIX install, this process is called nmo and lives in $EMD_ROOT/bin. It must be SETUID to root before it can be used successfully. This does not pose a security risk since nmo will not execute any command unless it has a valid username and password.
Table 7-2 Command Error Codes

Error Code | Description
---|---
0 | No error.
1 | Could not initialize core module. Most likely, something is wrong with the install or environment of the Agent.
2 | The Agent ran out of memory.
3 | The Agent could not read information from its input stream.
4 | The size of the input parameters was too large for the Agent to handle.
5 | The command process was not setuid to root. (Every UNIX Agent installation has an executable called nmo, which must be setuid root.)
6 | The specified user does not exist on this system.
7 | The password was incorrect.
8 | Could not run as the specified user.
9 | Failed to fork the command process (nmo).
10 | Failed to execute the specified process.
11 | Could not obtain the exit status of the launched process.
12 | The command process was interrupted before exit.
13 | Failed to redirect the standard error stream to standard output.
The job system allows plug-in developers to write commands that perform their work at the Management Service level. For example, consider a command that reads two LOBs from the database, performs various transformations on them, and writes them back. The job system expects such commands to implement an (empty) interface called LongRunningCommand, which is an indication that the command executes synchronously on the middle tier and could potentially execute for a long time. This allows a component of the job system called the dispatcher to schedule the long-running command as efficiently as possible, in such a way that it does not degrade the throughput of the system.
The dispatcher is a component of the job system that executes the various steps of a job when they are ready to execute. The command class associated with each step is called and any asynchronous operations requested by it are dispatched; this process is referred to as dispatching a step. The dispatcher uses thread pools to execute steps. A thread pool is a collection of a specified number of worker threads, any one of which can dispatch a step.
The job system dispatcher uses two thread-pools, namely, a short-command pool for dispatching asynchronous steps and short synchronous steps, and a long-command pool for dispatching steps that have long-running commands. Typically, the short-command pool will have a larger number of threads (for example, 25) compared to the long-running pool (for example, 10).
The assumption is that long-running middle-tier steps will be few compared to the more numerous, short-running commands. However, the sizes of the two pools are fully configurable in the dispatcher to suit the job mix at a particular site. Because multiple dispatchers can be run on different nodes, the site administrator can even dedicate a dispatcher to dispatching only long-running or only short-running steps.
By default, the job system expects plug-in developers to provide values for all job parameters, either when the job is submitted or at execution time (by adding/updating parameters dynamically). Typically, an application supplies these parameters in one of the following ways:
Asking the user of the application at the time of submitting the job.
Fetching parameter values from application-specific data (such as a table) and then inserting them into the job parameter list.
Generating new parameters dynamically through the command blocks in the output of a remote command. These could be used by subsequent steps.
The job system offers the concept of parameter sources so that plug-in developers can simplify the amount of application-specific code they have to write to fetch and populate job or step parameters (such as the second category above). A parameter source is a mechanism that the job system uses to fetch a set of parameters, either when a job is submitted or when it is about to start executing.
The job system supports SQL (a PL/SQL procedure to fetch a set of parameters), credential (retrieval of username and password information from the Enterprise Manager credentials table) and user sources. Plug-in developers can use these pre-built sources to fetch a wide variety of parameters. When the job system has been configured to fetch one or more parameters using a parameter source, the parameters need not be specified in the parameter list to the job when a job is submitted. The job system will automatically fetch the parameters and add them to the parameter list of the job.
A job type can embed information about the parameters that need to be fetched by having an optional paramInfo section in the XML specification. The following is a snippet of a job type that executes a SQL query on an application-specific table to fetch three parameters, a, b, and c.
<jobType version="1.0" name="OSCommand" > <paramInfo> <!-- Set of scalar params --> <paramSource paramNames="a,b,c" sourceType="sql" overrideUser="true"> select name, value from name_value_pair_table where name in ('a', 'b', 'c'); </paramSource> </paramInfo> .... description of job type follows .... </jobType>
As can be seen from the example, the paramInfo section consists of one or more paramSource tags. Each paramSource tag references a parameter source that can be used to fetch one or more parameters. The paramNames attribute is a comma-separated set of parameter names that the parameter source is expected to fetch. The sourceType attribute indicates the source that will be used to fetch the parameters (one of sql, credential, or user).
The overrideUser attribute, if set to true, indicates that this parameter-fetching mechanism will always be used to fetch the value of the parameters, even if the parameter was specified by the user (or application) at the time the job was submitted. The default for the overrideUser attribute is false, meaning that the parameter source mechanism will be disabled if the parameter was already specified when the job was submitted. A parameter source could have additional source-specific properties that describe the fetching mechanism in greater detail and these will be described in the following sections.
The evaluateOnRetry attribute is optional and applicable to all parameter sources. It indicates whether the parameter source must be re-run when a failed execution of this job type is retried. The default setting is false for all sources except credentials (the credentials source ignores the value set and forces true).
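For example, a SQL parameter source can be marked for re-evaluation on retry as follows; the query and parameter names mirror the earlier illustration and are not part of any shipped job type.

<paramInfo>
  <!-- Re-run this source when a failed execution is retried -->
  <paramSource paramNames="a,b,c" sourceType="sql" overrideUser="true" evaluateOnRetry="true">
    select name, value from name_value_pair_table where name in ('a', 'b', 'c');
  </paramSource>
</paramInfo>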
The SQL parameter source allows the plug-in developer to specify a SQL query or a PL/SQL procedure that will fetch a set of parameters.
The job type XML syntax is as follows:
<paramSource sourceType="sql" paramNames="param1, param2, ..."> <sourceParam name="procName" value="MyPackage.MyPLSQLProc"/> <sourceParam name="procParams" value="%a%, %b%[1], ..."/> </paramSource>
The values specified in paramNames are the names of the parameters that are expected to be returned by the PL/SQL procedure specified in procName. The values in procParams specify the list of values to be passed to the PL/SQL procedure.
The definition of the PL/SQL procedure must adhere to the following guidelines:
The PL/SQL procedure must be accessible from the SYSMAN schema
The PL/SQL procedure must have the following signature:
PROCEDURE MySQLProc(p_param_names  MGMT_JOB_VECTOR_PARAMS,
                    p_proc_params  MGMT_JOB_VECTOR_PARAMS,
                    p_param_list   OUT MGMT_JOB_PARAM_LIST)
The list of parameters specified in paramNames are passed as parameter p_param_names to the procedure.
The comma-separated list of values specified in procParams allows the plug-in developer to pass a list of scalar (string/VARCHAR2) values as parameters to the procedure. These values are substituted with job parameter references (if used), bundled into an array (in the order specified in the XML) and passed to the PL/SQL procedure as the second parameter (p_proc_params)
The third parameter is an OUT parameter that should contain the list of parameters fetched by the procedure. The names of the parameters returned by this OUT parameter must match the names specified in p_param_names.
Note:
Although this check is not currently enforced, plug-in developers are strongly advised to ensure that the names of the parameters returned by p_param_list match, or are a subset of, the list of parameter names passed in p_param_names.

The following SQL parameter source creates a parameter named db_role_suffix based on an existing parameter named db_role. It also preserves the type (scalar/vector) of the original parameter and therefore looks up the parameter from the internal tables rather than having its value passed (db_role is passed as a literal rather than as a substituted value). The values of job_id and job_execution_id are passed substituted.
<paramSource sourceType="sql" paramNames="db_role_suffix"> <sourceParam name="procName" value="MGMT_JOB_FUNCTIONS.get_dbrole_ prefix"/> <sourceParam name="procParams" value="%job_id%, %job_execution_id%, db_ role"/> </paramSource>
Within the PL/SQL procedure MGMT_JOB_FUNCTIONS.get_dbrole_prefix, the p_proc_params list contains the values corresponding to the job_id at index 1 and the execution_id at index 2, while the element at index 3 corresponds to the literal text db_role.
Available SQL Paramsource Procedures
The following PL/SQL procedures have been provided by the job system team for use in job types across Enterprise Manager:
is_null
Checks whether the passed job variable is null. A missing variable is also considered to be null. For each variable passed, the procedure creates a corresponding variable with the scalar value true if the passed variable is non-existent or is null. For all other cases, the scalar value false is set. A vector of zero elements is considered non-null.
Example:
<paramSource sourceType="sql" paramNames="a_is_null, b_is_null, c_is_null"> <sourceParam name="procName" value="MGMT_JOB_FUNCTIONS.is_null"/> <sourceParam name="procParams" value="%job_id%, %job_execution_id%, a, b, c"/> </paramSource>
In this example, the job variables a, b, and c are checked for null-ness and the variables a_is_null, b_is_null, and c_is_null are assigned the values of true or false correspondingly.
add_dbrole_prefix
For every variable passed, the procedure prefixes the value with the string AS if the value is neither null nor Normal (case-insensitive); otherwise, it returns null. Therefore, a variable with the value SYSDBA would result in a value of AS SYSDBA, but a value of Normal would return null. If the passed variable corresponds to a vector, the same logic is applied to each individual element of the vector. This is useful when using database credentials to connect to a SQL*Plus session.
Example:
<paramSource sourceType="sql" paramNames="db_role_suffix1, db_role_ suffix2"> <sourceParam name="procName" value="MGMT_JOB_FUNCTIONS.get_dbrole_ prefix"/> <sourceParam name="procParams" value="%job_id%, %job_execution_id%, db_ role1, db_role2"/> </paramSource>
Here, the values of the variables db_role1 and db_role2 are prefixed with AS as necessary and saved into variables db_role_suffix1 and db_role_suffix2 respectively.
The job system also offers a special parameter source called "user", which indicates that a set of parameters must be supplied when a job of that type is submitted. If a parameter is declared to be of source "user" and the "required" attribute is set to "true", the job system will validate that all specified parameters in the source are provided when a job is submitted.
The user source can be evaluated at job submission time or job execution time. When evaluated at submission time, it causes an exception to be thrown if any required parameters are missing. When evaluated at execution time, it causes the execution to abort if there are any missing required parameters.
<paramInfo>
  <!-- Indicate that parameters a, b and c are required params -->
  <paramSource paramNames="a, b, c" required="true" sourceType="user" />
</paramInfo>
The user source can also be used to indicate that a pair of parameters are target parameters. For example:
<paramInfo>
  <!-- Indicate that parameters a, b, c, d, e, f are target params -->
  <paramSource paramNames="a, b, c, d, e, f" sourceType="user">
    <sourceParam name="targetNameParams" value="a, b, c" />
    <sourceParam name="targetTypeParams" value="d, e, f" />
  </paramSource>
</paramInfo>
The example shown above indicates that parameters (a,d), (b,e), (c,f) are parameters that hold target information. Parameter "a" holds target names and "d" holds the corresponding target types. Similarly with parameters "b" and "e", and "c" and "f". For each parameter that holds target names, there must be a corresponding parameter that holds target types. The parameters may be either scalar or vector.
The inline parameter source allows job types to define parameters in terms of other parameters. It is a convenient mechanism for constructing parameters that can be reused in other parts of the job type. For example, the section below creates a parameter called fileName based on the job execution id, presumably for use in other parts of the job type.
<jobType>
  <paramInfo>
    <!-- Indicate that value for parameter fileName is provided inline -->
    <paramSource paramNames="fileName" sourceType="inline">
      <sourceParam name="paramValues" value="%job_execution_id%.log" />
    </paramSource>
  </paramInfo>
  .....
  <stepset ID="main" type="serial">
    <step command="putFile" ID="S1">
      ...
      <param name="destFile">%fileName%</param>
      ...
    </step>
  </stepset>
</jobType>
The following example sets a vector parameter called vparam to be a vector of the values v1, v2, v3, and v4. Only one vector parameter at a time can be set using the inline source.
<jobType>
  <paramInfo>
    <!-- Indicate that value for parameter vparam is provided inline -->
    <paramSource paramNames="vparam" sourceType="inline">
      <sourceParam name="paramValues" value="v1,v2,v3,v4" />
      <sourceParam name="vectorParams" value="vparam" />
    </paramSource>
  </paramInfo>
  ....
The checkValue parameter source allows job types to have the job system check that a specified set of parameters has a specified set of values. If a parameter does not have the specified value, the job system will either terminate or suspend the job.
<paramInfo>
  <!-- Check that the parameter halt has the value true. If not, suspend the job -->
  <paramSource paramNames="halt" sourceType="checkValue">
    <sourceParam name="paramValues" value="true" />
    <sourceParam name="action" value="suspend" />
  </paramSource>
</paramInfo>
The following example checks whether a vector parameter v has the values v1,v2,v3, and v4. Only one vector parameter at a time can be specified in a checkValue parameter source. If the vector parameter does not have those values, in that order, then the job is terminated.
<paramInfo>
  <!-- Check that the vector parameter v has the values v1,v2,v3,v4. If not, abort the job -->
  <paramSource paramNames="v" sourceType="checkValue">
    <sourceParam name="paramValues" value="v1,v2,v3,v4" />
    <sourceParam name="action" value="abort" />
    <sourceParam name="vectorParams" value="v" />
  </paramSource>
</paramInfo>
The properties parameter source fetches a named set of target properties for each of a specified set of targets and stores each set of property values in a vector parameter.
The example below fetches the properties OracleHome and OracleSID for the specified set of targets (dlsun966 and ap952sun) into the vector parameters ohomes and osids, respectively. The first vector value in the ohomes parameter will contain the OracleHome property for dlsun966, and the second will contain the OracleHome property for ap952sun. Likewise with the OracleSID property.
<paramInfo>
  <!-- Fetch the OracleHome and OracleSID property into the vector params ohomes, osids -->
  <paramSource paramNames="ohomes,osids" overrideUser="true" sourceType="properties">
    <sourceParams>
      <sourceParam name="propertyNames" value="OracleHome,OracleSID" />
      <sourceParam name="targetNames" value="dlsun966,ap952sun" />
      <sourceParam name="targetTypes" value="host,host" />
    </sourceParams>
  </paramSource>
</paramInfo>
As with the credentials source, vector parameter names can be provided for the target names and types.
<paramInfo>
  <!-- Fetch the OracleHome and OracleSID property into the vector params ohomes, osids -->
  <paramSource paramNames="ohomes,osids" overrideUser="true" sourceType="properties">
    <sourceParams>
      <sourceParam name="propertyNames" value="OracleHome,OracleSID" />
      <sourceParam name="targetNamesParam" value="job_target_names" />
      <sourceParam name="targetTypesParam" value="job_target_types" />
    </sourceParams>
  </paramSource>
</paramInfo>
Parameter sources are applied in the order they are specified. Parameter substitution (of the form %param%) can be used inside sourceParam tags, but the parameter that is being substituted must exist when the parameter source is evaluated. Otherwise, the job system will substitute an empty string in its place.
The job system offers the facility of storing specified parameters in encrypted form. Parameters that contain sensitive information, such as passwords, must be stored encrypted. A job type can indicate that parameters fetched through a parameter source be encrypted by setting the encrypted attribute to true in a parameter source.
For example:
<paramInfo>
<!-- Fetch params from the credentials table into vector parameters; store them encrypted -->
<paramSource paramNames="vec_usernames,vec_passwords" overrideUser="true"
sourceType="credentials" encrypted="true">
<sourceParams>
<sourceParam name="credentialType" value="patch" />
<sourceParam name="credentialColumns" value="node_username,node_password" />
<sourceParam name="targetNames" value="dlsun966,ap952sun" />
<sourceParam name="targetTypes" value="host,host" />
<sourceParam name="credentialScope" value="system" />
</sourceParams>
</paramSource>
</paramInfo>
A job type can also specify that parameters supplied by the user be stored encrypted:
<paramInfo>
  <!-- Indicate that parameters a, b and c are required params -->
  <paramSource paramNames="a, b, c" required="true" sourceType="user" encrypted="true" />
</paramInfo>
Until Oracle Enterprise Manager 11g Release 1, credentials were represented as two parameters, one for the user name and one for the password. The job type owner could either use a credential parameter source to extract these parameters or define them as user parameters, and then pass the parameters on to the various steps that require them.
This required knowledge of the credential set, credential types, and their columns, along with knowledge of the various authentication mechanisms to be supported by the job type, irrespective of the pool of authentication schemes that could be supported by Enterprise Manager. This restricted the freedom of the job type owner to model just the job type and ignore the authentication required to perform the operations. To overcome these issues and to provide a unified mechanism for specifying credentials in the job type, Oracle has introduced a new concept called credential usage.
A credential usage is the point at which a credential is required to perform an operation. The various tags present in the credential usage are mainly used to render the credential selector UI component. Credential submissions should be made against these usages only.
A credential binding is a reference to a credential by a step. Each step exposes its credential usage, which needs to be fulfilled in the metadata. Therefore, each credential binding refers to a credential usage that is declared in the credential usage section of the metadata. When the step requests its own credential usage, the binding resolves which credential submission in a particular automation entity (job or deployment procedure instance) should be passed to that step.
In earlier releases, job types would have a credential parameter source to extract the username and password from the credentials (JobCredRecord) passed to the job, and these were then available as parameters to the entire job type. This behavior has been deprecated and is superseded by the new credential usage structure.
The following job type example shows the use of a credentials declaration in the job type:
<jobType version="1.0" name="OSCommandNG" singleTarget="true" targetTypes="all" defaultTargetType="host" editable="true" restartable="true" suspendable="true" > <credentials> <credential usage=”hostCreds” authTargetType=”host” defaultCredentialSet=”HostCredsNormal”/> </credentials> <paramInfo> <paramSource sourceType="user" paramNames="command" required="true" evaluateAtSubmission="true" /> <paramSource sourceType="inline" paramNames="TargetName,TargetType" overrideUser="true" evaluateAtSubmission="true"> <sourceParam name="paramValues" value="%job_target_names%[1], %job_target_types%[1]" /> </paramSource> <paramSource sourceType="properties" overrideUser="true" evaluateAtSubmission="false" > <sourceParam name="targetNamesParam" value="job_target_names" /> <sourceParam name="targetTypesParam" value="job_target_types" /> </paramSource> <paramSource sourceType="substValues" paramNames="host_command,host_args,os_script" overrideUser="true" evaluateAtSubmission="false"> <sourceParam name="sourceParams" value="command,args,os_script" /> </paramSource> </paramInfo> <stepset ID="main" type="serial" > <step ID="Command" command="sampleRemoteOp"> <credList> <cred usage=”OS_CRED” reference=”hostCreds”/> </credList> <paramList> <param name="remoteCommand">%host_command%</param> <param name="args">%host_args%</param> <param name="input"><![CDATA[%os_script%]]></param> <param name="largeInputParam">large_os_script</param> <param name="substituteLargeParam">true</param> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> <param name="executeSynchronous">false</param> </paramList> </step> </stepset> </jobType>
The <credentials> section declares a credential usage in the job type. The <credList> section within the step binds that declared usage to the credential usage of the step. Note that the user name and password cannot be extracted by the job system and therefore can no longer be exposed as parameters.
The XSD elements for credential usage and credential binding are explained in Table 7-3 and Table 7-4.
Table 7-3 Credential Usage (credential)

Attribute | Required (Y/N) | Description
---|---|---
usage | Y | Name of the credential through which it will be referred to in the job type. All credential submissions are to be made for this name.
authTargetType | Y | Target type against which authentication is to be performed for any operation. For example, running "ls" on any target means authentication against the host.
defaultCredentialSet | Y | Name of the credential set to be picked up as a credential if no submissions are found for the credential usage when required.
 | N | Name of the credential types which can only be used for specifying the credentials. This is to facilitate filtering of credentials in the credential selector UI component.
 | N | Name that is intended to be shown in the credential selector UI.
 | N | Description that is intended to be shown in the credential selector UI.
Table 7-4 Credential Binding (cred)

Attribute / sub element | Required (Y/N) | Description
---|---|---
usage | Y | Credential usage understood by the step.
reference | Y | Credential usage referred to and present in the declarations of the job type or DP metadata.
Note:
The Credential Binding element can only be used inside the step or job elements in the job type XML.

Typically, a job type performs actions that can be considered "privileged", for example, patching a production database or affecting the software installed in an Oracle home directory or appltop. Accordingly, such job types should only be submitted by Enterprise Manager users that have the appropriate level of privileges to perform these actions.
The job system provides a section called securityInfo, which the author of a job type can use to specify the minimum level of privileges (system and target) that the submitter of a job of this type must have.
Having a securityInfo section allows the author of a job type to encapsulate the security requirements associated with submitting a job in the job type itself. No further code needs to be written to enforce security. Also, it ensures that Enterprise Manager users cannot directly submit jobs of a specific type (using the job system APIs and bypassing the application) unless they have the set of privileges defined by the job type author.
The following example shows what a typical securityInfo section looks like. Suppose you are writing a job type that clones a database. This job type requires two targets, namely, a source database and a destination node on which the destination database will be created. This job type will probably require that the user submitting a clone job have a CLONE FROM privilege on the source (database) and a MAINTAIN privilege on the destination (node).
In addition, the user will require the CREATE TARGET system privilege to be able to introduce a new target into the system. Assuming that the job type is written so that the first target in the target list is the source and the second target in the target list is the destination, the security requirements for such a job type could be addressed as shown below:
<jobType>
  <securityInfo>
    <privilege name="CREATE TARGET" type="system" />
    <privilege name="CLONE FROM" type="target" evaluateAtSubmission="false">
      <target name="%job_target_names%[1]" type="%job_target_types%[1]" />
    </privilege>
    <privilege name="MAINTAIN" type="target" evaluateAtSubmission="false">
      <target name="%job_target_names%[2]" type="%job_target_types%[2]" />
    </privilege>
  </securityInfo>
  <!-- An optional <paramInfo> section will follow here, followed by
       the stepset definition of the job -->
  <paramInfo>
    ....
  </paramInfo>
  <stepset ...>
  </stepset>
</jobType>
The securityInfo section is a set of <privilege> tags. Each privilege can be a system privilege or a target privilege, as indicated by the type attribute of the tag. If the privilege is a target privilege, the targets that the privilege applies to should either be explicitly enumerated, or the target_names_param and target_types_param attributes should be used as shown in the example below. The usual %param% notation can be used to indicate job parameter and target placeholders.

By default, all <privilege> directives in the securityInfo section are evaluated at job submission time, after all submit-time parameter sources have been evaluated. The job system throws an exception if the submitting user lacks any of the privileges specified in the securityInfo section.

Note that execution-time parameter sources will not have been evaluated at job submission time, so take care not to reference job parameters that may not have been evaluated yet. You can also direct the job system to evaluate a privilege directive at job execution time by setting the evaluateAtSubmission attribute to false.
The only reason you might want to do this is if the exact set of targets that the job is operating on is unknown until job execution time (for example, it is computed using an execution-time parameter source). Execution-time privilege directives are evaluated after all execution-time parameter sources are evaluated.
Assume that you are writing a job type that requires MODIFY privilege on each of its targets, but the exact number of targets is unknown at the time of writing. The target_names_param and target_types_param attributes can be used for this purpose. These attributes name vector parameters from which the job system obtains the target names and the corresponding target types. They can be any vector parameters. This example uses the job target list (job_target_names and job_target_types).
<securityInfo>
    <privilege name="MODIFY" type="target" target_names_param="job_target_names" target_types_param="job_target_types" />
</securityInfo>
Executing jobs often need to acquire resources. For example, a job applying a patch to a database may need a mechanism to ensure that other jobs on the database (submitted by other users in the system) are prevented from running while the patch is being applied. In other words, it may want to acquire a lock on the database target so that other jobs that try to acquire the same lock either block or terminate. This allows a patch job, once it starts, to perform its work without disruption.
Sometimes, locks could be at more than one level. A hot backup of a database, for example, can allow other hot backups to proceed (since they do not bring down the database), but cannot allow cold backups or database shutdown jobs to proceed (since they will end up shutting down the database, thereby causing the backup to fail).
A job execution can indicate that it is reserving a resource on a target by acquiring a lock on the target. A lock is really a proxy for reserving some part of the functionality of a target. When an execution acquires a lock, it will block other executions that try to acquire the same lock on the target. A lock is identified by a name and a type and can be of the following types:
Global: These are locks that are not associated with a target. An execution that holds a global lock will block other executions that try to acquire the same global lock (that is, a global lock with the same name).
Target Exclusive: These are locks that are associated with a target. An execution that holds an exclusive lock on a target will block executions that are trying to acquire any named lock on the target, as well as executions trying to acquire an exclusive lock on the target. Target exclusive locks have no name: there is exactly one exclusive lock per target.
Target Named: A named lock on a target is analogous to obtaining a lock on one particular functionality of the target. A named lock has a user-specified name. An execution that holds a named lock will block other executions that are trying to acquire the same named lock, as well as executions that are trying to acquire an exclusive lock on the target.
A job type specifies the locks it wishes to acquire in a lockInfo section. The following example lists the locks that the job is to acquire, their types, and the targets on which it wishes to acquire the locks:
<lockInfo action="suspend"> <lock type="targetExclusive"> <targetList> <target name="%backup_db%" type="oracle_database" /> </targetList> </lock> <lock type="targetNamed" name="LOCK1" > <targetList> <target name="%backup_db%" type="oracle_database" /> <target name="%job_target_names%[1]" type="%job_target_types%[1]" /> <target name="%job_target_names%[2]" type="%job_target_types%[2]" /> </targetList> </lock> <lock type="global" name="GLOBALLOCK1" /> </lockInfo>
This example shows a job type that acquires a target-exclusive lock on a database target whose name is given by the job parameter backup_db. It also acquires a named target lock named "LOCK1" on three targets, namely, the database whose name is stored in the job parameter backup_db, and the first two targets in the target list of the job. Finally, it acquires a global lock named "GLOBALLOCK1". The "action" attribute specifies what the job system should do to the execution if any of the locks in the section cannot be obtained (presumably because some other execution is holding them). Possible values are suspend (all locks are released and the execution state changes to "Suspended:Lock") and abort (the execution terminates). The following points can be made about executions and locks:
An execution can only attempt to obtain locks when it starts (although it is possible to override this by using nested jobs).
An execution can acquire multiple locks. Locks are always acquired in the order specified. Because of this, executions can potentially deadlock each other if they attempt to acquire locks in the wrong order.
Target locks are always acquired on targets in the same order as they are specified in the <targetList> tag.
If a target in the target list is null or does not exist, the execution will terminate.
If an execution attempts to acquire a lock it already holds, it will succeed.
If an execution cannot acquire a lock (usually because some other execution is holding it), it has a choice of suspending itself or terminating. If it chooses to suspend itself, all locks it has acquired so far will be released, and the execution is put in the state Suspended/Lock.
All locks held by an execution will be released when an execution finishes (whether it completes, aborts, or is stopped). There may be several waiting executions for each released lock and these are sorted by time, with the earliest request getting the lock.
When jobs that have the lockInfo section are nested inside each other, the nested job's locks are obtained when the nested job first executes, not when an execution starts. If the locks are not available, the parent execution could be suspended or terminated, possibly after a few steps have already executed.
In this example, two job types called HOTBACKUP and COLDBACKUP perform hot backups and cold backups, respectively, on the database. The difference is that the cold backup brings the database down, but the hot backup leaves it up. Only one hot backup can execute at a time and it should keep out other hot backups as well as cold backups.
When a cold backup is executing, no other job type can execute (since it shuts down the database as part of its execution). A third job type called SQLANALYZE performs scheduled maintenance activity that results in modifications to database tuning parameters (two SQLANALYZE jobs cannot run at the same time).
Table 7-5 shows the incompatibilities between the job types. An 'X' indicates that the job types are incompatible. An 'OK' indicates that the job types are compatible.
Table 7-5 Job Type Incompatibilities
Job Type | HOTBACKUP | COLDBACKUP | SQLANALYZE
---|---|---|---
HOTBACKUP | X | X | OK
COLDBACKUP | X | X | X
SQLANALYZE | OK | X | X
The lockInfo sections for the three job types are described below. The cold backup obtains an exclusive target lock on the database. The hot backup job does not obtain an exclusive lock, but only the named lock "BACKUP_LOCK". Likewise, the SQLANALYZE job obtains a named target lock called "SQLANALYZE_LOCK".

Assuming that the database that the jobs operate on is the first target in the target list of the job, the lock section of the SQLANALYZE job type looks as follows:
<jobType name="SQLANALYZE"> <lockInfo action="abort"> <lock type="targetNamed" name="SQLANALYZE_LOCK" > <targetList> <target name="%job_target_names%[1]" type="%job_target_names%[1]" /> </targetList> </lock> </lockInfo> ........ Rest of the job type follows </jobType>
Since a named target lock blocks all target exclusive locks, executing hot backups will suspend cold backups, but not analyze jobs (since they try to acquire different named locks). Executing SQL analyze jobs will abort other SQL analyze jobs and suspend cold backups, but not hot backups. Executing cold backups will suspend hot backups and abort SQL analyze jobs.
A job type called PATCHCHECK periodically checks a patch stage area and downloads information about newly staged patches into the Management Repository. Two such jobs cannot run at the same time; however, the job is not really associated with any target. The solution is for the job type to attempt to grab a global lock:
<jobType name="PATCHCHECK"> <lockInfo> <lock type="global" name="PATCHCHECK_LOCK" /> </lockInfo> ........ Rest of the job type follows </jobType>
A job type that nests the SQLANALYZE type within itself is shown below. Note that the nested job executes after the first step (S1) executes.
<jobType name="COMPOSITEJOB"> <stepset ID="main" type="serial"> <step ID="S1" ...> .... </step> <job name="nestedsql" type="SQLANALYZE"> .... </job> </stepset> </jobType>
In the previous example, the nested job tries to acquire locks when it executes (since the SQLANALYZE job type has a lockInfo section). If the locks are currently held by other executions, the nested job terminates (as specified by action="abort" in its lockInfo section), which in turn terminates the parent job.
Suspended is a special state that indicates that steps in the job will not be considered for scheduling and execution. A step in an executing job can suspend the job through the suspend_job PL/SQL API. This suspends both the currently executing step and the job itself.
Suspending a job means that all steps in the job that are currently in a "scheduled" state will be marked as "suspended" and will thereafter not be scheduled or executed. All currently executing steps (this could happen, for example, in parallel stepsets) will continue to execute. However, when any currently executing step completes, the next steps in the job will not be scheduled. Instead they will be put in suspended state. When a job is suspended on submission, the above applies to the first steps in the job that would have been scheduled.
Suspended jobs may be restarted at any time by calling the restart_job() PL/SQL API. However, jobs that are suspended because of serialization (locking) rules are not restartable manually. The job system will restart such jobs automatically when currently executing jobs of that job type complete. Restarting a job will effectively change the state of all suspended steps to scheduled and job execution will proceed normally thereafter.
If a job has been suspended, failed or terminated, it is possible to restart it from any given step (typically, the stepset that contains a failed or terminated step). For failed or terminated jobs, what steps actually get scheduled again when a job is restarted depends on which step the job is restarted from.
If a step in a job is resubmitted, it means that it executes regardless of whether the original execution of the step completed or failed. If a stepset is resubmitted, then the first step/stepset/job in the stepset is resubmitted, recursively. Therefore, when a job is resubmitted the entire job is executed again by recursively resubmitting its initial stepset. The parameters and targets used are the same that were used when the job was first submitted. Essentially, the job executes as if it were submitted for the first time with the specified set of parameters and targets. A job can be resubmitted by using the resubmit_job API in the mgmt_jobs package. Jobs can also be resubmitted even if the earlier executions completed successfully.
Job executions that were aborted or failed can be restarted. Restarting a job generally refers to resuming job execution from the last failed step (although the job type can control this behavior using the restartMode attribute of steps/stepsets/jobs). In the common case, steps from the failed job execution that actually succeeded are not re-executed. A failed or terminated job can be restarted by calling the restart_job API in the mgmt_jobs package. A job that completed successfully cannot be restarted.
Restarting a job creates a new execution called the restart execution. The original, failed execution of the job is called the source execution. All parameters and targets are copied over from the source execution to the restart execution. Parameter sources are not reevaluated, unless the original job aborted because of a parameter source failure.
To restart a serial (or iterative stepset), the job system first examines the status of the serial stepset. If the status of the serial stepset is "Completed", then all the entries for its constituent steps are copied over from the source execution to the restart execution. If the status of the stepset is "Failed" or "Aborted", then the job system starts top down from the first step in the stepset.
If the step previously completed successfully in the source execution, it is copied to the restart execution. If the step previously failed or aborted, it is rescheduled for execution in the restart execution. After such a step has finished executing, the job system determines the next steps to execute. These could be successOf or failureOf dependencies, or simply steps/stepsets/jobs that execute after the current step.
If the subsequent step completed successfully in the source execution, then it will not be scheduled for execution again and the job system merely copies the source execution status to the restart execution for that step. It continues in this fashion until it reaches the end of the stepset. It then recomputes the status of the stepset based on the new executions.
To restart a parallel stepset, the job system first examines the status of the parallel stepset. If the status of the stepset is "Completed", then all the entries for its constituent steps are copied over from the source execution to the restart execution. If the status of the stepset is "Failed" or "Aborted", the job system copies over all successful steps in the stepset from the source to the restart execution. It reschedules all steps that failed or aborted in the source execution, in parallel. After these steps have finished executing, the status of the stepset is recomputed.
To restart a nested job, the restart algorithm is applied recursively to the first (outer) stepset of the nested job.
Note that in the previous paragraphs, if one of the entities being considered is a stepset or a nested job, the restart mechanism is applied recursively to the stepset or job. When entries for steps are copied over to the restart execution, the child execution entries point to the same output CLOB entries as the parent execution.
A job type can affect the restart behavior of each step, stepset, or job within it by using the restartMode attribute. This attribute can be set to "failure" (the default) or "always". When it is set to "failure", during the top-down copying process described in the previous section the step, stepset, or job is copied without being re-executed if it succeeded in the source execution. If it failed or terminated in the source execution, it is restarted recursively at the last point of failure.
When the restartMode attribute is set to "always" for a step, the step is always re-executed in a restart, regardless of whether it succeeded or failed in the source execution. This attribute is useful when certain steps in a job must always be re-executed in a restart (for example, a step that shuts down a database prior to backing it up).
For a stepset or nested job, if the restartMode attribute is set to "always", then all steps in the stepset/nested job are restarted, even if they completed successfully in the source execution. If it is set to "failure", then restart is attempted only if the status of the stepset or nested job was set to Failed or Aborted in the source execution.
Note that individual steps inside a stepset or nested job may have their restartMode set to "always" and such steps are always re-executed.
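For example, a backup job type that must always rerun its database shutdown step on restart might mark its steps as follows. This is only a sketch; the step IDs and commands are illustrative:

<stepset ID="main" type="serial">
    <!-- Always re-executed on restart, even if it succeeded in the source execution -->
    <step ID="shutdown_db" command="remoteOp" restartMode="always"> ... </step>
    <!-- Re-executed on restart only if it failed or aborted in the source execution -->
    <step ID="backup_db" command="remoteOp"> ... </step>
</stepset>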
The following sections discuss a range of scenarios related to restarting stepsets.
Consider the serial stepset with the sequence of steps below:
<jobtype ...>
    <stepset ID="main" type="serial" >
        <step ID="S1" ...> ... </step>
        <step ID="S2" ...> ... </step>
        <step ID="S3" failureOf="S2" ...> ... </step>
        <step ID="S4" successOf="S2" ...> ... </step>
    </stepset>
</jobtype>
In the above stepset, assume the source execution had S1 execute successfully and steps S2 and S3 (the failure dependency of S2) fail. When the job is restarted, step S1 is copied to the restart execution from the source execution without being re-executed (since it completed successfully in the source execution). Step S2, which failed in the source execution, is rescheduled and executed. If S2 completes successfully, then S4, its success dependency (which never executed in the source execution), is scheduled and executed. The status of the stepset (and the job) is the status of S4. If S2 fails, then S3 (its failure dependency) is rescheduled and executed (since it had failed in the source execution), and the status of the stepset (and the job) is the status of S3.
Assume that step S1 succeeded, S2 failed, and S3 (its failure dependency) succeeded in the source execution. As a result, the stepset (and therefore the job execution) succeeded. This execution cannot be restarted since the execution completed successfully although one of its steps failed.
Finally, assume that steps S1 and S2 succeed, but S4 (S2's success dependency) failed. Note that S3 is not scheduled in this situation. When the execution is restarted, the job system copies over the executions of S1 and S2 from the source to the restart execution, and reschedules and executes S4. The job succeeds if S4 succeeds.
Consider the following:
<jobtype ...>
    <stepset ID="main" type="serial" stepsetStatus="S2" >
        <step ID="S1" restartMode="always" ...> ... </step>
        <step ID="S2" ...> ... </step>
        <step ID="S3" ...> ... </step>
    </stepset>
</jobtype>
In the previous example, assume that step S1 completes and S2 fails. S3 executes (since it does not have a dependency on S2) and succeeds. The job, however, fails, since the stepset main has its stepsetStatus set to S2. When the job is restarted, S1 is executed all over again, although it completed the first time, since the restartMode of S1 was set to "always". Step S2 is rescheduled and executed, since it failed in the source execution. After S2 executes, step S3 is not rescheduled for execution again, since it executed successfully in the source execution. If the intention is that S3 must execute in the restart execution, its restartMode must be set to "always".
If, in the above example, S1 and S2 succeeded and S3 failed, the stepset main would still succeed (since S2 determines the status of the stepset). In this case, the job would succeed, and cannot be restarted.
Consider the following example:
<jobtype ...>
    <stepset ID="main" type="serial" >
        <stepset type="serial" ID="SS1" stepsetStatus="S1">
            <step ID="S1" ...> ... </step>
            <step ID="S2" ...> ... </step>
        </stepset>
        <stepset type="parallel" ID="PS1" successOf="S1" >
            <step ID="P1" ...> ... </step>
            <step ID="P2" ...> ... </step>
            <step ID="P3" ...> ... </step>
        </stepset>
    </stepset>
</jobtype>
In the above example, let us assume that steps S1 and S2 succeeded (and therefore, stepset SS1 completed successfully). Thereafter, the parallel stepset PS1 was scheduled, and let us assume that P1 completed, but P2 and P3 failed. As a result, the stepset "main" (and the job) failed. When the execution is restarted, the steps S1 and S2 (and therefore the stepset SS1) will be copied over without execution. In the parallel stepset PS1, both the steps that failed (P2 and P3) will be rescheduled and executed.
Now assume that S1 completed and S2 failed in the source execution. Note that stepset SS1 still completed successfully, since the status of the stepset is determined by S1, not S2 (because of the stepsetStatus directive). Now, assume that PS1 was scheduled and P1 failed, while P2 and P3 executed successfully. When this job is restarted, the step S2 will not be re-executed (since the stepset SS1 completed successfully). The step P1 will be rescheduled and executed.
Consider a slightly modified version of the XML in "Example 3":
<jobtype ...>
    <stepset ID="main" type="serial" >
        <stepset type="serial" ID="SS1" stepsetStatus="S1" restartMode="always" >
            <step ID="S1" ...> ... </step>
            <step ID="S2" ...> ... </step>
        </stepset>
        <stepset type="parallel" ID="PS1" successOf="S1" >
            <step ID="P1" ...> ... </step>
            <step ID="P2" ...> ... </step>
            <step ID="P3" ...> ... </step>
        </stepset>
    </stepset>
</jobtype>
In the previous example, assume that S1 and S2 succeeded (and therefore, stepset SS1 completed successfully). Thereafter, the parallel stepset PS1 was scheduled, and let us assume that P1 completed, but P2 and P3 failed. When the job is restarted, the entire stepset SS1 is restarted (since the restartMode is set to "always"). This means that steps S1 and S2 are successively scheduled and executed. Now the stepset PS1 is restarted, and since the restartMode is not specified (it is always "failure" by default), it is restarted at the point of failure, which in this case means that the failed steps P2 and P3 are re-executed, but not P1.
To make a new job type accessible from the Enterprise Manager console Job Activity page, the Job Library page, or both, you must modify specific XML tag attributes.

To display the job type on the Job Activity page, set useDefaultCreateUI to "true" as shown in the following example.
<displayInfo useDefaultCreateUI="true"/>
To display the job type on the Job Library page, in addition to setting the useDefaultCreateUI attribute, you must also set the jobtype tag's editable attribute to "true".
<jobtype name="jobType1" editable="true">
If useDefaultCreateUI="true" and editable="false", then the job type is displayed only on the Job Activity page and not on the Job Library page. Also, the job definition is not editable.
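For example, the following combination displays the job type only on the Job Activity page and keeps the job definition non-editable:

<jobtype name="jobType1" editable="false">
    ...
    <displayInfo useDefaultCreateUI="true"/>
</jobtype>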
As shown in Figure 7-1, setting the useDefaultCreateUI attribute to "true" allows users creating a job to select the newly added job type from the Create Job menu.
Making the job type available from the Job Activity page also permits access to the default Create Job user interface when a user attempts to create a job using the newly added job type.
The displayInfo tag can be added to the job definition file at any point after the closing </stepset> tag and before the closing </jobtype> tag at the end of the job definition file, as shown in the following example.
<jobtype ...>
<stepset ID="main" type="serial" >
<stepset type="serial" ID="SS1" stepsetStatus="S1">
<step ID="S1" ...>
...
</step>
<stepset ID="S2" ...>
...
</step>
</stepset>
<stepset type="parallel" ID="PS1" successOf="S1" >
<step ID="P1" ...>
...
</step>
<step ID="P2" ...>
...
</step>
<step ID="P3" ...>
...
</step>
</stepset>
</stepset>
<displayInfo useDefaultCreateUI="true"/>
</jobtype>
To make the job type available from the Job Library page, you must also set the jobtype tag's editable attribute to "true" in addition to adding the displayInfo tag. This makes the newly added job type a selectable option from the Create Library Job menu.

The editable attribute of the jobtype tag is set at the beginning of the job definition file, as shown in the following example.
<jobtype name="jobType1" editable="true"> <stepset ID="main" type="serial" > <stepset type="serial" ID="SS1" stepsetStatus="S1"> <step ID="S1" ...> ... </step> <stepset ID="S2" ...> ... </step> </stepset> <stepset type="parallel" ID="PS1" successOf="S1" > <step ID="P1" ...> ... </step> <step ID="P2" ...> ... </step> <step ID="P3" ...> ... </step> </stepset> </stepset> <displayInfo useDefaultCreateUI="true"/> </jobtype>
The following sections provide examples of specifying job types in XML.
The following XML describes a job type called jobType1 that defines four steps, S1, S2, S3, and S4. It executes S1 and S2 serially, one after another. It executes step S3 only if step S2 succeeds, and step S4 only if S2 fails. Note that all the steps execute within an iterative stepset, so these actions are performed in parallel on all targets in the job target list of type database.

Note also the use of % signs to indicate parameters: %patchno%, %username%, %password%, and %job_target_name%. The job system will substitute the value of a job parameter named "patchno" in place of %patchno%. Likewise, it will substitute the values of the corresponding parameters for %username% and %password%. %job_target_name% and %job_target_type% are "pre-built" placeholders that are substituted with the name and type, respectively, of the target that the step is currently executing against.
The steps S2, S3, and S4 illustrate how the remoteOp command can be used to execute a SQL*Plus script on the Agent.
The status of a job is failed if any of the following occurs:
S2 fails and S4 fails
S2 succeeds and S3 fails
Note that since S2 executes after S1 (regardless of whether S1 succeeds or fails), the status of S1 does not affect the status of the job in any way.
<jobtype name="jobType1" editable="true" version="1.0"> <credentials> <credential usage="defaultHostCred" authTargetType="host" defaultCredentialSet="DBHostCreds"/> <credential usage="defaultDBCred" authTargetType="oracle_database" credentialTypes=”DBCreds” defaultCredentialSet="DBCredsNormal"/> </credentials> <stepset ID="main" type="iterativeParallel" iterate_param="job_target_types" iterate_param_filter="oracle_database" > <step ID="s1" command="remoteOp""> <credList> <cred usage="defaultHostCred" reference="defaultHostCred"/> </credList> <paramList> <param name="remoteCommand">myprog</param> <param name="targetName">%job_target_names%[%job_iterate_ index%] </param> <param name="targetType">%job_target_types%[%job_iterate_ index%] </param> <param name="args">-id=%patchno%</param> <param name="successStatus">3</param> <param name="failureStatus">73</param> </paramList> </step> <step ID="s2" command="remoteOp""> <credList> <cred usage="defaultHostCred" reference="defaultHostCred"/> </credList> <paramList> <param name="remoteCommand">myprog2</param> <param name="targetName">%job_target_names%[%job_iterate_ index%]</param> <param name="targetType">%job_target_types%[%job_iterate_ index%]</param> <param name="args">-id=%patchno%</param> <param name="successStatus">3</param> <param name="failureStatus">73</param> </paramList> </step> <step ID="s3" successOf="s2" command="remoteOp"> <credList> <cred usage="defaultHostCred" reference="defaultHostCred"/> <cred usage="defaultDBCred" reference="defaultDBCred"> <map toParam="db_username" credColumn="DBUserName"/> <map toParam="db_passwd" credColumn="DBPassword"/> <map toParam="db_alias" credColumn="DBRole"/> </cred> </credList> <paramList> <param name="command">prog1</command> <param name="script"> <![CDATA[ select * from MGMT_METRICS where target_name=%job_target_type%[%job_ iterate_param_index%] ]]> </param> <param name="args">%db_username%/%db_passwd%@%db_alias%</param> <param name="targetName">%job_target_names%[%job_iterate_ index%]</param> <param name="targetType">%job_target_types%[%job_iterate_ index%]</param> <param name="successStatus">0</param> <param name="failureStatus">1</param> </paramList> </step> <step ID="s4" failureOf="s2" command="remoteOp"> <credList> <cred usage="defaultHostCred" reference="defaultHostCred"/> </credList> <paramList> <param name="input"> <![CDATA[ This is standard input to the executed progeam. You can use placeholders for parameters, such as %job_target_name%[%job_iterate_param_index%] ]]> </param> <param name="remoteCommand">prog2</param> <param name="targetName">%job_target_names%[%job_iterate_ index%]</param> <param name="targetType">%job_target_types%[%job_iterate_ index%]</param> <param name="args"></param> <param name="successStatus">0</param> <param name="failureStatus">1</param> </paramList> </step> </stepset> <displayInfo useDefaultCreateUI="true"/> </jobtype>
The following XML describes a job type that has two steps, S1 and S2, that execute in parallel (within a parallel stepset ss1) and a third step, S3, that executes only after both S1 and S2 have completed successfully. This is achieved by placing the step S3 in a serial stepset ("main") that also contains the parallel stepset ss1. This job type is a "multi-node" job. Note the use of %job_target_names%[1] and %job_target_names%[2] in the parameters to the commands. In stepsets other than an iterative stepset, job targets can only be referred to by their position in the targets array (which is ordered).

So, %job_target_names%[1] refers to the first target, %job_target_names%[2] to the second, and so on. The assumption is that most multi-node jobs will expect their targets to be in some order. For example, a clone job might expect the source database to be the first target, and the target database to be the second target. This job fails if any of the following occurs:
The parallel stepset SS1 fails (either one of S1 or S2, or both fail)
Both S1 and S2 succeed, but S3 fails
Also note that the job type has declared itself to be Agent-bound. This means that the job will be set to the Suspended/Agent Down state if the Management Agent for either the first target or the second target goes down.
<jobtype name="jobType2" version="1.0" agentBound="true" > <stepset ID="main" type="serial" editable="true"> <!-- All steps in this stepset ss1 execute in parallel --> <credentials> <credential usage=”hostCreds” authTargetType=”host” defaultCredentialSet=”HostCredsNormal”/> </credentials> <stepset ID="ss1" type="parallel" > <step ID="s1" command="remoteOp" > <credList> <cred usage="defaultHostCred" reference="defaultHostCred"/> </credList> <paramList> <param name="remoteCommand">myprog</param> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> <param name="args">-id=%patchno%</param> <param name="successStatus">3</param> <param name="failureStatus">73</param> </paramList> </step> <step ID="s2" command="remoteOp" > <credList> <cred usage=”defaultHostCred” reference=”hostCreds”/> </credList> <paramList> <param name="remoteCommand">myprog</param> <param name="targetName">%job_target_names%[2]</param> <param name="targetType">%job_target_types%[2]</param> <param name="args">-id=%patchno%</param> <param name="successStatus">3</param> <param name="failureStatus">73</param> </paramList> </step> </stepset> <!-- This step executes after stepset ss1 has executed, since it is inside the serial subset "main" --> <step ID="s3" successOf="ss1" command="remoteOp" > ... </step> </stepset> <displayInfo useDefaultCreateUI="true"/> </jobtype>
The following example defines a new job type called jobType3 that executes jobs of type jobType1 and jobType2, one after another. The job job2 of type jobType2 is executed only if the first job fails. In order to execute another job, the target list and the parameter list must be passed. The targetList tag has an attribute called allTargets which, when set to true, passes along the entire target list passed to this job. By setting allTargets to false, a job type has the option of passing along a subset of its targets to the other job type.
In the example below, jobType3 passes along all its targets to the instance of the job of type jobType1, but only the first two targets in its target list (in that order) to the job instance of type jobType2. There is another attribute called allParams (associated with paramList) that performs a similar function with respect to parameters. If allParams is set to true, then all parameters of the parent job are passed to the nested job. More typically though, the nested job will have a different set of parameters (with different names).
If allParams is set to false (the default), then the job type can name the nested job parameters explicitly and they need not have the same names as those in the parent job. Parameter substitution can be used to express the nested job parameters in terms of the parent job parameters, as shown in this example.
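For illustration, a nested job that forwards the parent's entire target list and all of its parameters might look like the following sketch. The placement of allParams as an attribute of the paramList tag is an assumption based on the description above, analogous to allTargets on targetList:

<job type="jobType1" ID="job1" >
    <targetList allTargets="true" />
    <paramList allParams="true" />
</job>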
Note that dependencies can be expressed between nested jobs just as if they were steps or stepsets. In this example, a job of type jobType3 succeeds if either the nested job job1 succeeds or if job1 fails and job2 succeeds.
<jobType name="jobType3" editable="true" version="1.0"> <stepset ID="main" type="serial"> <job type="jobType1" ID="job1" > <target_list allTargets="true" /> <paramList> <param name="patchno">%patchno%</param> </paramList> </job> <job type="jobType2" ID="job2" failureOf="job1" > <targetList> <target name="%job_target_names%[1]" type="%job_target_types%[1]" /> <target name="%job_target_names%[2]" type="%job_target_types%[2]" /> </targetList> <paramList> <param name="patchno">%patchno%</param> </paramList> </job> </stepset> <displayInfo useDefaultCreateUI="true"/> </jobType>
This example illustrates the use of the generateFile command. Let us assume that you are executing a sequence of scripts, all of which need to source a common file that sets up some environment variables, which are known only at runtime. One way to do this is to generate the variables in a file with a unique name. All subsequent scripts are passed this file name as one of their command-line arguments, which they read to set the needed environment or shell variables.
The first step, S1, in this job uses the generateFile command to generate a file named <app-home>/<execution-id>.env. Since the execution ID of a job is always unique, this ensures a unique file name. It generates three environment variables, ENVVAR1, ENVVAR2, and ENVVAR3, which are set to the values of the job parameters param1, param2, and param3, respectively. These parameters must be set to the right values when the job is submitted. Note that %job_execution_id% is a placeholder provided by the job system, while %app-home% is a job parameter which must be explicitly provided when the job is submitted.
The second step, S2, executes a script called myscript. The first command-line argument to the script is the generated file name. This script must "source" the generated file, which sets the required environment variables, and then go about its other tasks, in the manner shown below:
#!/bin/ksh
ENVFILE=$1
# Execute the generated file, which sets the required environment variables
. $ENVFILE
# The variables set in the file can now be referenced
doSomething $ENVVAR1 $ENVVAR2 $ENVVAR3 ...
The full job type specification is given below. Note the step S3 removes the file that was created by the first step S1. It is important to clean up after yourself when using the putFile and generateFile commands to write temporary files on the Management Agent. The cleanup is done here explicitly as a separate step, but it could also be done by one of the scripts that execute on the remote host.
Additionally, note the use of the securityInfo section that specifies that the user that submits a job of this job type must have maintain privilege on both the targets that the job operates on.
<jobtype name="jobType4" editable="true" version="1.0"> <securityInfo> <privilege name="MAINTAIN" type="target" evaluateAtSubmission="false"> <target name="%job_target_names%[1]" type="%job_target_types%[1]" /> <target name="%job_target_names%[2]" type="%job_target_types%[2]" /> </privilege> </securityInfo> <credentials> <credential usage=”hostCreds” authTargetType=”host” defaultCredentialSet=”HostCredsNormal”/> </credentials> <stepset ID="main" type="serial"> <step ID="s1" command="putFile" > <paramList> <param name=sourceType>inline</param> <param name="destFile">%app-home%/%job_execution_id%.env</param> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> <param name=contents"> <![CDATA[#!/bin/ksh export ENVVAR1=%param1% export ENVVAR2=%param2% export ENVVAR3=%param3% ]]> </param> </paramList> </step> <step ID="s2" command="remoteOp" > <credList> <cred usage=”defaultHostCred” reference=”hostCreds”/> </credList> <paramList> <param name="remoteCommand">myscript</param> <param name="targetName">%job_target_names%[2]</param> <param name="targetType">%job_target_types%[2]</param> <param name="args">%app-home%/%job_execution_id%.env</param> <param name="successStatus">3</param> <param name="failureStatus">73</param> </paramList> </step> <step ID="s3" command="remoteOp" > <credList> <cred usage=”defaultHostCred” reference=”hostCreds”/> </credList> <paramList> <param name="remoteCommand">rm</param> <param name="targetName">%job_target_names%[2]</param> <param name="targetType">%job_target_types%[2]</param> <param name="args">-f, %app-home%/%job_execution_id%.env</param> <param name="successStatus">0</param> </paramList> </step> </stepset> <displayInfo useDefaultCreateUI="true"/> </jobtype>
This example illustrates the use of the repSQL command to execute SQL statements and anonymous PL/SQL blocks against the Management Repository. The job type specification below calls a simple SQL statement in the first step S1, and a PL/SQL procedure in the second step. Note the use of the variables %job_id% and %job_name%, which are special job-system placeholders. Other job parameters can be escaped similarly. Also note the use of bind parameters in the SQL queries. The parameters sqlinparam[n] can be used to specify bind parameters. There must be one parameter of the form sqlinparam[n] for each bind parameter. Bind parameters should be used as far as possible to make optimal use of database resources.
<jobtype name="repSQLJob" editable="true" version="1.0"> <stepset ID="main" type="serial"> <step ID="s1" command="repSQL" > <paramList> <param name="sql">update mytable set status='executed' where name=?</param> <param name="sqlinparam1">%job_name%</param> </paramList> </step> <step ID="s2" command="repSQL" > <paramList> <param name="sql">begin mypackage.job_done(?,?,?); end;</param> <param name="sqlinparam1">%job_id%</param> <param name="sqlinparam2">3</param><param name="sqlinparam3">mgmt_rep</param> </paramList> </step> </stepset> <displayInfo useDefaultCreateUI="true"/> </stepset> </jobtype>
This example illustrates the use of the switch stepset. The main stepset of this job is a switch stepset whose switchVarName is a job parameter called stepType. The possible values (switchCaseVal) that this parameter can have are "simpleStep", "parallelStep", and "OSJob", which select, respectively, the step SWITCHSIMPLESTEP, the parallel stepset SWITCHPARALLELSTEP, or the nested job J1.
<jobType version="1.0" name="SwitchSetJob" editable="true"> <stepset ID="main" type="switch" switchVarName="stepType" > <credentials> <credential usage=”hostCreds” authTargetType=”host” defaultCredentialSet=”HostCredsNormal”/> </credentials> <step ID="SWITCHSIMPLESTEP" switchCaseVal="simpleStep" command="remoteOp"> <credList> <cred usage=”defaultHostCred” reference=”hostCreds”/> </credList><paramList> <param name="remoteCommand">%command%</param> <param name="args">%args%</param> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> </paramList> </step> <stepset ID="SWITCHPARALLELSTEP" type="parallel" switchCaseVal="parallelStep"> <step ID="P11" command="remoteOp" > <credList> <cred usage=”defaultHostCred” reference=”hostCreds”/> </credList> <paramList> <param name="remoteCommand">%command%</param> <param name="args">%args%</param> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> </paramList> </step> <step ID="P12" command="remoteOp" > <credList> <cred usage=”defaultHostCred” reference=”hostCreds”/> </credList> <paramList> <param name="remoteCommand">%command%</param> <param name="args">%args%</param> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> </paramList> </step> </stepset> <job ID="J1" type="OSCommandSerial" switchCaseVal="OSJob" > <paramList> <param name="command">%command%</param> <param name="args">%args%</param> </paramList> <targetList> <target name="%job_target_names%[1]" type="%job_target_types%[1]" /> </targetList> </job> </stepset> <displayInfo useDefaultCreateUI="true"/> </jobType>
This example shows the use of the <securityInfo> tag to ensure that only users that have CLONE FROM privilege over the first target and MAINTAIN privilege over the second target will be able to submit jobs of the following type:
<jobType name="Clone" editable="true" version="1.0" > <securityInfo> <privilege name="CREATE TARGET" type="system" /> <privilege name="CLONE FROM" type="target" evaluateAtSubmission="false" > <target name="%job_target_names%[1]" type="%job_target_types%[1]" /> </privilege> <privilege name="MAINTAIN" type="target" evaluateAtSubmission="false"> <target name="%job_target_names%[2]" type="%job_target_types%[2]" /> </privilege> </securityInfo> <!-- An optional <paramInfo> section will follow here, followed by the stepset definition of the job --> <paramInfo> .... </paramInfo> <stepset ...> ....... </stepset> <displayInfo useDefaultCreateUI="true"/> </jobType>
The following shows an example of a scenario where credentials are passed to a nested job in the job type XML:
<jobType version="1.0" name="SampleJobType001" singleTarget="true" editable="true" defaultTargetType="host" targetTypes="all"> <credentials> <credential usage="osCreds" authTargetType="host" defaultCredentialSet="HostCredsNormal" credentialTypes="HostCreds"> <displayName nlsid="LABEL_NAME">OS Credentials</displayName> <description nlsid="LABEL_DESC">Please enter credentials.</description> </credential> </credentials> <stepset ID="main" type="serial"> <step ID="Step" command="remoteOp"> <credList> <cred usage="defaultHostCred" reference="osCreds" /> </credList> <paramList> <param name="targetName">%job_target_names%[1]</param> <param name="targetType">%job_target_types%[1]</param> <param name="remoteCommand">/bin/sleep</param> <param name="args">1</param> </paramList> </step> <job ID="Nested_Job" type="OSCommand"> <credList> <cred usage="defaultHostCred" reference="osCreds" /> </credList> <targetList allTargets="true" /> <paramList> <param name="command">/bin/sleep</param> <param name="args">1</param> </paramList> </job> </stepset> </jobType>
This section provides a brief discussion on issues you should consider when designing your job type. These issues may impact the performance of your job type as well as the overall job system.
The following issues are important in relation to the use of parameter sources:
Parameter sources are a convenient way to obtain needed parameters from known sources, such as the Management Repository or the credentials table. Parameter sources should be used only for quick queries that fetch information that is already stored elsewhere.
Parameter sources that are evaluated at job execution time will, in general, affect the throughput of the job dispatcher and must be used with care. In some cases, fetching parameters at execution time may be unavoidable. If you do not care whether the parameters are fetched at execution time or at submission time, set evaluateAtSubmission to false.
When executing SQL queries to obtain parameters (using the SQL parameter source), the usual performance improvement guidelines apply. These include using indexes only where necessary and avoiding the joining of large tables.
To package a new job type with a metadata plug-in, you should adhere to the following implementation guidelines:
New job types packaged with a metadata plug-in will have two new files:
Job type definition XML file: used by the job system during metadata plug-in deployment to define your new job type. There is one XML file for each job type.
Job type script file: installed on selected Agents during metadata plug-in deployment. A single script may be shared amongst different jobs.
The following two properties must be set to "true" in the first line of the job type definition XML file:
agentBound
singleTarget
Here is an example:
<jobType version="1.0" name="PotatoUpDown" singleTarget="true" agentBound="true" targetTypes="potatoserver_os">
Because the use of Java for a new job type is not supported for job types packaged with a metadata plug-in, new job types are agentBound and perform their work through a script delivered to the Agent (the job type script file). The job type definition XML file contains a reference to the job type script file and will execute it on the Agent whenever the job is run from the Enterprise Manager console.
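For illustration, a minimal agent-bound job type of this kind might look like the following sketch, reusing the PotatoUpDown example above. The %scriptsDir% and %action% parameters and the script name are assumptions; the actual values depend on how the plug-in stages and invokes its job type script file:

<jobType version="1.0" name="PotatoUpDown" singleTarget="true" agentBound="true" targetTypes="potatoserver_os">
    <credentials>
        <credential usage="hostCreds" authTargetType="host" defaultCredentialSet="HostCredsNormal"/>
    </credentials>
    <stepset ID="main" type="serial">
        <step ID="runScript" command="remoteOp">
            <credList>
                <cred usage="defaultHostCred" reference="hostCreds"/>
            </credList>
            <paramList>
                <!-- The script location and name below are hypothetical placeholders -->
                <param name="remoteCommand">%scriptsDir%/potato_up_down.pl</param>
                <param name="args">%action%</param>
                <param name="targetName">%job_target_names%[1]</param>
                <param name="targetType">%job_target_types%[1]</param>
                <param name="successStatus">0</param>
            </paramList>
        </step>
    </stepset>
</jobType>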
Adding a Job Type to an Oracle Plug-in Archive (OPAR)
After you have created the job type definition XML file and modified the target type definition file, you can add your files to an Oracle Plug-in Archive (OPAR) just as you would any other target type. See Chapter 13, "Validating, Packaging, and Deploying the Plug-in" for more information.
Release 11.1 Job Types Versus Enterprise Manager Cloud Control 12c Job Types
In Oracle Enterprise Manager Cloud Control 12c, the job type parser has moved to an XSD-based parser. However, Enterprise Manager release 11.1 job types should work, as there are no major changes required to enable an 11.1 job type to be parsed with a Cloud Control 12c parser.
The following are some of the known changes required by the Cloud Control 12c parser in the job type XML:
<jobtype> should change to <jobType> (see the example following this list).
<paramInfo> should not contain <stepset>.
The <parameterUrisource> tag should be self-closed, as in <parameterUrisource attr1="" attr2="" />, and not written as <parameterUrisource attr1="" attr2=""> </parameterUrisource>.
Empty <paramInfo/> tags should be removed.
A stepset does not contain successOf or failureOf attributes.
Make sure the ID specified in stepDisplayInfo exists in the job type (that is, a step with that ID should exist).
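For instance, an 11.1 fragment and its Cloud Control 12c equivalent might look as follows; the attribute names and values are illustrative only:

Release 11.1 style:

<jobtype name="MyJobType" version="1.0">
    ...
    <parameterUrisource attr1="value1" attr2="value2"> </parameterUrisource>
    ...
</jobtype>

Cloud Control 12c style:

<jobType name="MyJobType" version="1.0">
    ...
    <parameterUrisource attr1="value1" attr2="value2" />
    ...
</jobType>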
In Cloud Control 12c, job types can be registered through an emctl command, as shown in the following command information:
emctl register oms metadata -service jobTypes -file <file name with absolute path> -sysman <sysman password> -pluginId <plugin id>
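For example, an invocation might look like the following, where the file path and plug-in ID are illustrative placeholders:

emctl register oms metadata -service jobTypes -file /u01/myplugin/metadata/jobTypes/potato_up_down.xml -sysman <sysman password> -pluginId oracle.sysman.xpot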