Oracle® Enterprise Manager Framework, Host, and Services Metric Reference Manual 12c Release 1 (12.1.0.2.0) Part Number E25162-03
The OMS and Repository, Oracle Management Service, OMS Console, and OMS Platform targets expose metrics that are useful for monitoring the Oracle Enterprise Manager Management Service (OMS) and Management Repository.
This category provides information on Loader usage and performance, including throughput and the number of rows processed in the last hour.
This is the number of rows processed.
The mgmt_system_performance_log table in the Management Repository.
If this number continues to rise over time, then the user may want to consider adding another Management Service or increasing the number of loader threads for this Management Service. To increase the number of loader threads, add or change the em.loader.threadPoolSize entry in the emoms.properties file. The default number of threads is 2. Values between 2 and 10 are common.
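The following is a minimal sketch of such an entry; the value 4 is only an example, and the location of the emoms.properties file depends on the installation:
em.loader.threadPoolSize=4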
This is the amount of time in seconds that the loader thread has been running in the past hour.
The mgmt_system_performance_log table in the Management Repository.
If this number is steadily increasing along with the Loader Throughput (rows per hour) metric, then perform the actions described in the User Action section of the help topic for the Loader Throughput (rows per hour) metric. If this number increases but the loader throughput does not, check for resource constraints, such as high CPU utilization by some process, deadlocks in the Management Repository database, or processor memory problems.
This category of metrics provides information on active Management Servlets.
The total number of notifications delivered by the Management Service over the previous 10 minutes. The metric is collected every 10 minutes, and no alerts are generated.
The mgmt_system_performance_log table in the Management Repository.
If the number of notifications processed is continually increasing over several days, then you may want to consider adding another Management Service.
This indicates the average number of EM console accesses per minute. The metric is collected every 10 minutes, and no alerts are generated.
This metric is obtained using the following query of the mgmt_oms_parameters table in the Management Repository.
SELECT value FROM mgmt_oms_parameters WHERE name = 'loaderOldestFile'
None.
This category of metrics provides information on the agent status.
The number of times the agent has been restarted in the past 24 hours.
Derived by:
SELECT t.target_name, COUNT(*) down_count
  FROM mgmt_availability a, mgmt_targets t
 WHERE a.start_collection_timestamp = a.end_collection_timestamp
   AND a.target_guid = t.target_guid
   AND t.target_type = MGMT_GLOBAL.G_AGENT_TARGET_TYPE
   AND a.start_collection_timestamp > SYSDATE-1
 GROUP BY t.target_name
If this number is high, check the agent logs to see if a system condition exists causing the system to bounce. If an agent is constantly restarting, the Targets Not Uploading Data metric may also be set for targets on the agents with restart problems. Restart problems may be due to system resource constraints or configuration problems.
This category of metrics provides information on configuration.
The number of administrators defined for Enterprise Manager.
The mgmt_created_users table in the Management Repository.
The number of groups defined for Enterprise Manager.
The mgmt_targets table in the Management Repository.
If you have a problem viewing the All Targets page, you may want to check the number of roles and groups.
The number of roles defined for Enterprise Manager.
The mgmt_roles table in the Management Repository.
If you have a problem viewing the All Targets page, you may want to check the number of roles and groups.
The number of targets defined for Enterprise Manager.
The mgmt_targets table in the Management Repository.
This metric is informational only.
This is the total number of MB that the Management Repository tablespaces are currently using.
The dba_data_files table in the Management Repository.
This metric is informational only.
The rate at which targets are being created. The target addition rate should be greatest shortly after EM is installed and should then increase briefly whenever a new agent is added. If the rate is increasing abnormally, you should check for abnormal agent or administrator activity and verify that the targets are useful. Check that group creation is not being overused.
The metric is derived from the mgmt_targets table as the current target count minus the target count at the last sampling.
This metric is informational only.
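As a rough illustration of the current-count half of that calculation, a query of the following form returns the present number of targets (the count from the previous sampling is retained by the collection mechanism and is not shown here); an analogous query against mgmt_created_users yields the current user count used by the User Addition Rate metric:
SELECT COUNT(*) FROM mgmt_targets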
The total MB allocated to the Management Repository tablespaces. This will always be greater than or equal to the space used.
The dba_free_space table in the Management Repository.
This metric is informational only.
The rate at which users are being created. The user addition rate should be low. If the rate is increasing abnormally, you should check for abnormal administrator activity.
The metric is derived from the mgmt_created_users table as the current user count minus the user count at the last sampling.
This metric is informational only.
This category of metrics provides information on the DBMS job status.
This metric flags a DBMS job whose schedule is invalid. A schedule is marked 'Invalid' if it is scheduled for more than one hour in the past, or more than one year in the future. An invalid schedule means that the job is in serious trouble.
The user_schedule_jobs table in the Management Repository.
None.
The percentage of the past hour the job has been running.
The mgmt_system_performance_log table in the Management Repository.
If the value of this metric is greater than 50%, then there may be a problem with the job. Check the System Errors page for errors reported by the job. Check the Alerts log for any alerts related to the job.
The down condition equates to the dbms_job "broken" state. The Up Arrow means not broken.
The broken column is from the all_jobs table in the Management Repository.
Determine the reason for the dbms job failure. Once the reason for the failure has been determined and corrected, the job can be restarted through the dbms_job.run command.
To determine the reason the DBMS job failed, take the following steps. In this example, 'yourDBMSjobname' is the displayed name of the down job and 'myjob' is its internal job name.
1. Copy down the DBMS Job Name that is down from the row in the table. This DBMS Job Name is 'yourDBMSjobname' in the following example.
2. Log on to the database as the repository owner.
3. Issue the following SQL statement:
   select dbms_jobname from mgmt_performance_names where display_name='yourDBMSjobname';
4. If the dbms_jobname is 'myjob', then issue the following SQL statement:
   select job from all_jobs where what='myjob';
5. Using the job id returned, look for ORA-12012 messages for this jobid in the alert log and trace files and try to determine and correct the problem.
6. The job can be manually restarted through the following database command:
   execute dbms_job.run (jobid);
This category of metrics provides information on the Incident target.
The alert log error trace file is the name of an associated server trace file generated when the problem generating this incident occurred. If no additional trace file was generated, this field will be blank.
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 5 Minutes |
The alert log error trace file name is extracted from the database alert log.
The alert log error trace file name is provided so that the user can look in this file for more information about the problem that occurred.
The fully specified (includes directory path) name of the current XML alert log file.
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
This name is retrieved by searching the OMS ADR_HOME/alert directory for the most recent (current) log file.
The alert log file name is provided so that the user can look in this file for more information about the problem that occurred.
A diagnostic incident is a single occurrence of a problem (critical error) that occurred in the OMS process while using Enterprise Manager.
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
Text describing a diagnostic incident is extracted from the database alert log, an XML file in the Automatic Diagnostic Repository (ADR) that contains a chronological list of database messages and errors.
Diagnostic incidents usually indicate software errors and should be reported to Oracle using the Enterprise Manager Support Workbench.
The Execution Context ID (ECID) tracks requests as they move through the application server. This information is useful for diagnostic purposes because it can be used to correlate related problems encountered by a single user attempting to accomplish a single task.
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The ECID is extracted from the database alert log.
Diagnostic incidents usually indicate software errors and should be reported to Oracle using the Enterprise Manager Support Workbench. When packaging problems using Support Workbench, the ECID will be used by Support Workbench to correlate and include any additional problems in the package.
An optional field (may be empty) assessing the impact of the problem that occurred.
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The impact is extracted from the database alert log.
This field is purely informational. Diagnostic incidents usually indicate software errors and should be reported to Oracle using the Enterprise Manager Support Workbench.
The Incident ID is a number that uniquely identifies a diagnostic incident (single occurrence of a problem).
The following table shows how often the metric's value is collected.
Target Version | Collection Frequency |
---|---|
All Versions | Every 15 Minutes |
The incident ID is extracted from the database alert log.
Diagnostic incidents usually indicate software errors and should be reported to Oracle using the Enterprise Manager Support Workbench. Problems are one or more occurrences of the same incident. Using Support Workbench, the incident ID can be used to select the correct Problem to package and send to Oracle. Using the command line tool ADRCI, the incident ID can also be used with the show incident command to get details about the incident.
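For example, from the ADRCI prompt, the details of a given incident can be displayed with a command of the following form (the incident ID 12345 is a placeholder for the value reported by this metric):
show incident -mode detail -p "incident_id=12345"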
This category of metrics provides information on the performance of job dispatcher.
The job dispatcher is responsible for scheduling jobs as required. It starts up periodically and checks whether jobs need to be run. If the job dispatcher is running for more than the threshold levels, it is having problems handling the job load.
This is the sum of the amount of time the job has run over the last hour from the mgmt_system_performance_log table in the Management Repository divided by one hour, multiplied by 100 to arrive at the percent.
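For example, if the dispatcher ran for a total of 1,800 seconds during the hour, the value reported is (1800 / 3600) * 100 = 50 percent.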
This metric is informational only.
This category of metrics provides information on active agents.
The number of active agents in the repository. If this number is 0, then Enterprise Manager is not monitoring any external targets, which may indicate a problem if it is unexpected.
The number of agents whose status is up in the mgmt_current_availability table.
If no agents are running, determine the reasons they are down, correct if needed and restart. Log files in the agent's $ORACLE_HOME/sysman/log directory can provide information about possible causes of agent shutdown.
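A hypothetical query of the following form illustrates the idea; the current_status column and the value 1 for the up state are assumptions about the repository schema and may differ between releases:
SELECT COUNT(*)
  FROM mgmt_current_availability a, mgmt_targets t
 WHERE a.target_guid = t.target_guid
   AND t.target_type = MGMT_GLOBAL.G_AGENT_TARGET_TYPE
   AND a.current_status = 1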
This category of metrics provides information on the performance of repository collections. They are collected by background DBMS jobs in the repository database called collection workers. Repository metrics are subdivided into long-running and short-running metrics, known as task classes (short task class and long task class). Some collection workers (default 1) process the short task class and some (default 1) process the long task class. Repository collection performance metrics measure the performance data for repository metric collections for each task class. This metric is a repository metric and hence is collected by the collection workers.
The total amount of time in seconds the collection workers were running in the last 10 minutes. This is an indicator of the load on the repository collection subsystem. An increase could be due to one of two reasons: the number of collections has increased, or some of the metrics are taking a long time to complete. Relate this metric to the Collections Processed metric to determine which is the case.
The data for this metric comes from entries in the mgmt_system_performance_log table where job_name is MGMT_COLLECTION.Collection Subsystem.
The total number of collections that were processed in the last 10 minutes.
The data for this metric comes from entries in the mgmt_system_performance_log table where job_name is MGMT_COLLECTION.Collection Subsystem.
The total number of collections that were waiting to run at the point this metric was collected. An increasing value means the collection workers are falling behind and the number of workers may need to be increased. The number of collections waiting to run may be high initially on system startup and should ideally go down towards zero.
The data for this metric comes from entries in the mgmt_collection_tasks table, which holds the list of all collections.
This metric is informational only.
The total number of workers that were processing the collections.
The data for this metric comes from entries in the mgmt_collection_workers table.
This metric is informational only.
This category of metrics provides information on the Repository Job Dispatcher.
The number of job steps that were ready to be scheduled but could not be because all the dispatchers were busy.
When this number grows steadily, it means the job scheduler is not able to keep up with the workload.
This is the sum of job steps whose next scheduled time is in the past - job steps eligible to run but not yet running. If the graph of this number increases steadily over time, the user should take one of the following actions:
Increase the em.jobs.shortPoolSize, em.jobs.longPoolSize, and em.jobs.systemPoolSize properties in the web.xml file (see the sketch after this list of actions). The web.xml file specifies the number of threads allocated to process different types of job steps. The short pool size should be larger than the long pool size.
Property | Default Value | Recommended Value | Description |
---|---|---|---|
em.jobs.shortPoolSize | 10 | 10 - 50 | Steps taking less than 15 minutes |
em.jobs.longPoolSize | 8 | 8 - 30 | Steps taking more than 15 minutes |
em.jobs.systemPoolSize | 8 | 8 - 20 | Internal jobs (for example, agent ping) |
Add another Management Service on a different host.
Check the job step contents to see if they can be made more efficient.
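The following is a hypothetical sketch of how the pool size entries might look if they are defined as standard servlet context parameters in web.xml; the exact element names and location used by the Management Service are not confirmed by this document, and the values shown are only examples:
<context-param>
  <param-name>em.jobs.shortPoolSize</param-name>
  <param-value>50</param-value>
</context-param>
<context-param>
  <param-name>em.jobs.longPoolSize</param-name>
  <param-value>30</param-value>
</context-param>
<context-param>
  <param-name>em.jobs.systemPoolSize</param-name>
  <param-value>20</param-value>
</context-param>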
This page indicates whether Enterprise Manager is up or down. It contains historical information for periods in which it was down.
This metric indicates whether Enterprise Manager is up or down. If you have configured the agent monitoring the oracle_emrep target with a valid email address, you will receive an email notification when Enterprise Manager is down.
The following table shows how often the metric's value is collected and compared against the default thresholds. The 'Consecutive Number of Occurrences Preceding Notification' column indicates the consecutive number of times the comparison against thresholds should hold TRUE before an alert is generated.
Table 3-1 Metric Summary Table
Target Version | Evaluation and Collection Frequency | Upload Frequency | Operator | Default Warning Threshold | Default Critical Threshold | Consecutive Number of Occurrences Preceding Notification | Alert Text |
---|---|---|---|---|---|---|---|
All Versions | Every 5 Minutes | Not Uploaded | = | Not Defined | 0 | 1 | %Message% |
sysman/admin/scripts/emrepresp.pl
This metric checks for the following:
Is the Management Repository database up and accessible?
If the Management Repository database is down, start it. If an 'Invalid Username or Password' error is displayed, verify that the name and password for the oracle_emrep target are the same as the repository owner's name and password.
Is at least one Management Service running?
If a Management Service is not running, start one.
Is the Repository Metrics dbms job running?
If the DBMS job is down or has an invalid schedule, it should be restarted by following the instructions in the User Action section of the help topic for the DBMS Job Bad Schedule metric.
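For the second check, a Management Service is typically started with the emctl utility from the OMS home; the path below is illustrative and depends on the installation:
<OMS_HOME>/bin/emctl start oms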
This category provides information on any initialization errors encountered by services such as the Loader or Events services.
This metric is generated if any of the OMS services (such as Loader, Notification, or PingRecorder) fails to initialize during OMS startup. At present this metric is used only by the Loader service.
This metric has two key columns and one non-key column:
The key columns are Management Service and Service Name. The key values uniquely identify the Service instance that has initialization errors.
The non-key column is Service Status. This column indicates whether the Service is running fine or encountered an error during OMS startup.