Tuesday, March 27, 2012

DBA Scripts

Here are some of the scripts I use regularly. Some of them are not necessary when using Oracle Enterprise Manager (OEM), but if, like me, you hate waiting around for OEM to start up, you'll often get the job done more quickly with these. You can look at the individual scripts, or download the lot here.

 10g...
active_session_waits.sql db_usage_hwm.sql dynamic_memory.sql
event_histogram.sql feature_usage.sql generate_multiple_awr_reports.sql
job_chain_rules.sql job_chain_steps.sql job_chains.sql
job_classes.sql job_programs.sql job_running_chains.sql
job_schedules.sql jobs.sql jobs_running.sql
lock_tree.sql segment_advisor.sql services.sql
session_waits.sql sga_buffers.sql sysaux_occupants.sql
test_calendar_string.sql window_groups.sql windows.sql

11g...
extended_stats.sql identify_trace_file.sql job_credentials.sql
memory_dynamic_components.sql memory_target_advice.sql network_acl_privileges.sql
network_acls.sql temp_free_space.sql

constraints...
disable_chk.sql disable_fk.sql disable_pk.sql
disable_ref_fk.sql enable_chk.sql enable_fk.sql
enable_pk.sql enable_ref_fk.sql

miscellaneous...
analyze_all.sql base64decode.sql base64encode.sql
column_comments.sql comments.sql compare_schemas.sql
compile_all.sql compile_all_bodies.sql compile_all_funcs.sql
compile_all_procs.sql compile_all_specs.sql compile_all_trigs.sql
compile_all_views.sql conversion_api.sql csv.sql
dict_comments.sql drop_all.sql dsp.pkb
dsp.pks err.pkb err.pks
ftp.pkb ftp.pks gen_health.sql
get_pivot.sql login.sql part_hv_to_date.sql
proc_defs.sql rebuild_index.sql soap_api.sql
string_agg.sql string_api.sql switch_schema.sql
table_comments.sql table_defs.sql table_differences.sql
trc.pkb trc.pks

monitoring...
access.sql active_sessions.sql cache_hit_ratio.sql
call_stack.sql code_dep.sql code_dep_on.sql
column_defaults.sql db_cache_advice.sql db_info.sql
db_links.sql db_properties.sql dispatchers.sql
error_stack.sql errors.sql explain.sql
file_io.sql fk_columns.sql fks.sql
free_space.sql health.sql hidden_parameters.sql
high_water_mark.sql hot_blocks.sql identify_trace_file.sql
index_extents.sql index_monitoring_status.sql index_partitions.sql
index_usage.sql invalid_objects.sql jobs.sql
jobs_running.sql latch_hit_ratios.sql latch_holders.sql
latches.sql library_cache.sql license.sql
locked_objects.sql longops.sql lru_latch_ratio.sql
max_extents.sql min_datafile_size.sql monitor.sql
monitor_memory.sql monitoring_status.sql non_indexed_fks.sql
obj_lock.sql object_status.sql open_cursors.sql
open_cursors_by_sid.sql open_cursors_full_by_sid.sql parameter_diffs.sql
parameters.sql pga_target_advice.sql pipes.sql
profiler_run_details.sql profiler_runs.sql rbs_extents.sql
rbs_stats.sql recovery_status.sql roles.sql
search_source.sql segment_stats.sql session_events.sql
session_events_by_sid.sql session_events_by_spid.sql session_io.sql
session_rollback.sql session_stats.sql session_stats_by_sid.sql
session_undo.sql session_waits.sql sessions.sql
show_indexes.sql show_space.sql show_tables.sql
source.sql spfile_parameters.sql sql_area.sql
sql_text.sql sql_text_by_sid.sql system_events.sql
system_parameters.sql system_stats.sql table_dep.sql
table_extents.sql table_indexes.sql table_partitions.sql
table_stats.sql temp_extent_map.sql temp_io.sql
temp_segments.sql temp_usage.sql tempseg_usage.sql
top_latches.sql top_sessions.sql top_sql.sql
trace_run_details.sql trace_runs.sql ts_extent_map.sql
ts_full.sql tuning.sql unusable_indexes.sql
unused_space.sql user_hit_ratio.sql user_temp_space.sql
user_undo_space.sql

rac...
locked_objects_rac.sql longops_rac.sql monitor_memory_rac.sql
session_undo_rac.sql session_waits_rac.sql sessions_rac.sql

resource manager...
active_plan.sql consumer_group_usage.sql consumer_groups.sql
plan_directives.sql resource_plans.sql

script creation...
backup.sql build_api.sql build_api2.sql
create_data.sql drop_cons_on_table.sql drop_fks_on_table.sql
drop_fks_ref_table.sql drop_indexes.sql fks_on_table.sql
fks_ref_table.sql index_monitoring_off.sql index_monitoring_on.sql
monitoring_off.sql monitoring_on.sql rbs_structure.sql
recreate_table.sql seq_structure.sql table_constraints.sql
table_indexes.sql table_structure.sql tablespace_structure.sql
view_structure.sql

security...
grant_delete.sql grant_execute.sql grant_insert.sql
grant_select.sql grant_update.sql package_synonyms.sql
sequence_synonyms.sql table_synonyms.sql view_synonyms.sql

Wednesday, March 21, 2012

Original Export & Import Vs Data Pump Export & Import

If you worked with databases prior to 10g, you are probably familiar with Oracle's exp/imp utilities. Oracle 10g introduced a new feature called Data Pump Export and Import. Data Pump export/import differs from the original export/import in the ways listed below.

1)Impdp/Expdp are self-tuning. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2)Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.

3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.

4)In Data Pump, expdp full=y followed by impdp schemas=prod is the same as expdp schemas=prod followed by impdp full=y, whereas original export/import does not always exhibit this behavior.

5)Expdp/Impdp access files on the server rather than on the client.

6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

7)Sequential media, such as tapes and pipes, are not supported by Oracle Data Pump. In original export/import, we could compress the dump on the fly by writing it through a pipe.

8)The Data Pump method for moving data between different database versions is different from the method used by original Export/Import.

9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

10)Expdp/Impdp consume more undo tablespace than original Export and Import.

11)If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

12)Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.

13)There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.
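
As an illustration, a minimal schema-level Data Pump export/import pair might look like this (the DPUMP_DIR directory object, the PROD schema and the credentials are placeholders; note the server-side directory object, the %U dump file set and the PARALLEL clause, which tie in with points 3, 5 and 6 above):

expdp system/<password> schemas=prod directory=DPUMP_DIR dumpfile=prod_%U.dmp parallel=4 logfile=expdp_prod.log
impdp system/<password> schemas=prod directory=DPUMP_DIR dumpfile=prod_%U.dmp parallel=4 logfile=impdp_prod.log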

Missing Datafile in Archive and NoArchivelog mode

Losing Datafiles When You Are in NoArchivelog Mode:
###################################################

If you are in noarchivelog mode and you lose any datafile, then whether the media failure is temporary or permanent, the database will automatically shut down. If the failure is temporary, correct the underlying hardware and start the database. Usually crash recovery will recover the committed transactions of the database from the online redo log files. If you have a permanent media failure, restore the whole database from a good backup. How to restore the database is described below:

If a media failure damages datafiles in a NOARCHIVELOG database, then the only option for recovery is usually to restore a consistent whole database backup. As you are in noarchivelog mode, you have to understand that any changes made after the backup was taken are lost.

If you have a logical backup (an export file), you can import that as well.

To recover a database in noarchivelog mode, follow this procedure:

1)If the database is open, shut it down.
SQL> SHUTDOWN IMMEDIATE;

2)If possible, correct the media problem so that the backup database files can be restored to their original locations.

3)Copy all of the backup control files and datafiles to their default locations if you corrected the media failure; however, you can also restore them to another location. Remember to restore all of the files, not only the damaged ones.

4)Because online redo logs are not backed up, you cannot restore them with the datafiles and control files. To allow the database to reset the online redo logs, you must perform incomplete recovery:

RECOVER DATABASE UNTIL CANCEL
CANCEL


5)Open the database in RESETLOGS mode:
ALTER DATABASE OPEN RESETLOGS;

To rename your control files, or to move them after media damage, copy them to the new location and then point the instance at them (if using an spfile):
STARTUP NOMOUNT
ALTER SYSTEM SET control_files='+ORQ/orq1/controlfile/control01.ctl','+ORQ/orq1/controlfile/control02.ctl' SCOPE=SPFILE;
STARTUP FORCE MOUNT;


To rename datafiles or online redo log files, first copy them to the new location and then point the control file at the new location with:
ALTER DATABASE RENAME FILE '+ORQ/orq1/datafile/system01.dbf'
TO '+ORQ/orq1/datafile/system02.dbf';


Losing Datafiles When You Are in Archivelog Mode:
###################################################
If the lost datafile belongs to the SYSTEM tablespace, or if it is a datafile containing active undo segments, the database shuts down. If the failure is temporary, correct the underlying hardware and start the database. Usually crash recovery will recover the committed transactions of the database from the online redo log files.

If the lost datafile is not part of the SYSTEM tablespace and does not contain active undo segments, the affected datafile is taken offline and the database remains open. To fix the problem, take the affected tablespace offline and then recover the tablespace, as sketched below.
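
A minimal sketch of that fix, assuming the affected tablespace is called USERS:

SQL> ALTER TABLESPACE users OFFLINE IMMEDIATE;
-- restore the damaged datafile from backup, then:
SQL> RECOVER TABLESPACE users;
SQL> ALTER TABLESPACE users ONLINE;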

Missing Online Redo Logs

Consider a scenario where you have lost your redo logs on a DB instance. It is possible to recreate them, and in certain situations it is even possible to recover the database with no data loss. Let's jump into action immediately. But first we need to know the different ways redo can be lost, and which category we fit in.
A Redo log is a journal of events happening in the DB at any given time. Redo logs are the most crucial part of the database after the controlfile. The documentation defines the redo logs as follows:
The online redo log is a set of operating system files that record all changes made to any database block, including data, index, and rollback segments, whether the changes are committed or uncommitted. All changes to Oracle blocks are recorded in the online log.
A redo log, at the time of being deleted, will be in one of the below scenarios:
1) The redo log is not CURRENT and archived
2) The redo log is not CURRENT but not archived
3) The redo log is CURRENT (DB was shut down cleanly)
4) The redo log is CURRENT (DB was not shut down cleanly)
Identify which scenario fits you. In all the scenarios except (1), we will need to open the database using the OPEN RESETLOGS option. You may use the below SQL to find these details:
SQL> select a.group#, b.member, a.archived, a.status
  2  from v$log a, v$logfile b where a.group#=b.group#;

    GROUP# MEMBER                                   ARC STATUS
---------- ---------------------------------------- --- ----------------
         3 u002/oradata/ORA1020/redo01.LOG   YES INACTIVE
         2 u002/oradata/ORA1020/redo02.LOG   YES UNUSED
         1 u002/oradata/ORA1020/redo03.LOG   NO  CURRENT
Remember: RESETLOGS is compulsory when there is an incomplete recovery.
1) Redo log is not CURRENT and archived
If a redo log file is already archived, its loss can safely be ignored. Since all the changes in the DB are now archived, and the online log file is only waiting for its turn to be re-written by LGWR (redo log files are written in a circular fashion), the loss of the redo log file doesn't matter much. It may be re-created using the command:
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE CLEAR LOGFILE GROUP <group#>;

This re-creates the cleared log group (repeat the command for each affected group), and no transactions are lost. The database can be opened normally after this.
2) Redo log is not CURRENT and not archived
If the redo log is lost before it is archived, then the few transactions in that redo log are lost too. Since a crash recovery is no longer possible, a media recovery is required. The DB needs to be restored from a previous backup and recovered until the last available archive log is applied. The reason I say crash recovery is not possible is that, by definition, a crash recovery should need only the online redo logs to recover the database to a consistent state. If a recovery needs a log file that is not online (like archive logs), then the recovery is no longer a crash recovery; it becomes media recovery. In our current case, since we are applying the archives and using the previous backup to restore, it is not a crash recovery.
Also, since we cannot recover the database completely (some data was lost in the redo log files before being archived), this becomes an incomplete recovery too. The DB needs to be opened in RESETLOGS mode. The steps to recover are below:
-- Restore the DB from previous backup and open the DB in mount state.
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- Apply all the archives available
SQL> ALTER DATABASE OPEN RESETLOGS;

In a similar scenario an RMAN restore will be as below:
RMAN> RESTORE CONTROLFILE FROM '<backup tag location>';
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE UNTIL TIME "to_date('MAR 05 2009 19:00:00','MON DD YYYY HH24:MI:SS')";
RMAN> ALTER DATABASE OPEN RESETLOGS;

The above commands can also be used in a normal recovery scenario from Production to Test with a few modifications. The actual plan of steps that I had followed can be found here. There is a metalink note 1044466.6 that discusses this recovery in greater detail.

3) Redo log is CURRENT (DB was shut down cleanly)
If the CURRENT redo log is lost and the DB was closed consistently, OPEN RESETLOGS can be issued directly without any transaction loss. It is advisable to take a full backup of the DB immediately after the STARTUP.
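A minimal sketch of this case (assuming a clean shutdown and intact datafiles; CANCEL is entered immediately since there is nothing to apply):

SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL CANCEL;
CANCEL
SQL> ALTER DATABASE OPEN RESETLOGS;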
4) Redo log is CURRENT (DB was not shut down cleanly)
When a current redo log is lost, the transactions in the log file that had not yet made it to the archived logs are also lost. Since a DB startup can no longer perform a crash recovery (the surviving online log files are not sufficient to bring the DB to a consistent state), an incomplete media recovery is the only option. We will need to restore the DB from a previous backup and recover to the point just before the lost redo log file. The DB will need to be opened in RESETLOGS mode. There is some transaction loss in this scenario.
RMAN> RESTORE CONTROLFILE FROM '<backup tag location>';
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE UNTIL TIME "to_date('MAR 05 2009 19:00:00','MON DD YYYY HH24:MI:SS')";
RMAN> ALTER DATABASE OPEN RESETLOGS;

RMAN Backup failed with RMAN-00600


###########################################
RMAN BACKUP FAILURE WITH RMAN-00600
###########################################
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 04/14/2011 19:00:34
RMAN-00600: internal error, arguments [8201] [] [] [] []

Cause:
You are hitting Bug 5916459, with base bug Bug 6035495.
There is a mismatch between the information stored in the controlfile and the RMAN catalog.

Solution:
Try an explicit resync of the catalog with the target database:

RMAN> resync catalog;

If the problem still exists after the resync, then recreate the controlfile of the target database.
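One way to do that is to dump the current controlfile to a creation script and rebuild it from there (a sketch; the script path is a placeholder):

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/tmp/recreate_ctl.sql';
-- edit the generated script, then SHUTDOWN IMMEDIATE, STARTUP NOMOUNT, and run it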

Catbundle failed with ORA-20000: Write_Error during insertscriptFile while running catbundle after applying PSU Patches


####################################
CATBUNDLE ISSUE IN ABC DATABASE
####################################
SQL> @catbundle.sql psu apply

PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

Generating apply and rollback scripts...
Check the following file for errors:
/u2590/app/oracle/product/11.1.0/db_1/cfgtoollogs/catbundle/catbundle_PSU_ABC_GENERATE_2011Sep06_23_59_46.log
Apply script: /u2590/app/oracle/product/11.1.0/db_1/rdbms/admin/catbundle_PSU_ABC_APPLY.sql
Rollback script: /u2590/app/oracle/product/11.1.0/db_1/rdbms/admin/catbundle_PSU_ABC_ROLLBACK.sql

PL/SQL procedure successfully completed.

Executing script file...

Here is the information available in the log.
------------------------------------------
Processing components and files for bundle 7: PSU 11.1.0.7.7                                                                                                 
Exception - patch NOT installed                                                                                                                              
ORA-20000: WRITE_ERROR during insertScriptFile                                                                                                               

Solution:
---------
This was a bug related to the line size used when opening the OS file.
I added a large line size explicitly to the UTL_FILE.FOPEN call in the catbundle script, and it ran fine:

UTL_FILE.FOPEN('ADMIN_DIR', apply_file_name, 'w',32000) instead of UTL_FILE.FOPEN('ADMIN_DIR', apply_file_name, 'w')

Query to check for Table Locks

#########################
TO CHECK FOR TABLE LOCKS
#########################
set pagesize 400
set linesize 600
col USERNAME for a15
col OS_USER_NAME for a15
col TERMINAL for a15
col OBJECT_NAME for a30
SELECT a.sid, a.serial#, a.username, c.os_user_name, a.terminal,
       b.object_id, substr(b.object_name,1,40) object_name
  FROM v$session a, dba_objects b, v$locked_object c
 WHERE a.sid = c.session_id
   AND b.object_id = c.object_id;

Adphrept to get list of patches applied to Appstier using ADPatch

####################################################################
ADPHREPT REPORT TO GET LIST OF PATCHES APPLIED USING ADPATCH
####################################################################


METHOD 1
###########
Check Patches applied in Oracle Applications Manager (OAM).

a) Connect to OAM:

http://hostname.domain:PORT/servlets/weboam/oam/oamLogin

Go to Site Map-->Maintenance-->Applied Patches

Enter Patch ID and press 'Go'

See if Patch was returned.


METHOD 2
#########
Use 'adphrept' utility


Patch History report usage:

adphrept query_depth \
bug_number or ALL \
bug_product or ALL \
end_date_from (mm/dd/rr or ALL) \
end_date_to (mm/dd/rr or ALL) \
patchname/ALL \
patchtype/ALL \
level/ALL \
language/ALL \
appltop/ALL \
limit to forms server? (Y/N) \
limit to web server? (Y/N) \
limit to node server? (Y/N) \
limit to admin server? (Y/N) \
only patches that change DB? (Y/N)

Specify 1 or 2 or 3 for query_depth

1-> Details of patches only
2-> Details of patches and their Bug Fixes only
3-> Details of patches their Bug Fixes and Bug Actions

Example: To get the complete patch details for patches applied between 01-OCT-2011 and 08-NOV-2011:

On UNIX:

$ cd $AD_TOP/patch/115/sql
$ sqlplus APPS/<PWD>

SQL> @adphrept.sql 1 ALL ALL 10/01/11 11/08/11 ALL ALL ALL ALL ALL N N N N N

This report should take less than a minute.

If you are prompted for any parameters, please exit and review the parameter list you have submitted.

Gathering data for report. Report output will be written to adfilerep.xml.

Writing data to report file.
Your AD Patch History XML Report is ready  at adfilerep.xml.

Note: Please copy the file to the OA_HTML top to view it through a browser.

Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

[applmgr@hostname: sql]$ cp adfilerep.xml /u1590/oracle/oad5/apps/apps_st/comn/webapps/oacore/html/.

How RMAN Duplicates a database

###################################################
HOW RMAN DUPLICATES A DATABASE
###################################################
For backup-based duplication, the principal work of the duplication is performed by the auxiliary channels. These channels correspond to a server
session on the auxiliary instance on the destination host. For active database duplication the primary work is performed by target channels.

RMAN must perform database point-in-time recovery, even when no explicit point in time is provided for duplication. Point-in-time recovery is
required because the online redo log files in the source database are not backed up and cannot be applied to the duplicate database. The farthest
point of recovery of the duplicate database is the most recent redo log file archived by the source database.

As part of the duplicating operation, RMAN automates the following steps:

1) Creates a default server parameter file for the auxiliary instance if the following conditions are true:

    -    Duplication does not involve a standby database.

    -    Server parameter files are not being duplicated.

    -    The auxiliary instance was not started with a server parameter file.

2) Restores from backup or copies from active database the latest control file that satisfies the UNTIL clause requirements.

3) Mounts the restored or copied backup control file from the active database.

4) Uses the RMAN repository to select the backups for restoring the data files to the auxiliary instance. This step applies to backup-based
duplication.

5) Restores and copies the duplicate data files and recovers them with incremental backups and archived redo log files to a noncurrent point in time.

6) Shuts down and restarts the database instance in NOMOUNT mode.

7) Creates a new control file, which then creates and stores the new DBID in the data files.

8) Opens the duplicate database with the RESETLOGS option and creates the online redo log for the new database.
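
As an illustration, a minimal backup-based duplication might be run like this (PROD and DUPDB are placeholder net service names; the auxiliary instance must already be started in NOMOUNT mode and the backups must be accessible from the destination host):

RMAN> CONNECT TARGET sys/<password>@prod
RMAN> CONNECT AUXILIARY sys/<password>@dupdb
RMAN> DUPLICATE TARGET DATABASE TO dupdb NOFILENAMECHECK;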

ORA-00959: TABLESPACE '_$DELETED$11$0' DOES NOT EXIST

######################################################
ORA-00959: TABLESPACE '_$DELETED$11$0' DOES NOT EXIST
#######################################################
We got the above error after a cutdown process for a few tables. As a workaround, we need to identify the impacted partitioned tables/indexes.

Here is the query to identify the impacted partitioned tables/indexes:
-------------------------------------------------------------------------
SET lines 180
COL segment_name format a55
SELECT DISTINCT table_name segment_name, def_tablespace_name
           FROM dba_part_tables
          WHERE def_tablespace_name LIKE '%deleted%'
UNION
SELECT DISTINCT index_name segment_name, def_tablespace_name
           FROM dba_part_indexes
          WHERE def_tablespace_name LIKE '%deleted%'
       ORDER BY 1
/

Query to generate the fix commands
-------------------------------------------------------------------------
SELECT DISTINCT    'ALTER TABLE '
                || owner
                || '.'
                || b.table_name
                || ' MODIFY DEFAULT ATTRIBUTES TABLESPACE '
                || tablespace_name
                || ';'
           FROM dba_part_tables b, dba_tab_partitions d
          WHERE d.table_name = b.table_name
            AND def_tablespace_name LIKE '%deleted%'
UNION
SELECT DISTINCT    'ALTER INDEX '
                || owner
                || '.'
                || b.index_name
                || ' MODIFY DEFAULT ATTRIBUTES TABLESPACE '
                || tablespace_name
                || ';'
           FROM dba_part_indexes b, dba_ind_partitions d
          WHERE d.index_name = b.index_name
            AND def_tablespace_name LIKE '%deleted%'
/

18:28:23 [SYS][ORQ21]>> select owner, table_name, def_tablespace_name from dba_part_tables where lower(def_tablespace_name) like '%deleted%';

OWNER                          TABLE_NAME                  DEF_TABLESPACE_NAME
------------------------------ --------------------------  ------------------------------
MGIXX                          MGIHZ_EXTR_RQST_CNTL_PARMS  _$deleted$393$0
MGIXX                          MGIAR_TRANS                 _$deleted$393$0  <-- this is the table they accessed and hit with "ORA-959"
MDSYS                          OLS_DIR_BUSINESSES          _$deleted$393$0


18:36:15 [SYS][ORQ21]>> alter table MDSYS.OLS_DIR_BUSINESSES modify default attributes tablespace SYSTEM;

18:36:59 [SYS][ORQ21]>> alter table MGIXX.MGIAR_TRANS modify default attributes tablespace MGIXX_DATA;

18:37:23 [SYS][ORQ21]>> alter table MGIXX.MGIHZ_EXTR_RQST_CNTL_PARMS modify default attributes tablespace MGIXX_DATA;

18:37:44 [SYS][ORQ21]>> select owner, table_name, def_tablespace_name from dba_part_tables where lower(def_tablespace_name) like '%deleted%';

Tuesday, March 20, 2012

WHICH SESSIONS ARE GENERATING MORE REDO IN ORACLE

##################################################
WHICH SESSIONS ARE GENERATING MORE REDO IN ORACLE
##################################################

SELECT s.sid, s.serial#, s.username, s.program,
i.block_changes
FROM v$session s, v$sess_io i
WHERE s.sid = i.sid
ORDER BY 5 desc;

SELECT s.sid, s.serial#, s.username, s.program,
t.used_ublk, t.used_urec
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr
ORDER BY 5 desc, 6 desc;

According to the redo generation size:
-----------------------------------------
select sid, name, round(value/1024/1024) redo_mb
from v$statname n, v$sesstat s
where n.name = 'redo size'
and s.statistic# = n.statistic#
order by redo_mb desc
/



The below query lists info for the top 10 redo-generating sessions.
----------------------------------------------------------------
select b.inst_id, b.sid, b.serial# sid_serial, b.username, machine, b.osuser, b.status, a.redo_mb MB
from (select n.inst_id, s.sid, round(s.value/1024/1024) redo_mb
        from gv$statname n, gv$sesstat s
       where n.inst_id = s.inst_id
         and n.name = 'redo size'  -- join by name; the statistic# for 'redo size' varies between versions
         and s.statistic# = n.statistic#
       order by s.value desc) a, gv$session b
where b.inst_id = a.inst_id
  and a.sid = b.sid
  and rownum <= 10;

Saturday, March 17, 2012

How To Clear Too Many Pending Requests For Password Reset [ID 875559.1]

On 12.0.6 in Production:
When attempting to use the "FORGOT PASSWORD" feature from Oracle Applications, the user is unable to reset their password due to the following error:

ERROR
Too many pending requests already exists in the system. Please check your email or contact the
System Administrator for further assistance.


Cause
The cause of the issue is that there are five 'Forgot Password' requests currently active for the user, which have possibly timed out but have not been processed by the 'Workflow Background Process'.

If the user doesn't respond within 4 hours of receiving the email notification to reset the password, the activity times out but remains active until the 'Workflow Background Process' is run to process 'Timeout' activities. When the number of such requests for a user exceeds five, the above warning message is displayed.

The following SQL shows status of request for a user:

select * from
(
SELECT WorkflowItemEO.ITEM_TYPE,
WorkflowItemEO.ITEM_KEY,
wf_fwkmon.getitemstatus(WorkflowItemEO.ITEM_TYPE,
WorkflowItemEO.ITEM_KEY, WorkflowItemEO.END_DATE,
WorkflowItemEO.ROOT_ACTIVITY,
WorkflowItemEO.ROOT_ACTIVITY_VERSION) STATUS_CODE
FROM WF_ITEMS WorkflowItemEO,
WF_ITEM_TYPES_VL WorkflowItemTypeEO,
WF_ACTIVITIES_VL ActivityEO,
WF_ITEM_ATTRIBUTE_VALUES attrib
WHERE WorkflowItemEO.ITEM_TYPE = WorkflowItemTypeEO.NAME
AND ActivityEO.ITEM_TYPE = WorkflowItemEO.ITEM_TYPE
AND ActivityEO.NAME = WorkflowItemEO.ROOT_ACTIVITY
AND ActivityEO.VERSION = WorkflowItemEO.ROOT_ACTIVITY_VERSION
AND attrib.item_type = WorkflowItemEO.ITEM_TYPE
AND attrib.item_key = WorkflowItemEO.ITEM_KEY
AND attrib.name = 'USER_NAME'
AND attrib.text_value = '<USER_NAME>'
) QRSLT
where
item_type = 'UMXLHELP';

Solution
To close the pending items, please execute the following steps:

1. Submit the concurrent request 'Workflow Background Process' with the following parameters:

Item Type : UMX Login Help
Process deferred: Yes
Process Timeout: Yes

2. Retest the issue after the request completes to confirm that the warning message no longer appears.

Form Compilation Against a 11g Database Hangs or Takes a Very Long Time [ID 880660.1]


When trying to compile some forms against an 11g Database using the 'frmcmp.sh' and 'frmcmp_batch.sh' compilers, the compilation seems to hang or takes too much time (10 min +).
The compile command is like:

frmcmp_batch.sh module=test.fmb module_type=FORM compile_all=YES userid=scott/tiger@orcl


If we compile the same source with the same command line against a 9i/10g database, there is no problem.
Cause
The cause of this problem has been identified and verified in an unpublished Forms bug: 8557019 - APPSPERF: ONE SQL USING ALL_OBJECTS HAS PERFORMANCE ISSUE UNDER 11G INSTANCE, and the underlying unpublished Database (Dictionary) bug: 8560951 - PERFORMANCE ISSUE WHEN RUNNING A QUERY IN 11G.

These bugs explain that there is a performance issue when executing a 'problematic' SQL query like the following example against the 11g database:


SELECT COUNT (*)
FROM ALL_OBJECTS
WHERE ( OWNER = 'SYS'
AND OBJECT_NAME = 'DBMS_JAVA'
AND OBJECT_TYPE = 'PACKAGE'
AND ALL_OBJECTS.STATUS = 'VALID' )
OR ( OWNER = 'SYSTEM'
AND OBJECT_NAME = 'ORA_DE_REFLECTION'
AND OBJECT_TYPE = 'PACKAGE'
AND ALL_OBJECTS.STATUS = 'VALID' )
OR ( OWNER = 'SYSTEM'
AND OBJECT_NAME = 'oracle/opb/Reflection'
AND OBJECT_TYPE = 'JAVA CLASS'
AND ALL_OBJECTS.STATUS = 'VALID' )
OR ( OWNER = 'SYSTEM'
AND OBJECT_NAME = 'oracle/opb/SchemaClassLoader'
AND OBJECT_TYPE = 'JAVA CLASS'
AND ALL_OBJECTS.STATUS = 'VALID' ) ;
Solution
To implement the solution, you can either:
A) First, apply the database Patch 8560951 on top of your Database.
Important Notes:
  • The Patch 8560951 brings modifications in a sensitive area, and it is necessary to use _FIX_CONTROL to enable the fix. This can usually be done either at the DB or session level, but here, as the compiler will launch its own session, the fix needs to be activated at the DB level.
  • This patch is already included in higher database versions (e.g. 10.2.0.5, 11.2.0.2). For these it is not necessary to install the patch.
    But "_FIX_CONTROL"='8560951:ON' still needs to be set, as the fix is disabled by default.
Second, execute an additional step on your database to activate this fix:
SQL> ALTER SYSTEM SET "_FIX_CONTROL"='8560951:ON';
This will enable the fix in memory. If you need to restore things as they were, you can similarly turn the fix off with:
SQL> ALTER SYSTEM SET "_FIX_CONTROL"='8560951:OFF';
More info about _FIX_CONTROL in:
NOTE 782926.1 Info About Hidden Parameter _fix_control
(You can add scope=spfile and restart the DB to have the fix remain active in the future.)
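
To check the state of the fix afterwards, you can query the fix-control view (a quick check, assuming your version exposes v$system_fix_control, i.e. 10.2.0.2 and later):

SQL> SELECT bugno, value FROM v$system_fix_control WHERE bugno = 8560951;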



OR


B) Use the following workaround:


1. Connect to the DB with SQL*Plus as the user who compiles the Forms application.


2. Use the following command to create a synonym all_objects:
SQL> create synonym all_objects for sys.dba_objects;
Note: If it's not working, grant the SELECT privilege on sys.dba_objects to the user who will be compiling the form (see the sketch at the end of this workaround).
3. Compile your Form again.

At the end, if you want, you can drop this synonym with:
SQL> drop synonym all_objects;
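
For reference, the grant mentioned in the note under step 2 would look like this (run as a suitably privileged user; SCOTT is a placeholder for the compiling user):

SQL> grant select on sys.dba_objects to scott;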

Guidelines to setup the JVM in Apps Ebusiness Suite 11i and R12

The purpose of this document is to provide guidelines for setting up JVMs (Jserv groups/OC4J instances) for online applications in Apps 11i and R12.

This is supposed to be a generic rule of thumb, or a starting point. This document is written based on Sun's JDK. Other platforms, like Linux and HP, can also use pretty much the same parameters. AIX does not have all those parameters, and some differences will be pointed out. Although there are differences in the JDK parameters, the concepts presented here should apply to all platforms.


Although this has shown to be sufficient for most of the Apps 11i and R12 installations, some additional tuning may be necessary.
Scope and Application
Oracle Application 11i (11.5.9 and up) and R12
Guidelines to setup the JVM in Apps Ebusiness Suite 11i and R12
JDK Version:
It is recommended to use JDK 1.6, because of better performance and additional features. On Solaris and Linux, Sun's JVM will automatically detect whether the Hotspot Server Compiler or the Client Compiler should be used based on the number of CPUs and memory available on the machine.
It is also recommended to always use the latest version of the specific JDK release.
For information about upgrading the JDK, refer to:
Note 401561.1 Using J2SE Version 6.0 with Oracle E-Business Suite 11i
Note 455492.1 Using Latest Update of Java 6.0 with Oracle E-Business Suite Release 12.
Note 418664.1 Overview of Using Java with Oracle E-Business Suite Release 12
Number of JVMs:
1) For OACoreGroup:
Assumption: eBusiness Suite  is running in a 2-tier environment (middle tier in a separate machine)
- No more than 100 active users per JVM/OC4J instance

- There should be no more than 1 active JVM/OC4J instance per CPU  (ideally, there should be 1 JVM per 2 CPUs)


- Watch out for available memory (make sure that you have enough memory to run all the configured JVMs/OC4J instances without swapping)
Note: Each 'core' in a multi-core system is considered to be a separate CPU. For example, a dual quad-core chip would be the same as stating 2 x 4 x CPU  or 8 CPUs.

In regards to hyper-threading cores, in theory the same thing applies, but in some cases a hyper-threaded core didn't equate to exactly the same as a separate core. One could tune for more than a single CPU, but should not treat such a core the same as two CPUs.


2) For FormsGroup:
For the forms servlet, the actual JVM/servlet usage is minimal, and a single JVM can normally handle up to 100-250 forms users.
Note: that depends on the type of forms. For more complex forms, those numbers are likely to be substantially lower
Notes:
a) The number of JVMs (OC4J instances in R12) is configured by the autoconfig variables s_oacore_nprocs, s_disco_nprocs, s_forms_servlet_nprocs and s_xmlsvcs_nprocs.


b) In 11i, those changes are made in the $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.conf file, while in R12 in the $ORA_CONFIG_HOME/10.1.3/opmn/conf/opmn.xml
Heap Configuration:
1) For OACore, start with the following configuration:
For R12.0 and 11i:
       -Xmx512M -Xms256M -XX:MaxPermSize=128M
       -XX:NewRatio=2 -XX:+PrintGCTimeStamps
For R12.1 and higher:
      -Xmx1024M -Xms512M -XX:MaxPermSize=256M
      -XX:NewRatio=2 -XX:+PrintGCTimeStamps


     Also, ensure you add the following parameter to the DBC file:


     JDBC\:oracle.jdbc.maxCachedBufferSize=262144
    
Adjust the heap sizes accordingly to minimize garbage collection frequency and user pause times due to full garbage collections.
In JDK 1.6, the JVM detects that you have a server-class machine (2 or more CPUs with 2GB or more memory), and will automatically enable the Parallel Throughput Garbage Collector. The number of GC threads defaults to the number of CPUs on the machine. If you are running multiple JVMs on the same machine, or if your machine has more than 2 CPUs, then to avoid the GC threads being overly active you should reduce the number of GC threads by using:

      -XX:+UseParallelGC -XX:ParallelGCThreads=2


In most cases, using 2 GC threads should be sufficient. If you are using heap size of over 1GB, you can experiment with 4 GC threads and see if it gives you better performance. 


2) If using Forms in Servlet mode, the following configuration for FormsGroup should be enough for most of the cases:
      -Xmx256M -Xms128M -XX:MaxPermSize=128M -XX:NewRatio=2
Notes:
a) The heap is configured by the following autoconfig variables:

11i: s_jvm_options, s_forms_jvm_options

R12: s_oacore_jvm_start_option, s_forms_jvm_start_options,


b) In 11i, those changes are made in the $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties file, while in R12 in the $ORA_CONFIG_HOME/10.1.3/opmn/conf/opmn.xml
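
As an illustration of note b), in 11i the OACore heap options end up on wrapper.bin.parameters lines in jserv.properties, roughly like the sketch below (using the 11i values recommended above; in a managed environment, set them through the context file and autoconfig rather than editing the file directly):

wrapper.bin.parameters=-Xmx512M
wrapper.bin.parameters=-Xms256M
wrapper.bin.parameters=-XX:MaxPermSize=128M
wrapper.bin.parameters=-XX:NewRatio=2
wrapper.bin.parameters=-XX:+PrintGCTimeStamps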
Additional Recommendations:

1) It is recommended to use Forms in socket mode, which is more efficient than servlet mode, unless there is a requirement to use servlet mode (like the need for external user access).


2) Although those parameters can be set temporarily by manually changing the config files (jserv.conf and jserv.properties in 11i; opmn.xml in R12), the correct way to update them is to use the context editor to update the variables s_oacore_nprocs and s_forms_servlet_nprocs, and then run autoconfig.


3) Oracle Development recommends being on the latest ATG rollup patch, which is currently ATG-rollup7 (Note 783600.1).

4) Make sure to have the following setting in the jserv.properties file:

wrapper.bin.parameters=-DLONG_RUNNING_JVM=true


If there is any entry in jserv.properties setting -DCACHEMODE, either comment it out or remove that entry:

#wrapper.bin.parameters=-DCACHEMODE=LOCAL


5) Also check the recommended patches for performance issues in:

Note 244040.1  - Oracle E-Business Suite Recommended Performance Patches
6) AIX accepts only the parameters "-Xmx" and "-Xms". The remainder of the parameters should not be used on that platform.
More info about JDK parameters on AIX can be found at

http://www.ibm.com/developerworks/ibm/library/i-garbage3.html


7) A very good comparison of JDK parameters can be found at

http://blogs.sun.com/watt/resource/jvm-options-list.html