Saturday, August 13, 2011

How to set the JSP compilation to Automatic

Back up the context file, then use the vi editor to edit the $CONTEXT_NAME.xml context file used by AutoConfig and change the value of "s_jsp_main_mode" from "justrun" to "recompile" (without the double quotes).

Run AutoConfig to propagate the change to the configuration files.

Verify that
$INST_TOP/ora/10.1.3/j2ee/oacore/application-deployments/oacore/html/orion-web.xml now contains:

<init-param>
  <param-name>main_mode</param-name>
  <param-value>recompile</param-value>
</init-param>

Restart the middle tier services and access the applications to load the JSPs, which causes them to be recompiled on the fly.
After successfully accessing the applications, edit the $CONTEXT_NAME.xml context file again and revert "s_jsp_main_mode" to "justrun" (without the double quotes), so that runtime recompilation does not hamper the performance of your system.

Run AutoConfig to propagate the changes.
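The edit-and-revert cycle above can be scripted. This is a minimal sketch assuming the context variable appears as an oa_var attribute; the one-line snippet below stands in for the real context file, which holds many more variables:

```shell
# Illustrative snippet standing in for $CONTEXT_NAME.xml
ctx=ctx_snippet.xml
cat > "$ctx" <<'EOF'
<jsp_main_mode oa_var="s_jsp_main_mode">justrun</jsp_main_mode>
EOF

# Always back up the context file before editing it
cp "$ctx" "$ctx.bak"

# Flip justrun -> recompile (run AutoConfig afterwards to propagate the change)
sed -i 's|\(oa_var="s_jsp_main_mode">\)justrun|\1recompile|' "$ctx"
grep s_jsp_main_mode "$ctx"
```

Reverting is the same sed with the two values swapped, followed by another AutoConfig run.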

How to resize redo logs

Modifying the redo logs:
------------------------
SQL> select group#, bytes, status from v$log;

GROUP# BYTES STATUS
---------- ---------- ----------------
1 52428800 INACTIVE
2 52428800 CURRENT
3 52428800 INACTIVE

SQL> select group#, member,status from v$logfile;

GROUP# MEMBER
--------------------------------------------------------------------------------
3 /TESTDATA/TEST/testdata/redo03.log
2 /TESTDATA/TEST/testdata/redo02.log
1 /TESTDATA/TEST/testdata/redo01.log


SQL> alter database add logfile group 4 '/TESTDATA/TEST/testdata/redo04.log' size 125M;

alter database add logfile group 5 '/TESTDATA/TEST/testdata/redo05.log' size 125M;

alter database add logfile group 6 '/TESTDATA/TEST/testdata/redo06.log' size 125M;


select group#, bytes, status from v$log;

Switch logfiles (ALTER SYSTEM SWITCH LOGFILE;) until group 4 is the CURRENT group and groups 1-3 show INACTIVE in v$log; run ALTER SYSTEM CHECKPOINT; if any of them still shows ACTIVE.

alter database drop logfile group 1;
alter database drop logfile group 2;
alter database drop logfile group 3;

Dropping a logfile group does not delete its member files, so remove the old redo01-03.log files at the OS level afterwards.
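The switch-and-drop sequence can be sketched end to end (group numbers and sizes follow the example above; repeat the switch until v$log shows group 4 as CURRENT):

```sql
-- Cycle the log groups until one of the new groups is CURRENT
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;          -- pushes ACTIVE groups to INACTIVE

SELECT group#, bytes, status FROM v$log;

-- Drop the old 50 MB groups only once each shows INACTIVE
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;
```

Oracle refuses to drop a group that is CURRENT or ACTIVE, which is why the checkpoint matters.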

Pre-requisites for Oracle Apps R12 installation

R12 installation is one of the crucial tasks for an Apps DBA. The preparation steps are described below.

User & mount point/file system:

We create two users for installing Oracle R12 Applications (one for the database tier and one for the middle tier, also called the application tier). We need about 73 GB of space (45 GB for the database tier and 28 GB for the application tier).

1) Create groups and users for the Oracle R12 Application software installation:
su - root
#groupadd dba => group of users to be granted the SYSDBA system privilege
#groupadd oinstall => group owner of Oracle files
#useradd -m -g oinstall -G dba oracle => for the database tier
#useradd -m -g oinstall -G dba applmgr => for the application tier
passwd oracle
passwd applmgr

2) Minimum Requirement for Oracle R12 Application installation:
CPU & RAM
CPU 1.6 GHz
RAM 1GB
Staging Area

We use adautostg.pl, available on the Start Here CD, to build a staging area. The advantage of a staging area is that after the Start Here CD you do not have to insert CDs manually; the installer automatically picks the next CD from the staging area. You enter the configuration settings on the first few screens, and the installer then runs the complete installation unattended. A staging area also pays off in case of a failure, and for subsequent installations: if you have to install more than one instance, it is always better to install from a staging area.
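For reference, this is the top-level directory layout adautostg.pl builds (directory names follow the documented R12 stage structure; the script path on your media may vary, e.g. perl /mnt/cdrom/startCD/Disk1/rapidwiz/adautostg.pl, and the /tmp path below is only for illustration):

```shell
# Sketch of the staging-area layout created by adautostg.pl
STAGE=/tmp/StageR12_demo
mkdir -p "$STAGE"/startCD "$STAGE"/oraAppDB "$STAGE"/oraApps "$STAGE"/oraAS "$STAGE"/oraDB
ls "$STAGE"
```

Point Rapid Install at the stage top directory and it picks up each component area from these subdirectories.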

[root@test]# more /etc/hosts
127.0.0.1 localhost.localdomain localhost
Open your hosts file with an editor (I will use the vi editor in this article), put your info in this file, and save it. In this article my hostname will be test and my domain will be helios.com.
PS: If you are using a standalone server or PC, you can use any domain name you want. But if your server or PC is on a domain, you must use that domain.
[root@test]# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
10.100.x.x test.helios.com test

2. Set Kernel Parameters:
Add following line in end of the /etc/sysctl.conf file.
[root@test]# vi /etc/sysctl.conf
#R12.1.1 kernel settings
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
# semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 256 32000 100 142
fs.file-max = 327679
net.ipv4.ip_local_port_range = 10000 65000
net.core.rmem_default=262144
net.core.rmem_max=262144
net.core.wmem_default=262144
net.core.wmem_max=4194304
#fully qualified hostname (format is host.domain) << optional
kernel.hostname = test.helios.com << optional
#correct domain name << optional
kernel.domainname = helios.com << optional
After these steps, load the new settings with the command below:
/sbin/sysctl -p

3. Create /var/tmp/.oracle (if missing) with 777 permissions:
mkdir -p /var/tmp/.oracle
chmod 777 /var/tmp/.oracle
4. Edit /etc/security/limits.conf file:
Add the following lines to the /etc/security/limits.conf file:
[root@test]# vi /etc/security/limits.conf
* hard nofile 65535
* soft nofile 4096
* hard nproc 16384
* soft nproc 2047

5. Edit /etc/resolv.conf file:
Add the following lines to the /etc/resolv.conf file:
options attempts:5
options timeout:15

6. Check /etc/sysconfig/network files format:
Verify that the /etc/sysconfig/network file is formatted as follows:
HOSTNAME=[node_name].[domain_name]

7. If the /etc/sysconfig/networking/profiles/default/network file exists, remove it.
If you changed any files in the previous steps, restart the system.
Make sure the below RPMs are present on the system:
The following i386 packages must be installed from the OEL 5 or RHEL 5 distribution media:
• compat-glibc-2.3.4-2.26
• gcc-4.1.2-14.el5
• gcc-c++-4.1.2-14.el5
• glibc-2.5-123
• glibc-common-2.5-123
• glibc-devel-2.5-12
• libgcc-4.1.2-14.el53
• libstdc++-devel-4.1.2-14.el5
• libstdc++-4.1.2-14.el53
• make-3.81-1.13
• gdbm-1.8.0-26.2.13
• libXp-1.0.0-8.1.el5
• libaio-0.3.106-3.23
• libgomp-4.1.2-14.el5
• sysstat-7.0.0-3.el5
• compat-libstdc++-296-2.96-138
• compat-libstdc++-33-3.2.3-61
Additionally, the following RPMs are required for an 11gR1 Database (which is bundled with the 12.1.1 release) on the database tier:
• elfutils-libelf-devel-0.125
• elfutils-libelf-devel-static-0.125
• libaio-devel-0.3.106
• unixODBC-2.2.11
• unixODBC-devel-2.2.11
• kernel-headers-2.6

Create folder for Oracle E-Business Suite R12.1.1 installation
First, check the free space on the server's mount points.
As we can see I have free space, so I will create my stage folder and my installation folder on this mount point.
[root@test]# cd /data
[root@test data]# mkdir TESTR12
Here we are doing a fresh installation. Give 777 to the installation folder.
[root@test]# chmod 777 /data/TESTR12
Create related folders for the application and database tiers (optional):
To keep the application tier installation files together (and likewise the database tier files), we can create 2 subfolders under the main installation folder TESTR12.
[root@test]# cd /data/TESTR12
[root@test TESTR12]# mkdir apps
[root@test TESTR12]# mkdir db

We must give our installation users ownership of these folders so they can install into them:
[root@test TESTR12]# chown -R applmgr:dba /data/TESTR12/apps
[root@test TESTR12]# chown -R oracle:dba /data/TESTR12/db

C. Installation wizard steps:

D. After Installing of R12 Application:
Perform the following steps after installing or upgrading to Release 12.1.1 and before allowing users to access the system.
Relink Advanced Supply Chain Planning executables (for SLES 10 and OEL/RHEL 5.4 or higher only)
During the relink phase of the installation of EBS Release 12 (12.1.1) on OEL/RHEL 5.4, failures will occur while relinking the Advanced Supply Chain Planning (ASCP) executables. To fix this problem, replace the following line under the Linux section of $AD_TOP/bin/adrelinknew.sh:
CPP_LDFLAGS=' -L$(ORACLE_HOME)/lib -L$(ORACLE_HOME)/lib/stubs -lclntsh'
with
CPP_LDFLAGS=' -L$(ORACLE_HOME)/lib -L$(ORACLE_HOME)/lib/stubs -lclntsh -Wl,--noinhibit-exec'
After making this change, users are then required to run the adadmin utility and relink application executables.

Oracle Workflow Related.

Purge the WF_NOTIFICATION_OUT queue:
cd $FND_TOP/patch/115/sql
sqlplus apps/<appspwd> @wfntfqup.sql APPS <appspwd> APPLSYS
e.g. sqlplus apps/apps @wfntfqup.sql APPS apps APPLSYS

This purges the WF_NOTIFICATION_OUT queue and rebuilds it with data currently in the WF_NOTIFICATIONS table. This is what purges all notifications
waiting in the queue to be sent. It will then populate the queue with the current data in the WF_NOTIFICATIONS table.
Only notifications with mail_status = 'MAIL' and status = 'OPEN' will be re-enqueued in the WF_NOTIFICATION_OUT queue and sent by the mailer.

Workflow TEST Address Update
sqlplus apps/ @$FND_TOP/sql/afsvcpup.sql

Enter Component Id: 10006

Enter the Comp Param Id to update : 10093

Enter a value for the parameter : WFdevUsers@abc.com

Setup Test address/override address for WF
Below is the script to update the override address from the backend. You do not need the verification code to set the override address when using this script.

update fnd_svc_comp_param_vals
set    parameter_value = '&EnterEmailID'
where  parameter_id =
( select parameter_id
 from   fnd_svc_comp_params_tl
 where  display_name = 'Test Address'
);


Workflow From Address Update :
sqlplus apps/ @$FND_TOP/sql/afsvcpup.sql

Enter Component Id: 10006

Enter the Comp Param Id to update : 10065

Enter a value for the parameter : Lenovo DEV Workflow Mailer

set overriding address:
update FND_SVC_COMP_PARAM_VALS
set parameter_value = 'Configuration.Workflows@abc.com'
where PARAMETER_ID = 10057;


Script to see the workflow configuration
SQL> select p.parameter_id,p.parameter_name,v.parameter_value value
from fnd_svc_comp_param_vals_v v,
fnd_svc_comp_params_b p,
fnd_svc_components c
where c.component_type = 'WF_MAILER'
and v.component_id = c.component_id
and v.parameter_id = p.parameter_id
and p.parameter_name in
('OUTBOUND_SERVER', 'INBOUND_SERVER',
'ACCOUNT', 'FROM', 'NODENAME', 'REPLYTO','DISCARD' ,'PROCESS','INBOX')
order by p.parameter_name;


SQL to monitor (check status) of Workflow Notification Mailer (Java)

SELECT COMPONENT_STATUS from APPS.FND_SVC_COMPONENTS where COMPONENT_ID=10006;

sqlplus -s system/${PASSWD}@${ORACLE_SID} << SQLEND >${script}/apps/wf_status.out
set linesize 90
col COMPONENT_NAME format a50
col COMPONENT_STATUS format a20
spool ${script}/apps/wf_status.lst
select COMPONENT_ID,COMPONENT_NAME,COMPONENT_STATUS from apps.fnd_svc_components
where COMPONENT_ID not in (10000,10001,10020,10021,10022) and COMPONENT_STATUS <> 'RUNNING';

select mail_status, count(*) from apps.wf_notifications
where status = 'OPEN'
and mail_status in ('MAIL','ERROR')
group by mail_status;
spool off
exit
SQLEND



To see error message for a workflow notification
SQL> select ERROR_MESSAGE from wf_item_activity_statuses_v WHERE NOTIFICATION_ID = 7377659;

Here are steps/events for Oracle Workflow Notification Outbound Processing(eMail from Oracle Applications Workflow to Users)

1. When the Workflow Engine determines that a notification message must be sent, it raises an event in the BES (Business Event System): oracle.apps.wf.notifications.send. The event is raised with the Notification ID (NID) as the event key.
2. There is a seeded subscription to this event.
3. The event is placed on the WF_DEFERRED agent.
4. The event is dequeued from WF_DEFERRED and the subscription is processed.
5. The subscription places the event message on the WF_NOTIFICATION_OUT agent.
6. The Notification Mailer dequeues the message from the WF_NOTIFICATION_OUT agent and:
6.1 converts the XML representation of the notification into a MIME (Multi-purpose Internet Mail Extensions) encoded message, and
6.2 sends the message by SMTP (Simple Mail Transfer Protocol) to the intended user (if a Test Address/Override Address is set, the email is sent to the Test Address).

E-Mail Notification is sent if all below conditions are true
a) Notification status is OPEN or CANCELED and
b) Notification mail_status is MAIL or INVALID and
c) Recipient Role has a valid e-mail address and Notification Preference is in the format MAIL%
d) Workflow Deferred Agent Listener is running
e) Workflow Notification Mailer is running

To check a) & b) run below query
SELECT status, mail_status FROM wf_notifications WHERE notification_id = '&NID';

mail_status = SENT means the mail has gone out of the mailer to the user.

To check c) run below query
SELECT email_address, nvl(WF_PREF.get_pref(name, 'MAILTYPE'), notification_preference)
FROM wf_roles
WHERE name = upper('&recipient_role');

To check d) & e) Use OAM (Oracle Application Manager)


Workflow version embedded in 11i
************************
Run following SQL from apps user ;
SQL>select TEXT from WF_RESOURCES where NAME='WF_VERSION';

You should see output like
TEXT
-----------------------
2.6.0
Which means you are on Workflow Version 2.6.0

You can also use the script wfver.sql in $FND_TOP/sql to find the version of Workflow in Apps.

To configure the Workflow Notification Mailer you need the below information:

PARAMETER_NAME VALUE
------------------------------ ------------------------------
Inbound Server Name wfmailer.abc.com
Username wftst
Password oracle
Outbound Server Name wfmailer.abc.com
Reply-to Address wftst@abc.com


Taking a backup of the workflow configuration
cd $HOME/
sqlplus "/ as sysdba"
spool wf_mailer.log
set lines 130
set pages 200
col value format a30
select p.parameter_id,p.parameter_name,v.parameter_value value
from apps.fnd_svc_comp_param_vals_v v,
apps.fnd_svc_comp_params_b p,
apps.fnd_svc_components c
where c.component_type = 'WF_MAILER'
and v.component_id = c.component_id
and v.parameter_id = p.parameter_id
and p.parameter_name in
('OUTBOUND_SERVER', 'INBOUND_SERVER',
'ACCOUNT', 'FROM', 'NODENAME', 'REPLYTO','DISCARD' ,'PROCESS','INBOX')
order by p.parameter_name;


To schedule Gather Schema Statistics and purge workflow runtime data

Log in to the application as SYSADMIN and schedule the requests below:
Gather Schema Statistics (ALL, 20%, degree 6) to run every SUN and WED
Workflow Background Process to run every 10 minutes, with the interval applied "From the completion of the prior run"
Purge Obsolete Workflow Runtime Data - every week
Workflow Control Queue Cleanup - every 12 hours
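A quick backend check that these requests are actually on schedule can be sketched as below (run as APPS; the column names come from the standard FND request tables, so verify them against your release, and adjust the program names to the exact seeded titles on your instance):

```sql
-- Pending/scheduled occurrences of the maintenance requests
SELECT p.user_concurrent_program_name,
       r.requested_start_date,
       r.resubmit_interval,
       r.resubmit_interval_unit_code
FROM   fnd_concurrent_requests    r,
       fnd_concurrent_programs_vl p
WHERE  r.concurrent_program_id = p.concurrent_program_id
AND    p.user_concurrent_program_name IN
       ('Gather Schema Statistics',
        'Workflow Background Process',
        'Purge Obsolete Workflow Runtime Data',
        'Workflow Control Queue Cleanup')
AND    r.phase_code = 'P';   -- P = Pending
```

An empty result for one of the programs means its schedule is missing or has ended.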


To update WF notification status
IMPORTANT STEP! These updates close ALL open notifications. Connect to SQL*Plus as the APPS user and run the following:
update applsys.wf_notifications
set status ='CLOSED', mail_status ='SENT', end_date ='01-JAN-01'
where mail_status='MAIL';
update wf_notifications set status='CLOSED';
commit;
"Select distinct status from wf_notifications" should then return only one value: CLOSED.
or simply do
update wf_notifications set status = 'CLOSED', mail_status = 'SENT';
commit;


Update workflow a/c pwd from backend
set define off
update APPLSYS.FND_SVC_COMP_PARAM_VALS set PARAMETER_VALUE= '_@#0@##^90@#!4^86#!$^68$#9$4#@@$6!!!9#0@`$B9+}*0&9&@0&8#|'
where
PARAMETER_ID=
( select parameter_id from APPLSYS.FND_SVC_COMP_PARAMS_B where parameter_name = 'INBOUND_PASSWORD');

To check WF mail status
select count(*), mail_status
from wf_notifications
where begin_date > sysdate - 1
group by mail_status;

There are 141 messages with status 'MAIL'; this status should remain very short-lived and then progress to 'SENT'.
Check the mailer via OAM: Login > Site Map > Workflow > Service Components.
The same details are also given in the output of the script $FND_TOP/sql/wfver.sql.


To check the WF status from backend

check the status of Agent Listeners:

set pagesize 400
set linesize 120
set pagesize 50
column COMPONENT_NAME format a45
column STARTUP_MODE format a15
column COMPONENT_STATUS format a15
select fsc.COMPONENT_NAME,fsc.STARTUP_MODE,fsc.COMPONENT_STATUS
from APPS.FND_CONCURRENT_QUEUES_VL fcq, fnd_svc_components fsc
where fsc.concurrent_queue_id = fcq.concurrent_queue_id(+)
order by COMPONENT_STATUS , STARTUP_MODE , COMPONENT_NAME;


Typical output from this

COMPONENT_NAME STARTUP_MODE COMPONENT_STATU
--------------------------------------------- --------------- ---------------
WF_JMS_IN Listener(M4U) AUTOMATIC RUNNING
Workflow Deferred Agent Listener AUTOMATIC RUNNING
Workflow Deferred Notification Agent Listener AUTOMATIC RUNNING
Workflow Error Agent Listener AUTOMATIC RUNNING
Workflow Inbound Notifications Agent Listener AUTOMATIC RUNNING
Workflow Java Deferred Agent Listener AUTOMATIC RUNNING
Workflow Java Error Agent Listener AUTOMATIC RUNNING
Workflow Notification Mailer AUTOMATIC RUNNING
ECX Inbound Agent Listener MANUAL STOPPED
ECX Transaction Agent Listener MANUAL STOPPED
Web Services IN Agent MANUAL STOPPED
Web Services OUT Agent MANUAL STOPPED
Workflow Inbound JMS Agent Listener MANUAL STOPPED

13 rows selected.


Steps to start/stop notification mailer

1.a Check workflow mailer service current status
sqlplus apps/

select running_processes
from fnd_concurrent_queues
where concurrent_queue_name = 'WFMLRSVC';

Number of running processes should be greater than 0

1.b Find current mailer status
sqlplus apps/

select component_status
from fnd_svc_components
where component_name = 'Workflow Notification Mailer';

Possible values:
RUNNING
STARTING
STOPPED_ERROR
DEACTIVATED_USER
DEACTIVATED_SYSTEM

2. Stop notification mailer
sqlplus apps/

declare
p_retcode number;
p_errbuf varchar2(100);
m_mailerid fnd_svc_components.component_id%TYPE;
begin
-- Find mailer Id
-----------------
select component_id
into m_mailerid
from fnd_svc_components
where component_name = 'Workflow Notification Mailer';
--------------
-- Stop Mailer
--------------
fnd_svc_component.stop_component(m_mailerid, p_retcode, p_errbuf);
commit;
end;
/

3. Start notification mailer
sqlplus apps/

declare
p_retcode number;
p_errbuf varchar2(100);
m_mailerid fnd_svc_components.component_id%TYPE;
begin
-- Find mailer Id
-----------------
select component_id
into m_mailerid
from fnd_svc_components
where component_name = 'Workflow Notification Mailer';
--------------
-- Start Mailer
--------------
fnd_svc_component.start_component(m_mailerid, p_retcode, p_errbuf);
commit;
end;
/


To configure workflow
Workflow Mailer Notification settings.
Log on to OAM
Click on Workflow Administrator -> Business Event Local System
Change VIS.ABC.COM to PROD.ABC.COM

Click on Workflow Manager ->Notification Mailer -> Workflow Notification Mailer -> Edit
Edit outbound Email Account (SMTP) -> Server Name =Concurrent Tier
IMAP Server -> Server Name = xxx.xxx.x.xx
Username/password: wfvis/xxxxx :
Reply-To Address: wfvis@abc.com

Click on Advanced -> Next -> Next
Under General Tab
Mailer Node =WFVIS
Under IMAP: Enter server IP address, username and password of wfvis
Under Outbound Email Account
Outbound Server Name= concurrent tier
Click Next -> Next ... Save and Apply

Workflow Version with Apps
Connect to Database as apps user
SQL> select TEXT Version from WF_RESOURCES where NAME = 'WF_VERSION';
Output like 2.6.0 means workflow version 2.6.0


Workflow logfile location
$APPLCSF/$APPLLOG with filename like FNDCPGSC[pid].txt

After executing AutoConfig, make sure Workflow System Administrator is NOT set to SYSADMIN.
Please set it to "Workflow Administrator Web (New)".
The following script takes care of this:

sqlplus apps/<pwd>
SQL> update wf_resources set text = 'FND_RESP|FND|FNDWF_ADMIN_WEB_NEW|STANDARD' where name = 'WF_ADMIN_ROLE';

1 row updated.

SQL> commit;

Commit complete.

or the change can be made in the context XML like this:
$ grep -i s_wf_admin_role PROD_mdsxaorit.xml
<username oa_var="s_wf_admin_role">FND_RESP|SYSADMIN|SYSTEM_ADMINISTRATOR|STANDARD</username>



To set the Workflow Administrator from SYSADMIN to Workflow Administrator
Navigate to Responsibility -- Workflow Administrator
then Administrator Workflow -- Administration

At the top we will find Workflow Configuration:

Workflow System Administrator :: change this value to Workflow Administrator

Or we can update from backend
Update wf_resources set text='FND_RESP|FND|FNDWF_ADMIN|STANDARD' where name = 'WF_ADMIN_ROLE';

Or can be set in xml,
grep -i s_wf_admin_role TRAIN2_vcosxaor09.xml
<username oa_var="s_wf_admin_role">FND_RESP|FND|FNDWF_ADMIN|STANDARD</username>


Modify this profile option to point to an actual application server:
WF: Workflow Mailer Framework Web Agent : http://hostname.domainname:8000

Make sure following parameters are set for Workflow :
As system administrator –
Oracle Applications Manager
Workflow Manager

Inbound Server Name wfmailer.domain.com
Outbound Server Name wfmailer.domain.com
From The lenovo PROD Workflow Mailer
Reply-to Address wfvis@domain.com


Workflow uses SMTP port 25 and IMAP port 143:
[root@hostname]# telnet wfmailer 143

Monday, August 1, 2011

Difference between: Full, Differential, and Incremental Backup


Full backup
Definition: A complete backup of everything you want to back up.
Benefits: Restoration is fast, since you only need one set of backup data.
Drawbacks: The backing-up process is slow; high storage requirements.

Differential backup
Definition: The backup software looks at which files have changed since you last did a full backup, then creates copies of all the files that are different from the ones in the full backup.
If you do a differential backup more than once, it will copy all the files, or parts of files, that have changed since the last full backup, even if you already have identical copies of those files in a previous differential backup.
To restore all the data, you only need the last full backup and the last differential backup.
Benefits: Faster to create than a full backup; restoration is faster than with incremental backups; less storage needed than for a full backup.
Drawbacks: Restoration is slower than from a full backup; creating a differential backup is slower than creating an incremental backup.

Incremental backup
Definition: The backup software creates copies of all the files, or parts of files, that have changed since the previous backup of any type (full, differential or incremental).
For example, if you did a full backup on Sunday, an incremental backup made on Monday would only contain files changed since Sunday, an incremental backup on Tuesday would only contain files changed since Monday, and so on.
Benefits: This method is the fastest when creating a backup; the least storage space is needed.
Drawbacks: Restoring from incremental backups is the slowest, because it may require several sets of data to fully restore. For example, with a full backup and six incremental backups, restoring the data requires you to process the full backup and all six incremental backups.
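In Oracle terms these strategies map onto RMAN incremental levels. Note that RMAN's naming is inverted relative to the generic terms above: RMAN's "differential" level 1 (the default) behaves like the generic incremental, while its "cumulative" level 1 behaves like the generic differential. A sketch, assuming a target database in ARCHIVELOG mode:

```sql
-- RMAN> commands

-- "Full" baseline: level 0 incremental backup
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Generic "incremental": blocks changed since the most recent level 0 or 1
-- (RMAN calls this a differential level 1, its default)
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Generic "differential": all blocks changed since the last level 0
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```

Cumulative backups take more space but need only the level 0 plus the latest cumulative to restore, mirroring the restore-chain trade-off described above.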

Oracle - What are different types of backup?

What are different types of backup? (Hot, Cold, logical, Physical)? 

Cold Backup
In this type of backup, the database is shut down, and the DBA exits the SVRMGR utility and copies the log files, data files, and control files onto backup media. Once the backup has been completed, the DBA can restart the database.
Physical Backup 
The operating system saves the database files onto tape or some other media. This is useful to restore the system to an earlier point whenever needed.
Logical Backup
In the logical backup technique, the IMPORT/EXPORT utilities are used to create a backup of the database. A logical backup backs up the contents of the database and can be used to restore the database to the last backup. However, unlike a physical backup, it cannot be used to repair damaged datafiles, so in those situations physical backups should be preferred.
Hot backup
For systems that need to support continuous operation, it is difficult to bring down the database without interrupting the service provided to the users. In such cases, the hot backup approach should be used.
There are two modes in which the hot backup works:
ARCHIVELOG mode
NOARCHIVELOG mode
If the online redo log files fill up before they can be archived, database operations are suspended until archiving has completed.
It is important that the directory containing the archived log files doesn't become full, because if it does, the ARCH process won't be able to archive the redo log files. The DBA has to continuously monitor the used-space percentage in the archive directory.
The database can be placed into ARCHIVELOG mode at creation time or after the database has been created.
If it was originally created in NOARCHIVELOG mode, the mode can be changed later; the database must first be shut down and then mounted before switching to ARCHIVELOG mode.
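The mode switch and the classic hot-backup sequence can be sketched as follows (run from SQL*Plus as SYSDBA; the tablespace name is just an example):

```sql
-- Enable ARCHIVELOG mode: requires a clean shutdown, then mount
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Classic hot backup of one tablespace while the database stays open
ALTER TABLESPACE users BEGIN BACKUP;
-- copy the tablespace's datafiles at the OS level here
ALTER TABLESPACE users END BACKUP;
```

Between BEGIN BACKUP and END BACKUP the datafile headers are frozen, so the OS-level copies remain recoverable with the archived redo logs.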

Friday, July 22, 2011

WF_DEFERRED Queue is having few million messages in "Ready" State

I would like to publish this blog post as a set of questions and answers that will help you resolve a situation in which the WF_DEFERRED queue holds a few million messages in the "Ready" state because the Workflow Deferred Agent Listener has been down for years, causing records to pile up in the queue.


Q1. How do I find out in the first place that the WF_DEFERRED queue has millions of records in the "Ready" state?

The simplest way to find out is by executing the script $FND_TOP/sql/wfver.sql. In the spool file that the script creates, look for:

--Clip--

Notification Subsystem Queue Info
This requires package WF_QUEUE to be valid
______________________________________________________________________________
WF_DEFERRED messages ready:8406878, waiting:16, expired:0, undeliverable:0, processed:2586
--End Clip--

The "ready" figure indicates the number of messages in the "Ready" state. If you see big numbers in this section of the wfver.sql output, this is a matter of concern. Get ready to do an operation ;)

Q2. What happens if the number of messages in this queue is very high?

One of the obvious things that can happen when this queue is so big is delay in processing the messages. Just imagine the Order Management related messages not being processed on time, and users being notified a few hours late.

Q3. What is the reason for this queue to grow to such huge numbers?

The "Workflow Deferred Agent Listener" is the component that processes the messages in the WF_DEFERRED queue. If the listener is down, you should try to bring it up. If the number of messages in the WF_DEFERRED queue is very large, the listener fails to come up, or dies within a few minutes.

Q4. How do I bring up the "Workflow Deferred Agent Listener" in this situation?

Believe me. It is not easy.

Step 1: Execute the following script to get a per-event count of "Ready" state messages.

--Script Begins here--

set linesize 100
col corrid format a50
col state format a15
col count format 99999999
select NVL(substr(wfe.corrid,1,50),'NULL - No Value') corrid,
       decode(wfe.state, 0, '0 = Ready',
                         1, '1 = Delayed',
                         2, '2 = Retained',
                         3, '3 = Exception',
                         to_char(substr(wfe.state,1,12))) state,
       count(*) count
from applsys.wf_deferred wfe
group by wfe.corrid, wfe.state;
--Script Ends here--

Note: This script might take a long time, depending on the number of messages in the WF_DEFERRED queue.

You should see the output something like this.

CORRID STATE COUNT
------------------------------------------------------------ -------------------- ---------
APPS:oracle.apps.ar.adjustments.Adjustment.create 0 = Ready 72488
APPS:oracle.apps.ar.applications.CashApp.apply 0 = Ready 557692
APPS:oracle.apps.ar.applications.CashApp.unapply 0 = Ready 145695
APPS:oracle.apps.ar.applications.CreditMemoApp.app 0 = Ready 58107
APPS:oracle.apps.ar.applications.CreditMemoApp.una 0 = Ready 2512
APPS:oracle.apps.ar.batch.AutoInvoice.run 0 = Ready 32950
Step 2: Process the events manually. The job of the "Workflow Deferred Agent Listener" is carried out manually by executing the following script.

--Script Begins here--

spool oracle.apps.ar.adjustments.Adjustment.create.log
set serveroutput on size 100000;
begin wf_log_pkg.wf_debug_flag := TRUE;
wf_event.listen(p_agent_name =>'WF_DEFERRED',p_correlation=>'oracle.apps.ar.adjustments.Adjustment.create');
end;
/
commit
/
--Script Ends here--

Execute the above script for each event that has messages in the "Ready" state. Once the count of "Ready" state messages comes down, try starting the "Workflow Deferred Agent Listener". This process is really time consuming, so plan your vacation for after the issue is resolved. You need a break !! :)

Q5. What do you as an Apps DBA need to do when users complain that OM notifications are stuck?

Use the following query to check whether what the users are saying is correct ;)

SQL> select message_type,count(1) from wf_notifications where status='OPEN' and mail_status='MAIL' group by message_type;
MESSAGE_ COUNT(1)
-------- ----------
FNDCMMSG 6
IBEALERT 32
JTFTASK 2441
OEOH 112
OEOL 462
OKCAUKAP 116
POAPPRV 121
REQAPPRV 40
MESSAGE_ COUNT(1)
-------- ----------
WFERROR 4524
Q6. What can you do to speed up the processing of sending notifications?

Execute the following script manually to speed up the delivery of notifications.

--Script Begins here--

spool oracle.apps.wf.notification.log
set serveroutput on size 100000;
begin wf_log_pkg.wf_debug_flag := TRUE;
wf_event.listen(p_agent_name => 'WF_DEFERRED',p_correlation=>'oracle.apps.wf.notification.%');
end;
/
--Script Ends here--

Sunday, June 5, 2011

Oracle DBA Cheat Sheet PART-1

alter database: Alter a Data File

ALTER DATABASE DATAFILE 4 OFFLINE;
ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf' OFFLINE;
ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf' RESIZE 100m;
ALTER DATABASE DATAFILE '/opt/oracle/datafile/users01.dbf'
AUTOEXTEND ON NEXT 100M MAXSIZE 1000M;
ALTER DATABASE DATAFILE 4 END BACKUP;

alter database: Alter a Tempfile

ALTER DATABASE TEMPFILE 4 RESIZE 100M;
ALTER DATABASE TEMPFILE 4 AUTOEXTEND ON NEXT 100M MAXSIZE 1000M;
ALTER DATABASE TEMPFILE 4 DROP INCLUDING DATAFILES;
ALTER DATABASE TEMPFILE 4 OFFLINE;

alter database: ARCHIVELOG Mode Commands


ALTER DATABASE ARCHIVELOG;
ALTER DATABASE NOARCHIVELOG;

ALTER DATABASE FORCE LOGGING;
ALTER DATABASE CLEAR LOGFILE '/opt/oracle/logfiles/redo01.rdo';
ALTER DATABASE CLEAR UNARCHIVED LOGFILE '/opt/oracle/logfiles/redo01.rdo';
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE);
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;

alter database: Control File Operations


ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS '/opt/oracle/logfile_backup/backup_logfile.trc' REUSE RESETLOGS;
ALTER DATABASE BACKUP CONTROLFILE TO '/opt/oracle/logfile_backup/backup_logfile.ctl';

alter database: Create a Data File

ALTER DATABASE CREATE DATAFILE '/opt/oracle/datafile/users01.dbf' AS '/opt/oracle/datafile/users01.dbf';
ALTER DATABASE CREATE DATAFILE 4 AS '/opt/oracle/datafile/users01.dbf';
ALTER DATABASE CREATE DATAFILE '/opt/oracle/datafile/users01.dbf' AS NEW;

alter database: Datafile Offline/Online

See alter database: Alter a Data File

alter database: Logfile Commands

ALTER DATABASE ADD LOGFILE GROUP 2
('/opt/oracle/logfiles/redo02a.rdo', '/opt/oracle/logfiles/redo02b.rdo') SIZE 300M REUSE;
ALTER DATABASE ADD LOGFILE MEMBER '/opt/oracle/logfiles/redo02c.rdo' to GROUP 2;
ALTER DATABASE ADD LOGFILE thread 3 GROUP 2
('/opt/oracle/logfiles/redo02a.rdo', '/opt/oracle/logfiles/redo02b.rdo') SIZE 300M REUSE;
ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE DROP LOGFILE MEMBER '/opt/oracle/logfiles/redo02b.rdo';

alter database: Mount and Open the Database

ALTER DATABASE MOUNT;
ALTER DATABASE OPEN;

alter database: Move or Rename a Database File or Online Redo Log

NOTE
The database must be mounted to rename or move online redo logs.
The database must be mounted or the data files taken offline to move database data files.

ALTER DATABASE RENAME FILE '/ora/datafile/oldfile.dbf' TO '/ora/datafile/newfile.dbf';
alter database: Open the Database Read-Only
ALTER DATABASE OPEN READ ONLY;
alter database: Open the Database with resetlogs
ALTER DATABASE OPEN RESETLOGS;

What Do All 10g Flashback Features Rely On, and What Are Their Limitations?

The New Oracle 10g Flashback Technologies Consist Of The Following :
  • Flashback Database
  • Flashback Drop
  • Flashback Table
  • Flashback Query
· Flashback Database :
Flashback Database allows you to take the entire database back to a specific point in time. It is best used as a replacement for incomplete recovery of a complete database. The main benefit of Flashback Database over incomplete database recovery is that it is much quicker and more efficient. Flashback Database is based not on undo data but on flashback logs.
If Flashback Database is enabled, its flashback logs are stored in the flash recovery area. Flashback logs are written sequentially during normal database operation, and they are not archived. Oracle automatically creates, deletes, and resizes flashback logs in the flash recovery area. You only need to be aware of flashback logs when monitoring performance and when deciding how much disk space to allocate to the flash recovery area for them.
The time it takes to flash back a database is proportional to how far back you need to revert, rather than to the time it would take to restore and recover the whole database, which could be much longer. The before images in the flashback logs are used only to return the database to a point in the past; forward recovery is then used to bring the database to a consistent state at that point in time. Oracle returns datafiles to the previous point in time, but not auxiliary files, such as initialization parameter files.
DB_FLASHBACK_RETENTION_TARGET:
An initialization parameter that determines how far back in time (in minutes) you can flash back the database.
The setting of the DB_FLASHBACK_RETENTION_TARGET initialization parameter indirectly determines how much flashback log data the database retains. The size of the flashback logs generated for a given time period can vary considerably, however, depending on the specific database workload: if more blocks are affected by database updates during a given interval, then more disk space is used by the flashback log data generated for that interval.
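As a hedged sketch, enabling Flashback Database and setting this retention target might look like the following (it assumes a flash recovery area is already configured; the 2880-minute value is only an example):

```sql
-- Sketch: enable Flashback Database. The database must be mounted, not open,
-- and a flash recovery area must already be configured.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER SYSTEM SET db_flashback_retention_target = 2880;  -- minutes (2 days)
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;
```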
The V$FLASHBACK_DATABASE_LOG view can help you estimate how much space to add to your flash recovery area for flashback logs. After you have enabled logging for Flashback Database and set a flashback retention target, allow the database to run under a normal workload for a while, to generate a representative sample of flashback logs. Then run the following query:
SQL> SELECT estimated_flashback_size FROM v$flashback_database_log;
Limitations of Flashback Database :
Because Flashback Database works by undoing changes to the datafiles that exist at the moment that you run the command, it has the following limitations:
· Flashback Database can only undo changes to a datafile made by an Oracle database. It cannot be used to repair media failures, or to recover from accidental deletion of datafiles.
· You cannot use Flashback Database to undo a shrink datafile operation.
· If the database control file is restored from backup or re-created, all accumulated flashback log information is discarded. You cannot use FLASHBACK DATABASE to return to a point in time before the restore or re-creation of a control file.
· When using Flashback Database with a target time at which a NOLOGGING operation was in progress, block corruption is likely in the database objects and datafiles affected by the NOLOGGING operation. For example, if you perform a direct-path INSERT operation in NOLOGGING mode, and that operation runs from 9:00 to 9:15 on April 3, 2005, and you later need to use Flashback Database to return to the target time 09:07 on that date, the objects and datafiles updated by the direct-path INSERT may be left with block corruption after the Flashback Database operation completes.
If possible, avoid using Flashback Database with a target time or SCN that coincides with a NOLOGGING operation. Also, perform a full or incremental backup of the affected datafiles immediately after any NOLOGGING operation to ensure recoverability to points in time after the operation. If you expect to use Flashback Database to return to a point in time during an operation such as a direct-path INSERT, consider performing the operation in LOGGING mode.
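For the scenario above, a sketch of a Flashback Database run to a point just before the NOLOGGING window might look like this (the timestamp is illustrative, and the database must be mounted):

```sql
-- Illustrative sketch: flash back to just before the NOLOGGING operation,
-- then open with RESETLOGS as required after a flashback.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO TIMESTAMP
  TO_TIMESTAMP('2005-04-03 08:55:00', 'YYYY-MM-DD HH24:MI:SS');
ALTER DATABASE OPEN RESETLOGS;
```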

Steps to recover Applications context file if it is corrupted or deleted accidentally?

The Applications context file can be retrieved by running the adclonectx.pl script.
To retrieve the Applications tier context file,
  • perl /clone/bin/adclonectx.pl retrieve
  • On being prompted for the context file to be retrieved, select the option of retrieving the
          Applications tier context file that has been lost and retrieve it to the default location specified
          by the script.
The above command can be used only when the INST_TOP is still intact. If that has also been lost
accidentally, the Applications tier context file may be retrieved as follows:
  • Execute the following command on the Database tier: perl /appsutil/clone/bin/adclonectx.pl retrieve
  • On being prompted for the context file to be retrieved, select the option of retrieving the
          Applications tier context file that has been lost.
  • While confirming the location for the context file, set it to any existing directory with write permission.
  • Once the context file has been generated in the specified location, move it to the location specified
          for the context file in the context variable 's_contextfile'.
To retrieve the Database tier context file,
  • Execute the following command on the Database tier: perl /appsutil/clone/bin/adclonectx.pl retrieve
  • On being prompted for the context file to be retrieved, select the Database tier context file and
          retrieve it to the default location specified by the script.

How to relink all binaries after an OS Upgrade to ensure that everything is working as expected on a Release 11i or a Release 12 Installation on Unix ?

To get everything relinked, and to ensure that the adadmin Utility is relinked before it is used, follow these steps:

Release 11i (Autoconfig enabled) :

Technology Stack Binaries including the Database:

Source the RDBMS ORACLE_HOME Environment
Execute the script $ORACLE_HOME/appsutil/install/adlnkoh.sh
Source the iAS ORACLE_HOME Environment
Execute the script $ORACLE_HOME/bin/adlnkiAS.sh
Source the APPS Environment
Execute the script $ORACLE_HOME/bin/adlnk806.sh


Oracle E-Business-Suite Binaries :
1. Source the APPS Environment

2. Execute following command :
adrelink.sh force=y "ad adadmin"
If you use the command
adrelink.sh force=y "ad all"
you relink all binaries under $AD_TOP/bin; otherwise the remaining AD Utilities are relinked via the adadmin Utility.


3. Execute adadmin
select --> '2. Maintain Applications Files menu'

select --> '1. Relink Applications programs'
--> select all modules to relink

Before starting the Instance, please open a new shell and source the Environment.
Release 12 :


Technology Stack Binaries including the Database:

Source the RDBMS ORACLE_HOME Environment
Execute the script $ORACLE_HOME/appsutil/clone/adlnkoh.sh
Source the APPS Environment
Execute the script $IAS_ORACLE_HOME/appsutil/clone/adlnkweboh.sh
Source the APPS Environment
Execute the script $ORACLE_HOME/appsutil/clone/adlnktools.sh


Oracle E-Business-Suite Binaries :

1. Source the APPS Environment

2. Execute following command
adrelink.sh force=y "ad adadmin"


If you use the command
adrelink.sh force=y "ad all"
you relink all binaries under $AD_TOP/bin; otherwise the remaining AD Utilities are relinked via the adadmin Utility.

3. Execute adadmin
select --> '2. Maintain Applications Files menu'

select --> '1. Relink Applications programs'
--> select all modules to relink

Before starting the Instance, please open a new shell and source the Environment.

Relink ASM, Agent, OMS Home

. oraenv [+ASM]
cdo
# Note: cdo is a shell alias defined as: alias cdo='cd $ORACLE_HOME'

relink oracle
relink client_sharedlib
relink client
relink utilities
relink ldap
#relink oemagent - no target
relink network
relink ctx
relink interMedia

DBA Paranoia: To be doubly sure, run make files as listed in $ORACLE_HOME/inventory/make/makeorder.xml
make -f rdbms/lib/ins_rdbms.mk ioracle
make -f rdbms/lib/ins_rdbms.mk client_sharedlib
make -f sqlplus/lib/ins_sqlplus.mk install
make -f network/lib/ins_net_client.mk preinstall
make -f network/lib/ins_net_client.mk itnsping
make -f rdbms/lib/ins_rdbms.mk "irman"
make -f plsql/lib/ins_plsql.mk "install"
make -f network/lib/ins_net_client.mk "nnfgt.o"
make -f network/lib/ins_net_client.mk "mkldflags"
make -f rdbms/lib/ins_rdbms.mk "utilities"
make -f ldap/lib/ins_ldap.mk "clientonlyinstall"
make -f network/lib/ins_net_client.mk "ntcontab.o"
make -f network/lib/ins_net_client.mk  "nnfgt.o"
make -f network/lib/ins_net_client.mk  "install"  
make -f network/lib/ins_nau.mk "ioklist"
make -f network/lib/ins_nau.mk  "iokinit"
make -f network/lib/ins_nau.mk "iokdstry"
make -f sysman/lib/ins_emagent.mk "agent"
make -f sysman/lib/ins_emagent.mk "nmb"
make -f sysman/lib/ins_emagent.mk  "nmo"
make -f sysman/lib/ins_emagent.mk "nmhs"
make -f sysman/lib/ins_emagent.mk "tclexec"
make -f rdbms/lib/ins_rdbms.mk "all_no_orcl"
make -f srvm/lib/ins_srvm.mk "install"
make -f racg/lib/ins_has.mk  "racg_install"
make -f network/lib/ins_net_server.mk "install"
make -f rdbms/lib/ins_rdbms.mk "ctx_on"
make -f ctx/lib/ins_ctx.mk "install"
make -f rdbms/lib/ins_rdbms.mk  "ipc_g"
make -f sysman/lib/ins_emagent.mk "emsubagent"
make -f network/lib/ins_net_client.mk "mkldflags"
make -f rdbms/lib/ins_rdbms.mk "idg4odbc"
make -f precomp/lib/ins_precomp.mk "links"
make -f precomp/lib/ins_precomp.mk "relink"
make -f precomp/lib/ins_precomp.mk  "gen_pcscfg"
make -f rdbms/lib/ins_rdbms.mk "svr_tool"
make -f rdbms/lib/ins_rdbms.mk "patchset_opt"

Execute the $ORACLE_HOME/root.sh script to reset the appropriate file permissions/flags:
sudo $ORACLE_HOME/root.sh

Startup Oracle listener
Start up ASM instance

Relink agent home


# agentenv is a custom script that sets Oracle environment variables appropriately 
# for the agent home - all homes are using the one OS user - 'oracle'. 
# Normally I recommend one OS user per home.

. agentenv
relink client
relink oemagent

DBA Paranoia: To be doubly sure, run make files as listed in $ORACLE_HOME/inventory/make/makeorder.xml
cdo
make -f network/lib/ins_net_client.mk itnsping
make -f network/lib/ins_net_client.mk nnfgt.o
make -f network/lib/ins_net_client.mk mkldflags
make -f network/lib/ins_net_client.mk client_sharedlib
make -f sysman/lib/ins_emagent.mk agent
make -f sysman/lib/ins_emagent.mk nmb
make -f sysman/lib/ins_emagent.mk nmo
make -f sysman/lib/ins_emagent.mk nmhs
make -f sysman/lib/ins_emagent.mk emsubagent

Execute the $ORACLE_HOME/root.sh script to reset the appropriate file permissions/flags:
sudo $ORACLE_HOME/root.sh

Relink OMS Home


Note: omsenv is a custom script that sets shell variables for the OEM/OMS home.
. omsenv
cdo

relink client
relink network
relink client_sharedlib
relink oemagent
relink utilities

DBA Paranoia: To be doubly sure, run make files as listed in $ORACLE_HOME/inventory/make/makeorder.xml
make -f network/lib/ins_net_client.mk "nnfgt.o"
make -f network/lib/ins_net_client.mk "mkldflags"
make -f network/lib/ins_net_client.mk "client_sharedlib"
make -f sysman/lib/ins_sysman.mk agent
make -f sysman/lib/ins_sysman.mk nmo
make -f sysman/lib/ins_sysman.mk nmb
make -f sqlplus/lib/ins_sqlplus.mk install
make -f network/lib/ins_net_client.mk install
make -f network/lib/ins_nau.mk ioklist
make -f network/lib/ins_nau.mk iokinit
make -f network/lib/ins_nau.mk iokdstry
make -f webcache/lib/ins_calypso.mk install

Execute the $ORACLE_HOME/root.sh script to reset the appropriate file permissions/flags:
sudo $ORACLE_HOME/root.sh

Relinking Oracle Application Server 10g R2 (10.1.2) and 10g R3 (10.1.3) on UNIX

All Oracle Application Server 10g processes from the Oracle Home to be relinked need to be stopped using normal
administrative steps prior to beginning the relink.
Setup your environment correctly. Since both OracleAS 10g R2 and R3 are based on Oracle 10.1 database software,
pick the environment variable settings that apply to this version.

Relink Oracle Application Server 10g R2 (10.1.2/10.1.4.0.1)

Relinking the Infrastructure (Identity Management (IM) + Metadata Repository (MR), or MR Only)
  1. Relink Database software
    $ORACLE_HOME/bin/relink all
    cd $ORACLE_HOME/ldap/lib (Not required for MR Only)
    make -f ins_ldap.mk install
    make -f ins_ldap.mk hragentinstall
  2. Relink Application Server Control
    cd $ORACLE_HOME/sysman/lib
    make -f ins_sysman.mk agent
  3. As root execute $ORACLE_HOME/root.sh, and $ORACLE_HOME/root.sh.old if it exists.
This is needed to correctly set the permissions on the OID, and Application Server Control executables.
Relinking the Infrastructure (IM Only)
  1. Relink Database software
    cd $ORACLE_HOME/network/lib
    make -f ins_net_client.mk client_sharedlib
    make -f ins_net_client.mk install
    cd $ORACLE_HOME/rdbms/lib
    make -f ins_rdbms.mk utilities
    cd $ORACLE_HOME/plsql/lib
    make -f ins_plsql.mk install
    cd $ORACLE_HOME/sqlplus/lib
    make -f ins_sqlplus.mk install
    cd $ORACLE_HOME/ldap/lib
    make -f ins_ldap.mk install
    make -f ins_ldap.mk hragentinstall
  2. Relink Application Server Control
    cd $ORACLE_HOME/sysman/lib
    make -f ins_sysman.mk agent
  3. As root execute $ORACLE_HOME/root.sh, and $ORACLE_HOME/root.sh.old if it exists.
This is needed to correctly set the permissions on the OID, and Application Server Control executables.
Relinking the Middle Tier
  1. Relink Database software:
    cd $ORACLE_HOME/network/lib
    make -f ins_net_client.mk client_sharedlib
    make -f ins_net_client.mk install
    make -f ins_net_server.mk install (Business Intelligence & Forms only)
    cd $ORACLE_HOME/rdbms/lib (Not required for J2EE & Web Cache)
    make -f ins_rdbms.mk iexp
    make -f ins_rdbms.mk iimp
    make -f ins_rdbms.mk iloadpsp
    cd $ORACLE_HOME/plsql/lib (Not required for J2EE & Web Cache)
    make -f ins_plsql.mk install
    cd $ORACLE_HOME/sqlplus/lib
    make -f ins_sqlplus.mk install
    cd $ORACLE_HOME/ldap/lib (Not required for J2EE & Web Cache)
    make -f ins_ldap.mk toolsinstall
  2. Relink Application Server Control
    cd $ORACLE_HOME/sysman/lib
    make -f ins_sysman.mk agent
  3. Relink Web Cache
    cd $ORACLE_HOME/webcache/lib
    make -f ins_calypso.mk install
  4. Relink Forms (Business Intelligence & Forms only)
    cd $ORACLE_HOME/forms/lib
    make -f ins_forms.mk frmweb_install
    make -f ins_forms.mk frmcmp_install
    make -f ins_forms.mk frmcmpb_install
  5. Relink Reports (Business Intelligence & Forms only)
    cd $ORACLE_HOME/reports/lib
    make -f ins_reports.mk install
  6. As root execute $ORACLE_HOME/root.sh, and $ORACLE_HOME/root.sh.old if it exists.
This is needed to correctly set the permissions on the Application Server Control executables.

If permissions were previously modified for Web Cache, you will need to re-run any scripts (or apply manual changes)
to restore the original functionality. For more information about webcache_setuser.sh,
please refer to the following documentation:
Oracle Application Server Web Cache Administrator's Guide 10g (10.1.2)
http://download.oracle.com/docs/cd/B14099_19/caching.1012/b14046/toc.htm
8 Setup and Configuration
Running webcached with Root Privilege

Relink Oracle Application Server 10g R3 (10.1.3)

Relinking the Middle Tier
  1. Relink Database software:
    cd $ORACLE_HOME/network/lib
    make -f ins_net_client.mk client_sharedlib
    make -f ins_net_client.mk install
    cd $ORACLE_HOME/sqlplus/lib
    make -f ins_sqlplus.mk install
2. As root execute $ORACLE_HOME/root.sh, and $ORACLE_HOME/root.sh.old if it exists. This is needed to correctly
set the permissions on the newly relinked executables.

Thursday, June 2, 2011

Automatic Database Diagnostic Monitor (ADDM)

Automatic Database Diagnostic Monitor (ADDM)
Oracle now provides automatic performance tuning capabilities. The heart of this feature is the new statistics collection facility, the Automatic Workload Repository (AWR), which automatically collects and stores statistical data in the SYSAUX tablespace. ADDM ranks both the problems and its recommendations according to the crucial DB time statistic. AWR collects new performance statistics in the form of hourly snapshots (the MMON process services the AWR request) and saves them to the SYSAUX tablespace; each snapshot captures a single point in time. Every time AWR runs, ADDM automatically performs a top-down system analysis and reports its findings on the Database Control home page.
See AWR on how to setup and configure it.
The purpose of ADDM is to reduce a key database metric called DB time, which is the total time (in microseconds) the database spends actually processing user requests. DB time includes the total amount of time spent on actual database calls (at the user level) and ignores time spent by background processes. ADDM only reports on processes that contribute excessive DB time.
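As an illustration, the DB time statistic that ADDM tries to reduce can be inspected directly from the time-model view:

```sql
-- Accumulated DB time and DB CPU (in microseconds) since instance startup
SELECT stat_name, value
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');
```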
ADDM will report on the following
  • Expensive SQL/Java statements
  • I/O performance issues
  • Locking and Concurrency issues
  • Excessive parsing
  • High checkpoint load
  • Resource bottlenecks, including memory and CPU bottlenecks
  • Undersized memory allocations
  • Connection management issues, such as excessive logon/logoff activity
The report itself will contain
  • Expert problem diagnosis
  • Emphasis on the root cause of the problem rather than on the symptoms
  • A ranking of the effects of the problems, which means you can quickly find the problem
  • Recommendations ranked according to their benefit
Configuring ADDM
To activate the AWR, change the system parameter statistics_level to one of three values
  • basic - this option disables the AWR
  • typical (default) - activates standard level of collection
  • all - same as typical but includes execution plans and timing info from the O/S
Active alter system set statistics_level = typical;
alter system set statistics_level = all;
De-active alter system set statistics_level = basic;
Display show parameter statistics_level;
To change the snapshot interval and how many days the snapshots are kept you use the package dbms_workload_repository or Enterprise Manager
Snapshot configuration
Change snapshotting values exec dbms_workload_repository.modify_snapshot_settings ( interval => 60, retention => 43200);
interval = minutes
retention = minutes (43200 minutes = 30 days)
Display values select * from dba_hist_wr_control;
Snapshot Management
Create a snapshot exec dbms_workload_repository.create_snapshot;
Delete snapshots exec dbms_workload_repository.drop_snapshot_range (low_snap_id=>1077, high_snap_id=>1078);
Display snapshots select snap_id, begin_interval_time, end_interval_time from dba_hist_snapshot order by 1;
ADDM views
There are a number of views that should be used when working with ADDM
DBA_ADVISOR_FINDINGS findings identified by ADDM
DBA_ADVISOR_OBJECTS describes the objects that are referenced in findings
DBA_ADVISOR_RECOMMENDATIONS describes the recommendations based on ADDM findings
DBA_ADVISOR_RATIONALE describes the rationale behind each ADDM finding
DBA_SCHEDULER_JOBS lists the gather_stats_job which runs the automatic snapshotting
dba_hist_baseline displays baselines (see below)
dba_hist_snapshot displays the snapshots that are available
dba_hist_wr_control displays the current snapshot settings
v$sys_time_model provides the accumulated time statistics for various operations in the entire database (in microseconds)
v$sess_time_model provides the accumulated time statistics for various operations in the session (in microseconds)
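As a sketch, ADDM problem findings can also be queried straight from these views (the column choice here is illustrative):

```sql
-- List ADDM problem findings with their estimated impact, worst first
SELECT task_name, type, impact, message
FROM   dba_advisor_findings
WHERE  type = 'PROBLEM'
ORDER  BY impact DESC;
```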
ADDM Report
To run an ADDM report you can use the following operating system script or use Enterprise Manager.
addmrpt.sql the script will ask for a begin snapshot and an end snapshot, plus whether you want the report in text or HTML
ADDM baselines
The main purpose of a baseline is to preserve typical runtime statistics in the AWR repository, allowing you to run the AWR snapshot reports on the preserved baseline snapshots at any time and compare them to recent snapshots in the AWR.
Create baseline exec dbms_workload_repository.create_baseline (
  start_snap_id=> 1007,
  end_snap_id=> 1009,
  baseline_name=>'EOM baseline'
);
Remove baseline exec dbms_workload_repository.drop_baseline ( baseline_name => 'EOM baseline');
Display baselines select baseline_id, baseline_name, start_snap_id, end_snap_id from dba_hist_baseline;
Server Alerts
Server-generated alerts are controlled by the MMON (manageability monitor) process, which is assisted by MMNL (manageability monitor light). The metrics are gathered and thresholds checked once every minute. There are two thresholds: warning and critical. There are over 100 alerts; MMON compares the metrics to the configured thresholds and writes a message to the ALERT_QUEUE (in the AWR, in the SYSAUX tablespace). Processes can subscribe to this queue (and other queues) to pass information from one process to another, but you only need to do this if you write your own alert handler. The Enterprise Manager daemon reads the ALERT_QUEUE and can place alert messages in the GUI or even send an email; you can also write your own alert handler. Remember the metrics are collected from the SYSAUX tablespace, not the v$ views. There is no connection between the alert log file and the server alert system.
Alert Types select internal_metric_name from v$alert_types;
Current alerts select reason, object_type type, object_name name from dba_outstanding_alerts;
Old Alerts select reason, object_type type, object_name name from dba_alert_history;
Useful Views
dba_outstanding_alerts current alert waiting for resolution
dba_alert_history alerts that have been cleared
dba_thresholds threshold settings defined for the instance
v$alert_types alert type and group information
v$metric system-level metric values in memory
v$metricname names, identifiers and other info about system metrics
v$metric_history historical system-level metric values in memory
Stateless (non-threshold) alerts are stored in dba_alert_history because they are resolved as soon as they occur. Remember that the parameter statistics_level must be set to typical or all in order to obtain alerts. Shutdown commands are recorded only in the alert log file.
The dbms_server_alert package contains procedures to change the alerts
Change threshold exec dbms_server_alert.set_threshold(
  dbms_server_alert.tablespace_pct_full,
  dbms_server_alert.operator_ge, 90,
  dbms_server_alert.operator_ge, 99,
  1, 1, null, dbms_server_alert.object_type_tablespace,
'USERS2');
Set_threshold fields
-----------------------------------------------------------------------------
Metrics_id - the name of the metric
Warning_operator - the comparison operator to compare values
Warning_value - warning threshold
Critical_operator - comparison operator for comparing the current value to the critical threshold
Critical_value - critical threshold
Observation_period - timer period at which metrics are computed against the threshold
Consecutive_occurences - how many times the value exceeds the threshold before an alert is raised
Instance_name - the instance that the threshold is applied
Object_type - object type i.e tablespace, session, service, etc
Object_name - name of the object

Get Threshold set serveroutput on
DECLARE
  vWarnOp NUMBER(10);
  vWarnVal VARCHAR2(100);
  vCritOp NUMBER(10);
  vCritVal VARCHAR2(100);
  vObsvPer NUMBER(5);
  vConOcur NUMBER(5);
BEGIN
  dbms_server_alert.get_threshold(dbms_server_alert.tablespace_pct_full,
     vWarnOp, vWarnVal, vCritOp, vCritVal, vObsvPer, vConOcur, NULL,
  dbms_server_alert.object_type_tablespace, 'TEST');
  dbms_output.put_line('WarnOp: ' || TO_CHAR(vWarnOp));
  dbms_output.put_line('WarnVal: ' || vWarnVal);
  dbms_output.put_line('CritOp: ' || TO_CHAR(vCritOp));
  dbms_output.put_line('CritVal: ' || vCritVal);
  dbms_output.put_line('Observation: ' || vObsvPer);
  dbms_output.put_line('Occurrences: ' || vConOcur);
END;
/
Expand_message select dbms_server_alert.expand_message(null, 6, null, null, null, null, null) alert_msg from dual;
Expand Message fields
----------------------------------------------------------------------------
User_language - the current sessions language
Message_id - alert message ID
Argument_1 - 1 st argument
Argument_2 - 2nd argument
Argument_3 - 3rd argument
Argument_4 - 4th argument
Argument_5 - 5th argument

Automatic Workload Repository (AWR)

Automatic Workload Repository (AWR)
The AWR collects and stores database statistics relating to problem detection and tuning. AWR is a replacement for the Statspack utility, which helped gather database performance statistics. AWR generates snapshots of key performance data, such as system and session statistics, segment-usage statistics, time-model statistics, and high-load statistics, and stores them in the SYSAUX tablespace.
AWR provides statistics in two formats
  • temporary - in memory collection of statistics in the SGA, accessible via the V$ views
  • persistent - type of performance data in the form of regular AWR snapshots which you access via the DBA_ views
The MMON process is responsible for collecting the statistics in the SGA and saving them to the SYSAUX tablespace.
AWR will collect data on the following
  • Base statistics that are also part of the v$SYSSTAT and V$SESSTAT views
  • SQL statistics
  • Database object-usage statistics
  • Time-model statistics
  • Wait statistics
  • ASH (active session history) statistics
  • Operating system statistics
Tables that AWR uses to collect statistics
v$sys_time_model time model stats (db time, java execution time, pl/sql execution time, etc)
v$osstat operating system stats (avg_busy_ticks, avg_idle_ticks, etc)
v$service_stats wait statistics ( db cpu, app wait time, user commits, etc)
v$sysstat system stats
v$sesstat session stats
Database performance stats fall into one of three categories:
  • Cumulative values - collect stats over a period of time from the v$sysstat, etc
  • Metrics - values derived from the base statistics to make them meaningful (typically rates per unit of time)
  • Sampled data - the ASH sampler is used to collect these stats.
AWR Setup
To activate the AWR, change the system parameter statistics_level to one of three values
  • basic - this option disables the AWR
  • typical (default) - activates standard level of collection
  • all - same as typical but includes execution plans and timing info from the O/S
Active alter system set statistics_level = typical;
alter system set statistics_level = all;
De-active alter system set statistics_level = basic;
Display show parameter statistics_level;
To change the snapshot interval and how many days the snapshots are kept you use the package dbms_workload_repository or Enterprise Manager
Snapshot configuration
Change snapshotting values exec dbms_workload_repository.modify_snapshot_settings ( interval => 60, retention => 43200);
interval = minutes
retention = minutes (43200 minutes = 30 days)
Display values select * from dba_hist_wr_control;
Snapshot Management
Create a snapshot exec dbms_workload_repository.create_snapshot;
Delete snapshots exec dbms_workload_repository.drop_snapshot_range (low_snap_id => 1077, high_snap_id => 1078);
Create a baseline exec dbms_workload_repository.create_baseline (start_snap_id => 1070, end_snap_id => 1078, baseline_name => 'Normal Baseline');
Delete a baseline exec dbms_workload_repository.drop_baseline (baseline_name => 'Normal Baseline', cascade => FALSE);
Display snapshots select snap_id, begin_interval_time, end_interval_time from dba_hist_snapshot order by 1;
View the repository tables select table_name from dba_tables where tablespace_name = 'SYSAUX' and substr(table_name, 1, 2) = 'WR' and rownum <= 20 order by 1;
Useful Views
dba_hist_active_sess_history ASH info (see below)
dba_hist_baseline baseline info
dba_hist_database_instance environment data
dba_hist_sql_plan sql execution path data
dba_hist_wr_control AWR settings
dba_hist_snapshot snapshot info in the AWR
AWR Report
To run an AWR report you can use the following operating system scripts or use Enterprise Manager.
awrrpt.sql the script will ask for a begin snapshot and an end snapshot, plus whether you want the report in text or HTML format
Note: the report is generated in the directory from which you run the script (here $ORACLE_HOME/db_1/bin)
awrrpti.sql same as awrrpt.sql, but also lets you specify the database ID and instance for which the report is generated
Note: the report is generated in the directory from which you run the script
You can obtain more information regarding the AWR from the Automatic Database Diagnostic Monitor (ADDM).

Tuesday, May 31, 2011

How to Enable Automatic Compilation of JSP pages in R12

1. Log in to E-Business Suite and select the System Administrator responsibility

2. Select function AutoConfig (under Oracle Applications Manager)
For each web tier server perform the following:
Click on pencil icon under Edit Parameters
Select tab System
Expand section jtff_server

3. Change the value of the entry s_jsp_main_mode from justrun to recompile
Confirm the change by clicking the Save button


4. Run AutoConfig to propagate the changes to the configuration files
Verify that $INST_TOP/ora/10.1.3/j2ee/oacore/application-deployments/oacore/html/orion-web.xml now contains the following:

Check the param-name "main_mode" under the init-param variables;
its value should have changed to "recompile"

5. Restart the web tier services
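A quick way to double-check the result from the command line is a grep against orion-web.xml. The sketch below runs against a sample copy so it is self-contained; on a real instance, point FILE at the actual file under $INST_TOP as shown in step 4.

```shell
# Sketch: confirm main_mode is set to recompile. FILE here is a sample copy;
# substitute the real orion-web.xml path from your $INST_TOP.
FILE=/tmp/orion-web-sample.xml
cat > "$FILE" <<'EOF'
<init-param>
  <param-name>main_mode</param-name>
  <param-value>recompile</param-value>
</init-param>
EOF
# Print the value following the main_mode parameter name
grep -A1 '<param-name>main_mode</param-name>' "$FILE" | grep -o 'recompile'
```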

The script below gets the list of all inactive users and end-dates all the responsibilities assigned to them.

DECLARE
  -- cursor to get all inactive (end-dated) users with open responsibility assignments
  CURSOR cur_inactive_user IS
    SELECT fu.user_id,
           fd.responsibility_id,
           fd.responsibility_application_id,
           fd.security_group_id,
           fd.start_date,
           fd.end_date
    FROM   fnd_user fu,
           fnd_user_resp_groups_direct fd
    WHERE  fu.user_id = fd.user_id
    AND    fu.end_date IS NOT NULL   -- user has been end-dated
    AND    fd.end_date IS NULL;      -- responsibility assignment is still open
BEGIN
  FOR rec_inactive_user IN cur_inactive_user
  LOOP
    -- check that the responsibility is assigned to the user
    IF (fnd_user_resp_groups_api.assignment_exists
          (rec_inactive_user.user_id,
           rec_inactive_user.responsibility_id,
           rec_inactive_user.responsibility_application_id,
           rec_inactive_user.security_group_id)) THEN
      -- call the API to end-date the responsibility as of now
      -- (the cursor rows have fd.end_date NULL, so pass SYSDATE instead)
      fnd_user_resp_groups_api.update_assignment
        (user_id                       => rec_inactive_user.user_id,
         responsibility_id             => rec_inactive_user.responsibility_id,
         responsibility_application_id => rec_inactive_user.responsibility_application_id,
         security_group_id             => rec_inactive_user.security_group_id,
         start_date                    => rec_inactive_user.start_date,
         end_date                      => SYSDATE,
         description                   => NULL);
      COMMIT;
    END IF;
  END LOOP;
END;

Saturday, April 16, 2011

Output Post Processor

Concurrent processing uses the Output Post Processor (OPP) to enforce post-processing
actions for concurrent requests.

Post-processing actions are actions taken on concurrent
request output. An example of a post-processing action is that used in publishing
concurrent requests with XML Publisher.

For example, say a request is submitted with
an XML Publisher template specified as a layout for the concurrent request output.
After the concurrent manager finishes running the concurrent program, it will contact
the OPP to apply the XML Publisher template and create the final output.


A concurrent manager contacts an available OPP process when a running concurrent
request needs an OPP processing action. A concurrent manager uses a local OPP
process (that is, on the same node) by default, but will choose a remote OPP if no local
OPP process is available.

There should always be at least one OPP process active in the system. If no OPP service
is available, completed requests that require OPP processing will complete with a status
of Warning.


An OPP service is multi-threaded and will start a new thread for each concurrent
request it processes. You can control the number of simultaneous threads for an OPP
service instance by adjusting the Threads per Process parameter for the instance.

If all the OPP services have reached their respective maximum number of threads, the
requests waiting to be processed remain in a queue to be processed as soon as threads
become available.

If request throughput has become slow, you may want to increase the
number of threads per process for the OPP. It is recommended that you keep the
number of threads per process between 1 and 20.
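To check whether an OPP service is defined and how many processes it is running, a query along these lines can help (the LIKE pattern is an assumption; adjust it to how the OPP manager is named on your instance):

```sql
-- Locate the Output Post Processor service and its process counts
SELECT concurrent_queue_name, max_processes, running_processes
FROM   fnd_concurrent_queues
WHERE  concurrent_queue_name LIKE '%OPP%';
```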