Saturday, December 4, 2010

DBNEWID

D:\oracle\Ora92\bin>nid help=yes
DBNEWID: Release 9.2.0.6.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Keyword     Description                      (Default)
----------------------------------------------------
TARGET      Username/Password                (NONE)
DBNAME      New database name                (NONE)
LOGFILE     Output Log                       (NONE)
REVERT      Revert failed change             NO
SETNAME     Set a new database name only     NO
APPEND      Append to output log             NO
HELP        Displays these messages          NO
D:\oracle\Ora92\bin>
DBNEWID is a database utility that can change the internal database identifier (DBID) and the database name (DBNAME) for an operational database.
Prior to the introduction of the DBNEWID utility, you could manually create a copy of a database and give it a new database name (DBNAME) by re-creating the control file.
However, you could not give the database a new identifier (DBID). The DBID is an internal, unique identifier for a database. Because Recovery Manager (RMAN) distinguishes databases by DBID, you could not register a seed database and a manually copied database together in the same RMAN repository.
The DBNEWID utility solves this problem by allowing you to change any of the following:
  • Only the DBID of a database
  • Only the DBNAME of a database
  • Both the DBNAME and DBID of a database
To perform these changes, the database must not be open; it must be mounted following a consistent shutdown.
D:\oracle\Ora92\bin>nid target=sys/sys@sridevi dbname=sridevi
DBNEWID: Release 9.2.0.6.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Connected to database SRIDEVI (DBID=937156271)
NID-00121: Database should not be open
Change of database name failed during validation - database is intact.
DBNEWID - Completed with validation errors.
D:\oracle\Ora92\bin>
D:\oracle\Ora92\bin>sqlplus "sys/sys as sysdba"
SQL*Plus: Release 9.2.0.6.0 - Production on Wed Dec 22 19:38:46 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area 126951372 bytes
Fixed Size 454604 bytes
Variable Size 109051904 bytes
Database Buffers 16777216 bytes
Redo Buffers 667648 bytes
Database mounted.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
D:\oracle\Ora92\bin>nid target=sys/sys@sridevi dbname=sridevi
DBNEWID: Release 9.2.0.6.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
Connected to database SRIDEVI (DBID=937156271)
Control Files in database:
D:\ORACLE\ORADATA\SRIDEVI\CONTROL01.CTL
D:\ORACLE\ORADATA\SRIDEVI\CONTROL02.CTL
D:\ORACLE\ORADATA\SRIDEVI\CONTROL03.CTL

NID-00144: New name for database SRIDEVI is the same as current name SRIDEVI
Change of database name failed during validation - database is intact.
DBNEWID - Completed with validation errors.
D:\oracle\Ora92\bin>
What do the Oracle docs say?
Parameters for the DBNEWID Utility
TARGET
Specifies the username and password used to connect to the database. The user must have the SYSDBA privilege. If you are using operating system authentication, then you can connect with the slash (/). If the $ORACLE_HOME and $ORACLE_SID variables are not set correctly in the environment, then you can specify a secure (IPC or BEQ) service to connect to the target database. A target database must be specified in all invocations of the DBNEWID utility.

REVERT
Specify YES to indicate that a failed change of DBID should be reverted (default is NO). The utility signals an error if no change DBID operation is in progress on the target database. A successfully completed change of DBID cannot be reverted. REVERT=YES is only valid when a DBID change failed.

DBNAME=new_db_name
Changes the database name of the database. You can change the DBID and the DBNAME of a database at the same time. To change only the DBNAME, also specify the SETNAME parameter.

SETNAME
Specify YES to indicate that DBNEWID should change the database name of the database but should not change the DBID (default is NO). When you specify SETNAME=YES, the utility only writes to the target database control files.

LOGFILE=logfile
Specifies that DBNEWID should write its messages to the specified file. By default the utility overwrites the previous log. If you specify a log file, then DBNEWID does not prompt for confirmation.

APPEND
Specify YES to append log output to the existing log file (default is NO).

HELP
Specify YES to print a list of the DBNEWID syntax options (default is NO).
Restrictions and Usage Notes
The DBNEWID utility has the following restrictions:
The utility is available only on the UNIX and Windows NT operating systems.
The nid executable file should be owned and run by the Oracle owner because it needs direct access to the datafiles and control files. If another user runs the utility, then set the user ID to the owner of the datafiles and control files.
The DBNEWID utility must access the datafiles of the database directly through a local connection. Although DBNEWID can accept a net service name, it cannot change the DBID of a nonlocal database.
To change the DBID of a database, the database must be mounted and must have been shut down consistently prior to mounting. In the case of an Oracle Real Application Clusters database, the database must be mounted in NOPARALLEL mode.
You must open the database with the RESETLOGS option after changing the DBID. Note that you do not have to open with the RESETLOGS option after changing only the database name.
No other process should be running against the database when DBNEWID is executing. If another session shuts down and starts the database, then DBNEWID aborts.
All online datafiles should be consistent without needing recovery.
Normal offline datafiles should be accessible and writable. If this is not the case, you must drop these files before invoking the DBNEWID utility.
All read-only tablespaces must be accessible and made writable at the operating system level prior to invoking DBNEWID. If these tablespaces cannot be made writable (for example, they are on a CD-ROM), then you must unplug the tablespaces using the transportable tablespace feature and then plug them back in the database before invoking the DBNEWID utility (see the Oracle9i Database Administrator's Guide).
You can only specify REVERT when changing only the DBID.
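Putting these restrictions together, a typical DBID change runs roughly as follows. This is a sketch only; the sridevi service name is carried over from the example above, and the nid invocation happens at the OS shell, not in SQL*Plus.

```sql
-- 1. Shut down cleanly and mount (DBNEWID requires a mounted database
--    that was shut down consistently).
SHUTDOWN IMMEDIATE
STARTUP MOUNT

-- 2. From the OS shell, run the utility (shown here as a comment):
--    nid TARGET=sys/sys@sridevi

-- 3. After a successful DBID change the database is left shut down.
--    Mount it again and open with RESETLOGS, as the restrictions above require.
STARTUP MOUNT
ALTER DATABASE OPEN RESETLOGS;
```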
Examples of Using DBNEWID
Changing Only the DBID
The following example connects with operating system authentication and changes only the DBID:
% nid TARGET=/
Changing the DBID and Database Name
The following example connects as user SYS and changes the DBID and also changes the database name to test2:
% nid TARGET=SYS/oracle@test1 DBNAME=test2
Changing Only the Database Name
The following example connects as user SYSTEM and changes only the database name, and also specifies a log file for the output:
% nid TARGET=SYSTEM/manager@test2 DBNAME=test3 SETNAME=YES LOGFILE=dbid.out 


http://www.ordba.net/Tutorials/OracleUtilities~DBNEWID.htm

Partitioning Enhancements in Oracle Database 11g Release 1

This article uses simple examples to describe the partitioning enhancements in Oracle 11g Release 1.

Extended Composite Partitioning

In previous releases of Oracle, composite partitioning was limited to Range-Hash and Range-List partitioning. Oracle 11g Release 1 extends this to allow the following composite partitioning schemes:
  • Range-Hash (available since 8i)
  • Range-List (available since 9i)
  • Range-Range
  • List-Range
  • List-Hash
  • List-List
The following code provides an example of one of the new composite partitioning schemes. First, we create a table with List-Hash composite partitioning.
CREATE TABLE list_hash_tab (
  id           NUMBER,
  code         VARCHAR2(10),
  description  VARCHAR2(50),
  created_date DATE
)
PARTITION BY LIST (code)
SUBPARTITION BY HASH (id)
(
   PARTITION part_aa VALUES ('AA')
   (
      SUBPARTITION part_aa_01,
      SUBPARTITION part_aa_02
   ),
   PARTITION part_bb VALUES ('BB')
   (
      SUBPARTITION part_bb_01,
      SUBPARTITION part_bb_02
   )
);
Next we populate it with some data, which we expect to be spread throughout the subpartitions.
DECLARE
  l_code  VARCHAR2(10);
BEGIN
  FOR i IN 1 .. 40 LOOP
    IF MOD(i, 2) = 0 THEN
      l_code := 'BB';
    ELSE
      l_code := 'AA';
    END IF;

    INSERT INTO list_hash_tab (id, code, description, created_date)
    VALUES (i, l_code, 'Description for ' || i || ' ' || l_code, SYSDATE);
  END LOOP;
  COMMIT;
END;
/

EXEC DBMS_STATS.gather_table_stats(USER, 'LIST_HASH_TAB', granularity=>'ALL');
Finally, we query the USER_TAB_SUBPARTITIONS view to see if the data has been distributed across the subpartitions.
COLUMN table_name FORMAT A20
COLUMN partition_name FORMAT A20
COLUMN subpartition_name FORMAT A20

SELECT table_name, partition_name, subpartition_name, num_rows
FROM   user_tab_subpartitions
ORDER BY table_name, partition_name, subpartition_name;

TABLE_NAME           PARTITION_NAME       SUBPARTITION_NAME      NUM_ROWS
-------------------- -------------------- -------------------- ----------
LIST_HASH_TAB        PART_AA              PART_AA_01                    7
LIST_HASH_TAB        PART_AA              PART_AA_02                   13
LIST_HASH_TAB        PART_BB              PART_BB_01                   10
LIST_HASH_TAB        PART_BB              PART_BB_02                   10

4 rows selected.

SQL>
The query shows the data has been split into the two partitions based on the value of the CODE column, then hashed between the subpartitions.

Interval Partitioning

Interval partitioning is an extension of range partitioning, where the system is able to create new partitions as they are required. The PARTITION BY RANGE clause is used in the normal way to identify the transition point for the partition, then the new INTERVAL clause is used to calculate the range of new partitions when values go beyond the existing transition point.

The following code shows an example of a table using interval partitioning.
CREATE TABLE interval_tab (
  id           NUMBER,
  code         VARCHAR2(10),
  description  VARCHAR2(50),
  created_date DATE
)
PARTITION BY RANGE (created_date)
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
(
   PARTITION part_01 VALUES LESS THAN (TO_DATE('01-NOV-2007','DD-MON-YYYY'))
);
Querying the USER_TAB_PARTITIONS view shows there is only a single partition.
EXEC DBMS_STATS.gather_table_stats(USER, 'INTERVAL_TAB');

COLUMN table_name FORMAT A20
COLUMN partition_name FORMAT A20
COLUMN high_value FORMAT A40

SELECT table_name, partition_name, high_value, num_rows
FROM   user_tab_partitions
ORDER BY table_name, partition_name;

TABLE_NAME           PARTITION_NAME       HIGH_VALUE                                 NUM_ROWS
-------------------- -------------------- ---------------------------------------- ----------
INTERVAL_TAB         PART_01              TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-M          0
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

1 row selected.

SQL>
Provided we insert data with a created_date value less than '01-NOV-2007', the data will be placed in the existing partition and no new partitions will be created.
INSERT INTO interval_tab VALUES (1, 'ONE', 'One', TO_DATE('16-OCT-2007', 'DD-MON-YYYY'));
INSERT INTO interval_tab VALUES (2, 'TWO', 'Two', TO_DATE('31-OCT-2007', 'DD-MON-YYYY'));
COMMIT;
EXEC DBMS_STATS.gather_table_stats(USER, 'INTERVAL_TAB');

SELECT table_name, partition_name, high_value, num_rows
FROM   user_tab_partitions
ORDER BY table_name, partition_name;

TABLE_NAME           PARTITION_NAME       HIGH_VALUE                                 NUM_ROWS
-------------------- -------------------- ---------------------------------------- ----------
INTERVAL_TAB         PART_01              TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

1 row selected.

SQL>
If we add data beyond the range of the existing partition, a new partition is created.
INSERT INTO interval_tab VALUES (3, 'THREE', 'Three', TO_DATE('01-NOV-2007', 'DD-MON-YYYY'));
INSERT INTO interval_tab VALUES (4, 'FOUR', 'Four', TO_DATE('30-NOV-2007', 'DD-MON-YYYY'));
COMMIT;
EXEC DBMS_STATS.gather_table_stats(USER, 'INTERVAL_TAB');

SELECT table_name, partition_name, high_value, num_rows
FROM   user_tab_partitions
ORDER BY table_name, partition_name;

TABLE_NAME           PARTITION_NAME       HIGH_VALUE                                 NUM_ROWS
-------------------- -------------------- ---------------------------------------- ----------
INTERVAL_TAB         PART_01              TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

INTERVAL_TAB         SYS_P44              TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

2 rows selected.

SQL>
If we insert data for two months after the current largest transition point, only the required partition is created, not the intervening partitions.
INSERT INTO interval_tab VALUES (5, 'FIVE', 'Five', TO_DATE('01-JAN-2008', 'DD-MON-YYYY'));
INSERT INTO interval_tab VALUES (4, 'FOUR', 'Four', TO_DATE('31-JAN-2008', 'DD-MON-YYYY'));
COMMIT;
EXEC DBMS_STATS.gather_table_stats(USER, 'INTERVAL_TAB');

SELECT table_name, partition_name, high_value, num_rows
FROM   user_tab_partitions
ORDER BY table_name, partition_name;

TABLE_NAME           PARTITION_NAME       HIGH_VALUE                                 NUM_ROWS
-------------------- -------------------- ---------------------------------------- ----------
INTERVAL_TAB         PART_01              TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

INTERVAL_TAB         SYS_P44              TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

INTERVAL_TAB         SYS_P45              TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

3 rows selected.

SQL>
Notice that a partition to hold December 2007 data has not been created. If we insert data for this time period, the missing partition is created.
INSERT INTO interval_tab VALUES (7, 'SEVEN', 'Seven', TO_DATE('01-DEC-2007', 'DD-MON-YYYY'));
INSERT INTO interval_tab VALUES (8, 'EIGHT', 'Eight', TO_DATE('31-DEC-2007', 'DD-MON-YYYY'));
COMMIT;
EXEC DBMS_STATS.gather_table_stats(USER, 'INTERVAL_TAB');

SELECT table_name, partition_name, high_value, num_rows
FROM   user_tab_partitions
ORDER BY table_name, partition_name;

TABLE_NAME           PARTITION_NAME       HIGH_VALUE                                 NUM_ROWS
-------------------- -------------------- ---------------------------------------- ----------
INTERVAL_TAB         PART_01              TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

INTERVAL_TAB         SYS_P44              TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

INTERVAL_TAB         SYS_P45              TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

INTERVAL_TAB         SYS_P46              TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-M          2
                                          M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

4 rows selected.

SQL>
The following restrictions apply to interval partitioned tables:
  • Interval partitioning is restricted to a single partitioning key, which must be a NUMBER or DATE column.
  • At least one partition must be defined when the table is created.
  • Interval partitioning is not supported for index-organized tables.
  • You cannot create a domain index on an interval partitioned table.
  • Interval partitioning can be used as the primary partitioning mechanism in composite partitioning, but it can't be used at the subpartition level.
  • A MAXVALUE partition cannot be defined for an interval partitioned table.
  • NULL values are not allowed in the partition column.
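As the first restriction notes, the partitioning key need not be a date. The following sketch (table and column names are invented for illustration) uses a numeric interval, so the database creates a new partition for every 100 values of id:

```sql
CREATE TABLE interval_num_tab (
  id          NUMBER NOT NULL,  -- NOT NULL: NULLs are not allowed in the partition column
  description VARCHAR2(50)
)
PARTITION BY RANGE (id)
INTERVAL (100)
(
  -- At least one partition must be defined when the table is created.
  PARTITION part_0 VALUES LESS THAN (100)
);
```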

System Partitioning

As you would expect, system partitioning allows large tables to be broken down into smaller partitions, but unlike other partitioning schemes, the database has no control over the placement of rows during insert operations. The following example shows the creation of a system partitioned table.
CREATE TABLE system_partitioned_tab (
  id           NUMBER,
  code         VARCHAR2(10),
  description  VARCHAR2(50),
  created_date DATE
)
PARTITION BY SYSTEM
(
  PARTITION part_1,
  PARTITION part_2
);
The partition must be explicitly defined in all insert statements or an error is produced.
INSERT INTO system_partitioned_tab VALUES (1, 'ONE', 'One', SYSDATE);
            *
ERROR at line 1:
ORA-14701: partition-extended name or bind variable must be used for DMLs on tables partitioned by
the System method

SQL>
The PARTITION clause is used to define which partition the row should be placed in.
INSERT INTO system_partitioned_tab PARTITION (part_1) VALUES (1, 'ONE', 'One', SYSDATE);
INSERT INTO system_partitioned_tab PARTITION (part_2) VALUES (2, 'TWO', 'Two', SYSDATE);
COMMIT;
EXEC DBMS_STATS.gather_table_stats(USER, 'SYSTEM_PARTITIONED_TAB');

COLUMN table_name FORMAT A25
COLUMN partition_name FORMAT A20
COLUMN high_value FORMAT A10

SELECT table_name, partition_name, high_value, num_rows
FROM   user_tab_partitions
ORDER BY table_name, partition_name;

TABLE_NAME                PARTITION_NAME       HIGH_VALUE   NUM_ROWS
------------------------- -------------------- ---------- ----------
SYSTEM_PARTITIONED_TAB    PART_1                                   1
SYSTEM_PARTITIONED_TAB    PART_2                                   1

2 rows selected.

SQL>
Notice that the HIGH_VALUE for the partitions is blank.

The PARTITION clause is optional for update and delete statements, but omitting this clause will force all partitions to be scanned, since there is no way to perform automatic partition pruning when the database has no control over row placement. When the PARTITION clause is used, you must be sure to perform the operation against the correct partition.
SQL> DELETE FROM system_partitioned_tab PARTITION (part_2) WHERE id = 1;

0 rows deleted.

SQL> UPDATE system_partitioned_tab PARTITION (part_1) SET code = 'TWO' WHERE id = 2;

0 rows updated.

SQL>
The PARTITION clause can also be used in queries to target specific partitions.
SELECT COUNT(*)
FROM   system_partitioned_tab PARTITION (part_1);

  COUNT(*)
----------
         1

1 row selected.

SQL>
System partitioning gives you all the advantages of partitioning, but leaves the decision of how the data is partitioned to the application layer.

Conditions and restrictions on system partitioning include:
  • If you specify the PARTITION BY SYSTEM clause, but don't define partitions, a single partition is created with the name in the format of "SYS_Pn".
  • If you specify PARTITION BY SYSTEM PARTITIONS n clause, the database creates "n" partitions with the name in the format of "SYS_Pn". The range of allowable values for "n" is from 1 to 1024K-1.
  • System partitioning is not available for index-organized tables or a table that is part of a cluster.
  • System partitioning can play no part in composite partitioning.
  • You cannot split a system partition.
  • System partitioning cannot be specified in a CREATE TABLE ... AS SELECT statement.
  • To insert data into a system-partitioned table using an INSERT INTO ... AS subquery statement, you must use partition-extended syntax to specify the partition into which the values returned by the subquery will be inserted.
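As an illustration of the second point above, the following sketch (table name invented) lets the database generate the partitions automatically rather than naming them:

```sql
-- Creates four partitions with system-generated names in the SYS_Pn format;
-- we only choose how many partitions exist, not what they are called.
CREATE TABLE system_auto_tab (
  id   NUMBER,
  code VARCHAR2(10)
)
PARTITION BY SYSTEM PARTITIONS 4;
```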

Reference Partitioning

Reference partitioning allows tables related by foreign keys to be logically equi-partitioned. The child table is partitioned using the same partitioning key as the parent table without having to duplicate the key columns. Partition maintenance operations performed on the parent table are reflected on the child table, but no partition maintenance operations are allowed on the child table.

The following code contains a partitioned parent table and a dependent reference partitioned child table.
CREATE TABLE parent_tab (
  id           NUMBER NOT NULL,
  code         VARCHAR2(10) NOT NULL,
  description  VARCHAR2(50),
  created_date DATE,
  CONSTRAINT parent_tab_pk PRIMARY KEY (id)
)
PARTITION BY RANGE (created_date)
(
   PARTITION part_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY')),
   PARTITION part_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'))
);

CREATE TABLE child_tab (
  id             NUMBER NOT NULL,
  parent_tab_id  NUMBER NOT NULL,
  code           VARCHAR2(10),
  description    VARCHAR2(50),
  created_date   DATE,
  CONSTRAINT child_tab_pk PRIMARY KEY (id),
  CONSTRAINT child_parent_tab_fk FOREIGN KEY (parent_tab_id)
                               REFERENCES parent_tab (id)
)
PARTITION BY REFERENCE (child_parent_tab_fk);
Child records whose foreign keys reference rows in the first partition of the parent table should be placed in the first partition of the child table. So we insert two rows into the first partition of the parent table and one row into the second. We then insert three rows into the child table: one referencing a row in the first partition, and two referencing a row in the second partition of the parent table.
INSERT INTO parent_tab VALUES (1, 'ONE', '1 ONE', SYSDATE);
INSERT INTO parent_tab VALUES (2, 'TWO', '2 TWO', SYSDATE);
INSERT INTO parent_tab VALUES (3, 'THREE', '3 THREE', ADD_MONTHS(SYSDATE,12));

INSERT INTO child_tab VALUES (1, 1, 'ONE', '1 1 ONE', SYSDATE);
INSERT INTO child_tab VALUES (2, 3, 'TWO', '2 3 TWO', SYSDATE);
INSERT INTO child_tab VALUES (3, 3, 'THREE', '3 3 THREE', SYSDATE);
COMMIT;

EXEC DBMS_STATS.gather_table_stats(USER, 'PARENT_TAB');
EXEC DBMS_STATS.gather_table_stats(USER, 'CHILD_TAB');
We now expect the parent table to have 2 records in the 2007 partition and 1 in the 2008 partition, while the child table should have 1 row in the 2007 partition and 2 rows in the 2008 partition. The following query confirms our expectation.
COLUMN table_name FORMAT A25
COLUMN partition_name FORMAT A20
COLUMN high_value FORMAT A40

SELECT table_name, partition_name, high_value, num_rows
FROM   user_tab_partitions
ORDER BY table_name, partition_name;

TABLE_NAME                PARTITION_NAME       HIGH_VALUE                                 NUM_ROWS
------------------------- -------------------- ---------------------------------------- ----------
CHILD_TAB                 PART_2007                                                              1
CHILD_TAB                 PART_2008                                                              2
PARENT_TAB                PART_2007            TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-M          2
                                               M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

PARENT_TAB                PART_2008            TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-M          1
                                               M-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

4 rows selected.

SQL>
The following conditions and restrictions apply to reference partitioning:
  • The child table must specify a referential integrity constraint defined on the table being created. This constraint must be in ENABLE VALIDATE NOT DEFERRABLE state (the default) and refer to a primary or unique key on the parent table.
  • The foreign key columns referenced in the constraint must be NOT NULL.
  • The constraint cannot use the ON DELETE SET NULL clause.
  • The parent table referenced must be an existing partitioned table. All partitioning methods except interval partitioning are supported.
  • The foreign key cannot contain any virtual columns.
  • The referenced primary key or unique constraint on the parent table cannot contain any virtual columns.
  • Reference partitioning cannot be used for index-organized tables, external tables, or domain index storage tables.
  • A chain of reference partitioned tables can be created, but the constraint used cannot be self-referencing.
  • The ROW MOVEMENT setting for both tables must match.
  • Reference partitioning cannot be specified in a CREATE TABLE ... AS SELECT statement.
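Because the child table inherits the parent's partition names, partition-extended syntax works against partitions that were never declared on the child. A quick sketch against the tables created above:

```sql
-- PART_2008 was defined only on parent_tab, but child_tab inherits it,
-- so this returns just the child rows equi-partitioned with the 2008 parents.
SELECT id, parent_tab_id, code
FROM   child_tab PARTITION (part_2008);
```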

Virtual Column-Based Partitioning

Oracle 11g supports the concept of virtual columns on tables. These columns are not physically stored in the table, but derived from data in it, and they can be used in the partition key in all basic partitioning schemes. The example below creates a table that is list partitioned on a virtual column representing the first letter of the username column.
CREATE TABLE users (
  id           NUMBER,
  username     VARCHAR2(20),
  first_letter VARCHAR2(1)
    GENERATED ALWAYS AS
      (
        UPPER(SUBSTR(TRIM(username), 1, 1))
      ) VIRTUAL
)
PARTITION BY LIST (first_letter)
(
  PARTITION part_a_g VALUES ('A','B','C','D','E','F','G'),
  PARTITION part_h_n VALUES ('H','I','J','K','L','M','N'),
  PARTITION part_o_u VALUES ('O','P','Q','R','S','T','U'),
  PARTITION part_v_z VALUES ('V','W','X','Y','Z')
);
The following code inserts two rows into each partition defined in the table.
INSERT INTO users (id, username) VALUES (1, 'Andy Pandy');
INSERT INTO users (id, username) VALUES (1, 'Burty Basset');
INSERT INTO users (id, username) VALUES (1, 'Harry Hill');
INSERT INTO users (id, username) VALUES (1, 'Iggy Pop');
INSERT INTO users (id, username) VALUES (1, 'Oliver Hardy');
INSERT INTO users (id, username) VALUES (1, 'Peter Pervis');
INSERT INTO users (id, username) VALUES (1, 'Veruca Salt');
INSERT INTO users (id, username) VALUES (1, 'Wiley Cyote');
COMMIT;

EXEC DBMS_STATS.gather_table_stats(USER, 'USERS');
The following query shows the data was distributed as expected.
COLUMN table_name FORMAT A25
COLUMN partition_name FORMAT A20
COLUMN high_value FORMAT A40

SELECT table_name, partition_name, high_value, num_rows
FROM   user_tab_partitions
ORDER BY table_name, partition_name;

TABLE_NAME                PARTITION_NAME       HIGH_VALUE                                 NUM_ROWS
------------------------- -------------------- ---------------------------------------- ----------
USERS                     PART_A_G             'A', 'B', 'C', 'D', 'E', 'F', 'G'                 2
USERS                     PART_H_N             'H', 'I', 'J', 'K', 'L', 'M', 'N'                 2
USERS                     PART_O_U             'O', 'P', 'Q', 'R', 'S', 'T', 'U'                 2
USERS                     PART_V_Z             'V', 'W', 'X', 'Y', 'Z'                           2

4 rows selected.

SQL>
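A useful side effect is that queries filtering on the virtual column can be pruned to a single partition, just as with a real column. A sketch against the table above:

```sql
-- Only partition PART_H_N needs to be scanned for this predicate.
SELECT id, username
FROM   users
WHERE  first_letter = 'H';
```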

Single Partition Transportable for Oracle Data Pump

Oracle 11g allows single table partitions to be transported between databases, rather than requiring the whole table to be transported. To show this in action we need to create two tablespaces to hold the table partitions and give the test user a quota on these tablespaces.
CONN sys/password@db11g AS SYSDBA

CREATE TABLESPACE transport_test_ts_1
  DATAFILE '/u01/app/oracle/oradata/DB11G/tt_ts_1'
  SIZE 128K AUTOEXTEND ON NEXT 128K;

CREATE TABLESPACE transport_test_ts_2
  DATAFILE '/u01/app/oracle/oradata/DB11G/tt_ts_2'
  SIZE 128K AUTOEXTEND ON NEXT 128K;

ALTER USER test
  QUOTA UNLIMITED ON transport_test_ts_1
  QUOTA UNLIMITED ON transport_test_ts_2;

CONN test/test@db11g
Next, we create and populate a test partitioned table.
CREATE TABLE transport_test_tab (
  id           NUMBER NOT NULL,
  code         VARCHAR2(10) NOT NULL,
  description  VARCHAR2(50),
  created_date DATE,
  CONSTRAINT transport_test_pk PRIMARY KEY (id)
)
PARTITION BY RANGE (created_date)
(
   PARTITION part_2007 VALUES LESS THAN (TO_DATE('01-JAN-2008','DD-MON-YYYY'))
     TABLESPACE transport_test_ts_1,
   PARTITION part_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY'))
     TABLESPACE transport_test_ts_2
);

INSERT INTO transport_test_tab VALUES (1, 'ONE', '1 ONE', SYSDATE);
INSERT INTO transport_test_tab VALUES (2, 'TWO', '2 TWO', SYSDATE);
INSERT INTO transport_test_tab VALUES (3, 'THREE', '3 THREE', ADD_MONTHS(SYSDATE,12));
INSERT INTO transport_test_tab VALUES (4, 'FOUR', '4 FOUR', ADD_MONTHS(SYSDATE,12));
COMMIT;

EXEC DBMS_STATS.gather_table_stats(USER, 'TRANSPORT_TEST_TAB');
The following query shows that each partition is on a separate tablespace and contains some data.
COLUMN table_name FORMAT A20
COLUMN partition_name FORMAT A20
COLUMN tablespace_name FORMAT A20

SELECT table_name, partition_name, tablespace_name, num_rows
FROM   user_tab_partitions;

TABLE_NAME           PARTITION_NAME       TABLESPACE_NAME        NUM_ROWS
-------------------- -------------------- -------------------- ----------
TRANSPORT_TEST_TAB   PART_2007            TRANSPORT_TEST_TS_1           2
TRANSPORT_TEST_TAB   PART_2008            TRANSPORT_TEST_TS_2           2

2 rows selected.

SQL>
Any tablespaces we wish to transport must be made read only.
CONN sys/password@db11g AS SYSDBA

ALTER TABLESPACE transport_test_ts_1 READ ONLY;
When we perform the data pump export, we can specify the individual partition we wish to export using the following syntax.
tables=schema.table:partition transportable=always
Notice the ":partition" section of the TABLES parameter. The TRANSPORTABLE parameter indicates that we wish to capture just the metadata for the partition.

We can now run the Data Pump export using the following command.
expdp system/password tables=test.transport_test_tab:part_2007 transportable=always
  directory=data_pump_dir dumpfile=part_2007.dmp
The output is displayed below.
$ expdp system/password tables=test.transport_test_tab:part_2007 transportable=always
  directory=data_pump_dir dumpfile=part_2007.dmp

Export: Release 11.1.0.6.0 - Production on Friday, 19 October, 2007 16:40:45

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** tables=test.transport_test_tab:part_2007
  transportable=always directory=data_pump_dir dumpfile=part_2007.dmp
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u01/app/oracle/admin/DB11G/dpdump/part_2007.dmp
******************************************************************************
Datafiles required for transportable tablespace TRANSPORT_TEST_TS_1:
  /u01/app/oracle/oradata/DB11G/tt_ts_1
Datafiles required for transportable tablespace USERS:
  /u01/app/oracle/oradata/DB11G/users01.dbf
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 16:40:55

$
To simulate transporting the tablespace, we will drop the existing table and tablespaces. Notice that we don't actually remove the datafile associated with the first tablespace, as this is the datafile containing our transportable partition.
CONN sys/password@db11g AS SYSDBA

DROP TABLE test.transport_test_tab;

DROP TABLESPACE transport_test_ts_1 INCLUDING CONTENTS;
DROP TABLESPACE transport_test_ts_2 INCLUDING CONTENTS AND DATAFILES;
We can now import the dump file to plug in our transportable partition using the PARTITION_OPTIONS=DEPARTITION parameter, which converts all partitions into standalone table segments.
impdp system/password partition_options=departition dumpfile=part_2007.dmp
  transport_datafiles='/u01/app/oracle/oradata/DB11G/tt_ts_1'
The expected output is displayed below.
$ impdp system/password partition_options=departition dumpfile=part_2007.dmp
  transport_datafiles='/u01/app/oracle/oradata/DB11G/tt_ts_1'

Import: Release 11.1.0.6.0 - Production on Friday, 19 October, 2007 16:47:04

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** partition_options=departition
  dumpfile=part_2007.dmp transport_datafiles=/u01/app/oracle/oradata/DB11G/tt_ts_1
Processing object type TABLE_EXPORT/TABLE/PLUGTS_BLK
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
ORA-39083: Object type INDEX failed to create with error:
ORA-00942: table or view does not exist
Failing sql is:
CREATE UNIQUE INDEX "TEST"."TRANSPORT_TEST_PK" ON "TEST"."TRANSPORT_TEST_TAB" ("ID") PCTFREE 10
INITRANS 2 MAXTRANS 255  STORAGE(SEG_FILE 4 SEG_BLOCK 59 OBJNO_REUSE 70550 INITIAL 65536
NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "USERS" PARALLEL 1
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
ORA-39083: Object type CONSTRAINT failed to create with error:
ORA-00942: table or view does not exist
Failing sql is:
ALTER TABLE "TEST"."TRANSPORT_TEST_TAB" ADD CONSTRAINT "TRANSPORT_TEST_PK" PRIMARY KEY ("ID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255  STORAGE(SEG_FILE 4 SEG_BLOCK 59 OBJNO_REUSE 70550
INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS"  ENABLE

Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
ORA-39112: Dependent object type INDEX_STATISTICS skipped, base object type
INDEX:"TEST"."TRANSPORT_TEST_PK" creation failed
Processing object type TABLE_EXPORT/TABLE/END_PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" completed with 3 error(s) at 16:47:07

$
The table segment is named using a combination of the table and partition names, so dependent objects fail because they reference the wrong table name. We can see the new segment using the following query.
CONN test/test@db11g

EXEC DBMS_STATS.gather_schema_stats(USER);

COLUMN table_name FORMAT A30
COLUMN tablespace_name FORMAT A20

SELECT table_name, tablespace_name, partitioned, num_rows
FROM   user_tables;

TABLE_NAME                     TABLESPACE_NAME      PAR   NUM_ROWS
------------------------------ -------------------- --- ----------
TRANSPORT_TES_PART_2007        TRANSPORT_TEST_TS_1  NO           2

1 row selected.

SQL>

Partition Advisor

The SQL Access Advisor, originally introduced in Oracle 10g, has been updated to include advice on partitioning existing tables, materialized views and indexes. Partitioning existing tables can be quite a lengthy process, so don't expect this advice to provide a quick fix.

Enhanced Statistics Collection for Partitioned Objects

Oracle 11g includes improvements to statistics collection for partitioned objects so untouched partitions are not rescanned. This significantly increases the speed of statistics collection on large tables where some of the partitions contain static data. Where partition exchange load (PEL) is used to add data to a table, only the newly added partition must be scanned to update the global statistics.

Saturday, September 18, 2010

How to log in securely with Oracle SQL*Plus on UNIX and Linux platforms

Most of us sometimes start SQL*Plus with a password on UNIX and Linux platforms without realizing the security threat.
For example, an application user connects to SQL*Plus by passing the username and password on a Unix/Linux server:
$ sqlplus apps/apps@proddb
Here the sqlplus command parameters are available for viewing by all operating system users on the same host; as a result, a password entered on the command line can be exposed to other users, as below:
$ ps -ef | grep sqlplus
oracle 14490 2190 0 16:31:53 pts/5 0:00 sqlplus apps/apps@proddb
oracle 14493 14491 0 16:32:01 pts/5 0:00 grep sqlplus
So an intruder could learn the user id and password and connect to the database with those credentials.
The following is the secure way of connecting to SQL*Plus, where the password is not exposed on the command line:
$ sqlplus apps@proddb
Enter password: ****
Or, to avoid exposing the username and connect string as well:
$ sqlplus
Enter user-name: apps@proddb
Enter password: ****
Or
$ sqlplus /nolog
SQL> connect apps@proddb
Enter password: ****
Also, do not pass the password when invoking the Export/Import utilities (the exp/imp command line), or any other command-line utility where you think the password could be exposed to others.
On Microsoft Windows, the command recall feature (the Up arrow) remembers user input across command invocations.
For example, if you use the CONNECT APPS/password notation in SQL*Plus, exit, and then press the Up arrow to repeat the CONNECT command, the command recall feature discloses the connect string, including the password. So it is advised *NOT* to pass the password when connecting to SQL*Plus on Windows as well.
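The exposure is easy to demonstrate without Oracle at all: anything passed as a command-line argument is readable by every user via ps, while data supplied on stdin is not. A minimal shell sketch (secret123 and the dummy commands are illustrative; no real sqlplus is invoked):

```shell
#!/bin/sh
# Start a long-running process with a "password" in its argument list.
sh -c 'sleep 3' dummy secret123 &
pid=$!
# Any user on the host can now read the secret from the process list:
ps -o args= -p "$pid" | grep -o secret123
kill "$pid" 2>/dev/null
# A value read from stdin, by contrast, never appears in argv:
printf 'secret123\n' | sh -c 'read pw; echo "password received via stdin"'
```

This is exactly why `sqlplus apps@proddb` with the password typed at the prompt is safe while `sqlplus apps/apps@proddb` is not.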

RMAN BACKUP FAILURE AFTER CLONING:

rman target /
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name ORQ4 are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BZIP2'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u2490/app/oracle/product/11.1.0/db_1/dbs/snapcf_orq41.f'; # default
Configure as:
RMAN configuration parameters for database with db_unique_name ORQ4 are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO '%F';
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
CONFIGURE DEVICE TYPE DISK PARALLELISM 5 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE 'SBT_TAPE' TO 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE 'SBT_TAPE' TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'BLKSIZE=262144,ENV=(CvClientName=qmnalx1109,CvInstanceName=Instance001,CvOraSID=orq41)';
CONFIGURE MAXSETSIZE TO 30G;
CONFIGURE ENCRYPTION FOR DATABASE OFF;
CONFIGURE ENCRYPTION ALGORITHM 'AES128';
CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u2490/app/oracle/product/11.1.0/db_1/dbs/snapcf_orq41.f'; # default
export TNS_ADMIN=/home/oracle/sqlnet
rman target / catalog rmanapps/rmanapps@RCVCAT
RMAN> list incarnation of database orq4;
RMAN> register database;
RMAN> exit
Check for the scheduled backup using crontab -l:
#ORQ4 BACKUP
00 23 * * 2 /home/oracle/scripts_v2/backup_db_cvault.sh -d orq41 -c rmanapps -l 0 -f 0
00 23 * * 0,1,3,4,5,6 /home/oracle/scripts_v2/backup_db_cvault.sh -d orq41 -c rmanapps -l 1 -f 1
16 10 09 07 * /home/oracle/scripts_v2/backup_db_cvault.sh -d orq41 -c rmanapps -l 1 -f 1
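For reference, the first five fields of each crontab entry are minute, hour, day-of-month, month, and day-of-week. A small illustrative shell snippet that decodes the first entry above (the decoding itself is a sketch, not part of the backup script):

```shell
#!/bin/sh
# Decode the five schedule fields of the first crontab entry above.
entry='00 23 * * 2 /home/oracle/scripts_v2/backup_db_cvault.sh -d orq41 -c rmanapps -l 0 -f 0'
set -f                 # disable globbing so '*' stays a literal field value
set -- $entry
echo "minute=$1 hour=$2 day_of_month=$3 month=$4 day_of_week=$5"
# prints: minute=00 hour=23 day_of_month=* month=* day_of_week=2
```

Day-of-week 2 is Tuesday, so the level-0 (full) backup runs Tuesdays at 23:00, the 0,1,3,4,5,6 entry runs the level-1 incremental on the remaining days, and the third line (16 10 09 07 *) is a one-off run on 9 July at 10:16.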
Invoke the backup:
sh /home/oracle/scripts_v2/backup_db_cvault.sh -d orq41 -c rmanapps -l 0 -f 0

Create Database Link

CREATE DATABASE LINK prd
CONNECT TO userid
IDENTIFIED BY password
USING 'tns_name';

DELETE FROM table;
INSERT INTO table
SELECT * FROM table@prd;
COMMIT;

DROP DATABASE LINK prd;

HOW TO SET A PROFILE FOR A USER IN AN INSTANCE

System Administrator -> Profile -> System ->
Check the User box and insert the user ID on the right.
Find the profile you want to assign to the user -> click Find.

Problem of Inactive, No Manager in one of the request submitted:

This problem usually indicates an inactive ICM.
When the ICM is not up, you will get the error Inactive/No Manager.
Just check the manager and bring it up:
ps -ef | grep LIBR
If this shows the manager is down, start the CM using the adcmctl.sh script.
If the manager is up but requests still show Inactive/No Manager, check the log file and act accordingly.
You can also try cmclean.sql (not supported by Oracle):
1. Bring down the CM.
2. Run cmclean.sql.
3. Bring up the CM.

Check for Locked Objects

set serveroutput on size 10000
declare
cursor c1 is
select * from v$lock where request != 0
order by id1, id2;
wid1            number := -999999;
wid2            number := -999999;
wholder_detail  varchar2(200);
v_err_msg       varchar2(80);
wsid            number(5);
wstep           number(2);
wtype           varchar2(10);
wobject_name    varchar2(180);
wobject_name1   varchar2(80);
wlock_type      varchar2(50);
begin
for c1_rec in c1 loop
if c1_rec.id1 = wid1 and c1_rec.id2 = wid2 then
null;
else
wstep := 10;
-- find the session that holds the lock being requested
select sid, type into wsid, wtype
from v$lock
where id1 = c1_rec.id1
and id2 = c1_rec.id2
and request = 0
and lmode != 4;
dbms_output.put_line('  ');
wstep := 20;
select 'Holder DBU: '||s.username||' OSU: '||s.osuser||
' DBP: '||p.spid||' APP: '||s.process||
' SID: '||s.sid||' Status: '||s.status||
' ('||floor(last_call_et/3600)||':'||
floor(mod(last_call_et,3600)/60)||':'||
mod(mod(last_call_et,3600),60)||
') Module: '||module||
' AppSrvr: '||
substr(replace(machine,'GEIPS-AMER\',null),1,15)
into wholder_detail
from v$session s, v$process p
where s.sid = wsid
and s.paddr = p.addr;
dbms_output.put_line(wholder_detail);
begin
select decode(wtype,'TX','Transaction',
'DL','DDL Lock',
'MR','Media Recovery',
'RT','Redo Thread',
'UN','User Name',
'TM','DML',
'UL','PL/SQL User Lock',
'DX','Distributed Xaction',
'CF','Control File',
'IS','Instance State',
'FS','File Set',
'IR','Instance Recovery',
'ST','Disk Space Transaction',
'TS','Temp Segment',
'IV','Library Cache Invalidation',
'LS','Log Start or Switch',
'RW','Row Wait',
'SQ','Sequence Number',
'TE','Extend Table',
'TT','Temp Table',
'Un-Known Type of Lock')
into wlock_type
from dual;
declare
cursor c3 is
select object_id from v$locked_object
where session_id = wsid;
begin
wobject_name := '';
for c3_rec in c3 loop
select object_type||': '||owner||'.'||object_name
into wobject_name1
from dba_objects
where object_id = c3_rec.object_id;
wobject_name := wobject_name||' '||wobject_name1;
end loop;
exception
when others then
wobject_name := wobject_name||' No Object Found';
end;
dbms_output.put_line('Lock Held: '||wlock_type||' for Object: '||wobject_name);
exception
when no_data_found then
dbms_output.put_line('Lock Held: '||wlock_type||' No object found in DBA Objects');
end;
end if;
wstep := 30;
select '....   Requestor DBU: '||s.username||' OSU: '||s.osuser||
' DBP: '||p.spid||' APP: '||s.process||
' SID: '||s.sid||' Status: '||s.status||
' ('||floor(last_call_et/3600)||':'||
floor(mod(last_call_et,3600)/60)||':'||
mod(mod(last_call_et,3600),60)||
') Module: '||module||
' AppSrvr: '||
substr(replace(machine,'GEIPS-AMER\',null),1,15)
into wholder_detail
from v$session s, v$process p
where s.sid = c1_rec.sid
and s.paddr = p.addr;
dbms_output.put_line(wholder_detail);
wid1 := c1_rec.id1;
wid2 := c1_rec.id2;
end loop;
if wid1 = -999999 then
wstep := 40;
dbms_output.put_line('No one is requesting locks held by others');
end if;
exception
when others then
v_err_msg := sqlerrm||'  '||sqlcode||' step='||to_char(wstep);
dbms_output.put_line(v_err_msg);
end;
/

How to trace a concurrent program

1. Go to System Administrator >> Concurrent >> Program >> Define.
2. Query back the concurrent program that you want to trace.
3. Check 'Enable Trace' and save.
4. Run the concurrent program as a concurrent request.
5. A SQL trace file will be saved in the ../udump directory on the database server.



(OR)
How do you enable/disable trace for a concurrent program?
1. Connect to Oracle Applications.
2. Navigate to System Administrator -> Concurrent -> Program -> Define.
3. Query the concurrent program on which you want to enable trace.
4. Check the Enable Trace check box at the bottom of the screen and save.
5. Ask the developer to submit the request, and wait until it completes normally.
6. Get the SPID: select oracle_process_id from apps.fnd_concurrent_requests where request_id=456624;
7. You will get an SPID like 12340.
8. Go to udump and run ls -ltr *12340*.
9. You will get the trace file.
Query to Get the Udump Location:

SQL> SELECT name, value FROM v$parameter WHERE name LIKE 'user_dump_dest';
NAME
——————————————————————————–
VALUE
——————————————————————————–
user_dump_dest
/u2190/app/oracle/product/11.1.0/db_1/admin/orq11_qmnalx1109/diag/rdbms/orq1/orq
11/trace

Profile Options for Oracle Apps DBA

Here is a list of the profile options an Apps DBA uses frequently. It is not necessary for an Apps DBA to know all profile options; it depends on your implementation. I am going to add more about profile options.
Applications Help Web Agent
Applications Servlet Agent
Applications Web Agent
Concurrent: Active Request Limit
Concurrent: Hold Requests
Concurrent: Multiple Time Zones
Concurrent: Report Access Level
Concurrent: Report Copies
Concurrent: Request Priority
Database Instance
Enable Security Group
FND: Debug Log Filename
FND: Debug Log Level
Forms Runtime Parameters
Gateway User ID
ICX: Discoverer Launcher
ICX: Forms Launcher
ICX: Report Launcher
ICX: Limit Connect
ICX: Limit Time
ICX: Session Timeout
MO: Operating Unit
Node Trust Level
RRA: Delete Temporary Files
RRA: Enabled
RRA: Service Prefix
RRA: Maximum Transfer Size
Self Service Personal Home Page Mode
Sign-On: Audit Level
Signon Password Failure Limit
Signon Password Hard to Guess
Signon Password Length
Signon Password No Reuse
Site Name
Socket Listener Port
TCF: Host
TCF: Port
TWO_TASK
Viewer: Text

Compile INVALID Objects

How do you find invalid objects?
select count(*) from dba_objects where status='INVALID';
select owner, object_type, count(*) from dba_objects where status='INVALID' group by owner, object_type;
Compile invalid objects:
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba"
SQL> @utlrp.sql
This script compiles all the invalid objects.

Assign the Responsibility to the User but it is not reflecting

Log in as sysadmin.
Navigate to Workflow -> Site Map -> Configuration -> Service Components.
On the screen that appears, activate the Workflow Java Deferred Listener.
Next, logged in as sysadmin, run these two requests:
Workflow Directory Services User/Role Validation
Synchronize WF LOCAL Tables
These two programs must be run.

Monitoring Pending Requests in the Concurrent Managers


You may find that requests are stacking up in the concurrent managers with a status of Pending. This can be caused by any of these conditions:
1. The concurrent managers were brought down while a request was running.
2. The database was shut down before shutting down the concurrent managers.
3. There is a shortage of RAM or CPU resources.
When you get a backlog of pending requests, first allocate more processes to the manager that is having the problem so that most of the requests can process; then make a list of the requests that will not complete so they can be resubmitted, and cancel them.
To allocate more processes to a manager, log in as a user with the System Administrator responsibility. Navigate to Concurrent -> Manager -> Define. Increase the number in the Processes column. Also, you may not need all the concurrent managers that Oracle supplies with an Oracle Applications install, so you can save resources by identifying the unneeded managers and disabling them.
However, you can still have problems. If a request remains in a phase of RUNNING and a status of TERMINATING after allocating more processes to the manager, then shut down the concurrent managers, kill any processes from the operating system that won't terminate, and execute the following SQL statement as the APPLSYS user to reset the managers in the FND_CONCURRENT_REQUESTS table:
UPDATE fnd_concurrent_requests
SET status_code = 'X', phase_code = 'C'
WHERE status_code = 'T';

Problems & Solutions on Concurrent Managers

Problem: I hit the Restart button to start the Standard Manager, but it still did not start. Solution: The ICM will start the Standard Manager in the next process monitor session or the next time the ICM starts. Use Activate to start a manager immediately. When a manager is deactivated manually, the ICM will not restart it; you will need to set it to Restart, or activate it manually.
Problem: When you submit a request such as Active Users or Active Responsibilities, the request gets submitted, but when you view it, you find it is Inactive/No Manager. Within 12 to 15 seconds you refresh, and it gets completed.
Solution: When a request goes Inactive/No Manager and is then processed a short time later, the solution is to either increase the cache size for your Standard Manager or increase the actual number of Standard Manager processes.
HELPFUL DIAGNOSTIC SCRIPTS
The following SQL scripts located under $FND_TOP/sql are useful when diagnosing concurrent manager problems:
afimchk.sql Tells the status of the ICM and PMON method
afcmstat.sql Lists active manager processes
afrqrun.sql Lists all the running, waiting and terminating requests
afrqwait.sql Lists requests that are constrained and waiting for the ICM to release them
afrqscm.sql Prints log file name of managers that can run a given request. It can be used to check for possible errors when a request stays in pending status. It requires a request id value.
afcmcreq.sql Prints the log file name of the manager that processed the request
afrqstat.sql Summary of completed concurrent requests grouped by completion status and execution type. It requires a parameter: the number of days prior to today on which to report.
afimlock.sql Lists locks that the ICM is waiting to get
afcmrrq.sql Lists managers that currently are running a request

How to retrieve Summary of concurrent Jobs/status/Count in Last 1 hour

Use the following query:

select fcpt.user_concurrent_program_name,
       decode(fcr.phase_code,
              'C', 'Completed',
              'I', 'Inactive',
              'P', 'Pending',
              'R', 'Running',
              fcr.phase_code) phase,
       decode(fcr.status_code,
              'A', 'Waiting',
              'B', 'Resuming',
              'C', 'Normal',
              'D', 'Cancelled',
              'E', 'Errored',
              'F', 'Scheduled',
              'G', 'Warning',
              'H', 'On Hold',
              'I', 'Normal',
              'M', 'No Manager',
              'Q', 'Standby',
              'R', 'Normal',
              'S', 'Suspended',
              'T', 'Terminating',
              'U', 'Disabled',
              'W', 'Paused',
              'X', 'Terminated',
              'Z', 'Waiting',
              fcr.status_code) status,
       count(*)
from   apps.fnd_concurrent_programs_tl fcpt,
       apps.fnd_concurrent_requests fcr
where  fcpt.concurrent_program_id = fcr.concurrent_program_id
and    fcpt.language = userenv('LANG')
and    fcr.actual_start_date > sysdate - 1/24
group by fcpt.user_concurrent_program_name, fcr.phase_code, fcr.status_code;

Concurrent Manager: Output Post-processor issue

A concurrent program whose output is an XML report fails in Warning, and the log shows the message below:
Beginning post-processing of request 5292785 on node AP6105RT at 02-AUG-2006 04:47:48. Post-processing of request 5292785 failed at 02-AUG-2006 04:49:48 with the error message:

The Output Post-processor is running but has not picked up this request. No further attempts will be made to post-process this request, and the request will be marked with Warning status.
Setting the profile option Concurrent: OPP Response Timeout to a higher value may be necessary. For example, if the response timeout is set to 60, make it at least 180. Bounce the concurrent manager once this is done. This should resolve the issue.

If the issue still persists, check the OPP log file using the navigation below:

System Administrator -> Concurrent : Manager -> Administrator

On this screen, click on Output Post Processor, then click the Processes button -> Manager Log button.

A new web page will open showing the manager log. Scroll to the bottom; you might see the following error:

[10/7/08 10:15:56 PM] [1900472:RT5709298] Completed post-processing actions
for request 5709298.
[10/7/08 10:20:07 PM] [OPPServiceThread1] Post-processing request 5709301.
[10/7/08 10:20:07 PM] [1900472:RT5709301] Executing post-processing actions
for request 5709301.
[10/7/08 10:20:07 PM] [1900472:RT5709301] Starting XML Publisher
post-processing action.
[10/7/08 10:20:07 PM] [1900472:RT5709301]
@ Template code: FAXSRPIM
Template app: OFA
Language: EN
Territory: US
Output type: PDF
[10/7/08 10:20:07 PM] [UNEXPECTED] [1900472:RT5709301]
java.io.FileNotFoundException:
/appslog/fin_top/utl/fintest/out/xdosCfJu5EHsa100708_1020071837.fo (No such
file or directory)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
@ at oracle.apps.xdo.common.tmp.TmpFile.createTmpFileJDK118(TmpFile.java:146)
@ at oracle.apps.xdo.common.tmp.TmpFile.createTmpFile(TmpFile.java:113)
at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:987)
at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:212)
at oracle.apps.xdo.template.FOProcessor.createFO(FOProcessor.java:1657)
at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:967)

So the path /appslog/fin_top/utl/fintest/out/ is not valid. As per note ID 428855.1, change the path to a valid location:

Navigate as the "XML Publisher Administrator" responsibility:
Select Administration on the Personal Home Page.
Under the Configuration tab, select Properties, then General.
The Temporary directory will be displayed there.

Change the path to something that exists physically.

After that, bounce the Output Post Processor. This should resolve the issue.

Concurrent Manager

The concurrent managers serve important administrative functions: they ensure that the applications are not overwhelmed with requests, and they manage batch processing and report generation.
The Oracle E-Business Suite has three important concurrent managers:
Internal Concurrent Manager: The ICM (Internal Concurrent Manager) controls all of the other concurrent managers.
Standard Manager: The Standard Manager accepts any and all requests. It has no predefined specialization rules and is active all the time. It is not recommended to add specialization rules to the standard manager as it can cause problems.
Conflict Resolution Manager: The Conflict Resolution Manager resolves conflicts, such as request incompatibilities.
How do you check whether the concurrent managers are running?
Log in to Applications with the System Administrator responsibility and navigate to the Concurrent > Manager > Administer page. Under the Processes column, if the Target and Actual columns show the same number (above 0), the managers are up and running.
How do you start/stop the concurrent managers?
If the concurrent managers are down, use this syntax to start them from the command line:
adcmctl.sh start apps/<apps password>
To stop the concurrent managers from the command line:
adcmctl.sh stop apps/<apps password>
The adcmctl.sh script is located in $COMMON_TOP/admin/scripts/$CONTEXT_NAME/.

Where are the concurrent manager log files located?
The concurrent manager log files can be located in one of the following places:
1. If the environment variable $APPLCSF is set, the default location is $APPLCSF/$APPLLOG.
2. If the environment variable $APPLCSF is not set, the logs go to $FND_TOP/$APPLLOG.
NOTE: The default name of the concurrent manager log file is std.mgr. You can change this by setting the parameter logfile=<new name>.
Manager log files:
Standard Manager log: w<nnn>.mgr
Transaction Manager log: t<nnn>.mgr
Conflict Resolution Manager log: c<nnn>.mgr
where <nnn> is the concurrent process id of the manager
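As a toy illustration of that naming scheme (using dummy files in a temp directory rather than a real $APPLCSF/$APPLLOG), picking out the Standard Manager log for a given concurrent process id looks like this:

```shell
#!/bin/sh
# Simulate a concurrent manager log directory with dummy file names.
logdir=$(mktemp -d)
touch "$logdir/std.mgr" "$logdir/w12345.mgr" "$logdir/t12346.mgr" "$logdir/c12347.mgr"
# Standard Manager logs follow the w<nnn>.mgr pattern:
found=$(cd "$logdir" && ls w*.mgr)
echo "$found"   # prints: w12345.mgr
rm -rf "$logdir"
```

Against a live instance you would run the same glob in $APPLCSF/$APPLLOG (or $FND_TOP/$APPLLOG).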

Status of a particular Concurrent Request

set verify off
col sid format 9999
col spid format a9
col inst_id format 99
col serial# format 999999
col process format a10
set lines 180 pages 100
col username format a10
col program format a20
col machine format a18
col module format a25
col Action format a14
col logon_time format a21

PROMPT;
PROMPT;

ACCEPT inp_prog PROMPT 'Enter the desired Concurrent Request : ';

select s.inst_id,request_id,s.username,s.sid,s.serial#,s.process,
p.spid,
to_char(logon_time,'DD-MON-YYYY HH24:MI:SS') logon_time ,
s.status,
-- substr(s.program,1,10) "Program",
substr(s.machine,1,12) "Machine",
substr(s.module,1,12) "Module",
substr(s.action,1,14) "Action"
from gv$session s , gv$process p , apps.fnd_concurrent_requests fnd
where s.process = fnd.os_process_id
and s.paddr=p.addr
and s.inst_id=p.inst_id
and request_id like decode('&&inp_prog','',request_id,'&&inp_prog')
and phase_code='R'
order by request_id,logon_time
/

Service Management

R12 uses the 10.1.3 Fusion Middleware AS stack, so services in R12 are managed by OPMN (Oracle Process Manager and Notification Server).
OPMN consists of two main components: Oracle Process Manager and Oracle Notification Server.
Oracle Process Manager is responsible for:
1) starting
2) stopping
3) restarting
4) monitoring the services it manages (this includes death detection and automatic restart of the process)
Oracle Notification Server is the transport mechanism for failure, recovery, startup, and other related notifications between components in the AS.
A single configuration file (opmn.xml) is used by OPMN to manage the services. The config file location is $ORA_CONFIG_HOME/10.1.3/opmn/conf/opmn.xml.
The services managed by OPMN are (grep process-type opmn.xml):
1) HTTP_Server
2) oacore
3) forms
4) oafm
Getting the status of OPMN-managed processes:
bash-2.05$ ./adopmnctl.sh status
You are running adopmnctl.sh version 120.4
Checking status of OPMN managed processes…
Processes in Instance: SID_host.host.domain.com
---------------------+--------------------+-----------+---------
ias-component | process-type | pid | status
---------------------+--------------------+-----------+---------
OC4J | oafm | 13500 | Alive
OC4J | forms | 28358 | Alive
OC4J | oacore | 15899 | Alive
HTTP_Server | HTTP_Server | 23530 | Alive
ASG | ASG | N/A | Down
adopmnctl.sh: exiting with status 0
adopmnctl.sh: check the logfile /ebiz/oracle/SID/inst/apps/SID_host/logs/appl/admin/log/adopmnctl.txt for more information
Starting OPMN-managed services
Starting the complete OC4J container:
bash-2.05$ ./adopmnctl.sh startproc ias-component=OC4J
Starting individual process-type(s):
bash-2.05$ ./adopmnctl.sh startproc ias-component=OC4J process-type=oafm
bash-2.05$ ./adopmnctl.sh startproc ias-component=OC4J process-type=forms
bash-2.05$ ./adopmnctl.sh startproc ias-component=OC4J process-type=oacore
Starting the HTTP Server (Apache):
bash-2.05$ ./adopmnctl.sh startproc ias-component=HTTP_Server
Stopping OPMN-managed services
Stopping the complete OC4J container:
bash-2.05$ ./adopmnctl.sh stopproc ias-component=OC4J
Stopping individual process-type(s):
bash-2.05$ ./adopmnctl.sh stopproc ias-component=OC4J process-type=oafm
bash-2.05$ ./adopmnctl.sh stopproc ias-component=OC4J process-type=forms
bash-2.05$ ./adopmnctl.sh stopproc ias-component=OC4J process-type=oacore
Stopping the HTTP Server (Apache):
bash-2.05$ ./adopmnctl.sh stopproc ias-component=HTTP_Server
Benefits of OPMN
There are many benefits to OPMN. To give an example, consider the scenario where one of your OC4J processes has died: OPMN detects the death of any process it manages and brings it back up almost immediately.
To illustrate, here is an example:
bash-2.05$ ./adopmnctl.sh status
You are running adopmnctl.sh version 120.4
Checking status of OPMN managed processes…
Processes in Instance:  SID_host.host.domain.com
---------------------+--------------------+-----------+---------
ias-component | process-type | pid | status
---------------------+--------------------+-----------+---------
OC4J | oafm | 13500 | Alive
OC4J | forms | 15898 | Alive
OC4J | oacore | 15899 | Alive
HTTP_Server | HTTP_Server | 23530 | Alive
ASG | ASG | N/A | Down
adopmnctl.sh: exiting with status 0
adopmnctl.sh: check the logfile /ebiz/oracle/SID/inst/apps/SID_host/logs/appl/admin/log/adopmnctl.txt for more information …
All the OPMN-managed processes are alive. Let's see how OPMN reacts to the death of the forms OC4J process. I have killed the process with PID 15898:
bash-2.05$ kill -9 15898
bash-2.05$ ./adopmnctl.sh status
You are running adopmnctl.sh version 120.4
Checking status of OPMN managed processes…
Processes in Instance:  SID_host.host.domain.com
---------------------+--------------------+-----------+---------
ias-component | process-type | pid | status
---------------------+--------------------+-----------+---------
OC4J | oafm | 13500 | Alive
OC4J | forms | 28358 | Init
OC4J | oacore | 15899 | Alive
HTTP_Server | HTTP_Server | 23530 | Alive
ASG | ASG | N/A | Down
adopmnctl.sh: exiting with status 0
adopmnctl.sh: check the logfile /ebiz/oracle/SID/inst/apps/SID_host/logs/appl/admin/log/adopmnctl.txt for more information …
OPMN detects the death almost immediately and restarts the process it manages (for a second or two, users may face failures because the process is unavailable to serve requests).
$LOG_HOME/ora/10.1.3/opmn/opmn.log is appended with the death-detection and restart information:
08/05/02 12:34:04 [pm-process] Process Crashed: OC4J~forms~default_group~1 (691565399:15898) - Restarting
08/05/02 12:34:04 [pm-process] Starting Process: OC4J~forms~default_group~1 (691565400:0)
08/05/02 12:34:24 [pm-process] Process Alive: OC4J~forms~default_group~1 (691565400:28358)
This is one of the great advantages of R12: even if the oacore JVM crashes with an out-of-memory error, OPMN restarts it as soon as it detects that the OC4J process has died.
What algorithm does OPMN use to detect death?
1) OPMN checks the OS process every 2 seconds.
2) Forward ping: OPMN pings the managed process every 20 seconds and expects a response.
3) Reverse ping: every 20 seconds the managed process sends OPMN a ping notification.
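The death-detection step can be sketched in a few lines of shell. This is a toy model only (real OPMN polls every 2 seconds and layers the forward/reverse pings on top; the commands and intervals here are illustrative):

```shell
#!/bin/sh
# Toy OPMN-style watchdog: detect a dead managed process and restart it.
start_child() { sleep 60 & child=$!; }
start_child
kill "$child"               # simulate a crash, like `kill -9` on an OC4J pid
wait "$child" 2>/dev/null   # reap the dead child so kill -0 reports it gone
restarted=0
if ! kill -0 "$child" 2>/dev/null; then
  echo "managed process died - restarting"
  start_child
  restarted=1
fi
kill "$child" 2>/dev/null   # clean up the replacement child
```

`kill -0` sends no signal; it only tests whether the process still exists, which is the same primitive a process-manager poll relies on.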
Script Details
All the admin scripts run opmnctl in the background from the $ORACLE_CONFIG_HOME/opmn/bin directory. The required environment is set in the $INST_TOP/apps/VIS_appl_node1/ora/10.1.3/VIS_appl_node1.env file, which contains all the environment variables opmnctl needs, so this file must be sourced before running opmnctl for E-Business Suite.
bash-2.05# pwd
/ebiz/oracle/SID/inst/apps/SID_host/ora/10.1.3
bash-2.05# ls
Apache cfgtoollogs deconfig javacache opmn
VIS_appl_node1.env config j2ee network
bash-2.05# opmn/bin/opmnctl status
Unable to connect to opmn.
Opmn may not be up.
bash-2.05# source VIS_appl_node1.env
bash-2.05# opmn/bin/opmnctl status
Processes in Instance:  SID_host.host.domain.com
---------------------+--------------------+---------+---------
ias-component        | process-type       | pid     | status
---------------------+--------------------+---------+---------
OC4J | oafm | N/A | Down
OC4J | forms | 8071 | Alive
OC4J | oacore | N/A | Down
HTTP_Server | HTTP_Server | 16572 | Alive
ASG | ASG | N/A | Down

Function not available for this responsibility

Error Faced:
Function is not available for this responsibility. Contact your System Administrator or change the responsibility.
Reason:
When a custom module is defined and its forms are run from the custom top, the forms run in socket mode but not in servlet mode. This is because in servlet mode the environment is read from the file
$ORA_CONFIG_HOME/10.1.2/forms/server/default.env
The location of the FMX files is determined by the value set for the custom top. If this location is not found, the error shown above is raised.
To get the custom top defined in default.env, follow these steps:

Solution:
1. Submit the "Compile Security" concurrent program with the parameter Everything set to Yes, to verify that all the menu functions are compiled properly.
2. Create your own AutoConfig template file.
2.1 Define a product top.
Use the Oracle Applications Manager Context Editor to add your custom product top to the context file.
This adds the custom top to the .xml file, so it is picked up when default.env is created by AutoConfig.
Alternatively, if you are well acquainted with AutoConfig and CONTEXT_NAME.xml, add the custom top manually to CONTEXT_NAME.xml using any editor and save.
3. Run AutoConfig.
4. Restart the application services.
5. Retest.
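You can sanity-check step 2 by confirming that the custom top entry actually landed in the regenerated default.env. A minimal sketch, assuming a hypothetical custom top named XXCUST_TOP; a mock file stands in for $ORA_CONFIG_HOME/10.1.2/forms/server/default.env so the commands are safe to try:

```shell
# Verify the custom top appears in default.env after AutoConfig has run.
# XXCUST_TOP and all paths here are hypothetical examples.
DEFAULT_ENV=$(mktemp)  # stand-in for $ORA_CONFIG_HOME/10.1.2/forms/server/default.env
cat > "$DEFAULT_ENV" <<'EOF'
APPL_TOP=/ebiz/oracle/SID/apps/apps_st/appl
XXCUST_TOP=/ebiz/oracle/SID/apps/apps_st/appl/xxcust/12.0.0
EOF

if grep -q '^XXCUST_TOP=' "$DEFAULT_ENV"; then
  echo "custom top present in default.env"
else
  echo "custom top missing - rerun AutoConfig" >&2
fi
rm -f "$DEFAULT_ENV"
```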

Connect Toad with Oracle Database

http://linuxappsdba.blogspot.com/2009/06/connect-toad-with-oracle-database.html

INTERMITTENT ORA-06502 DURING PEAK LOADING

In 10.2.0.3, a non-reproducible ORA-06502 arises from time to time when production is under peak load.
ALTER SYSTEM FLUSH SHARED_POOL alleviates the problem for a while.

Error:
ORA-06502:
numeric or value error: associative array shape is not consistent with
session parameters
(PLS-01910)
numeric or value error: character to number conversion error (PLS-01900)
Solution:
Apply patch 5890966 using the opatch utility to resolve the error.

Upgrade R12 Database server

How to upgrade R12 Database server from 10.2.0 to 11.1.0 (10g to 11g) manually ?

We cannot upgrade the existing Oracle home in place, since 11g is not a patchset. We have to install the 11g Oracle home as a separate ORACLE_HOME in parallel with the 10g Oracle home, and then upgrade the existing database into it.
But before installing, check the software versions:
a) The Oracle database version: choose the appropriate upgrade path depending on the present version.
b) The E-Business Suite version: the minimum is 12.0.4. Apply the 11g interoperability patch on the apps node using adpatch.
c) AutoConfig must be upgraded to the latest version by applying patch 7207440.
I did not need to apply it, as my instance is on 12.0.6 and patch 6728000 (the 12.0.6 upgrade patch) supersedes this AutoConfig patch.
Now, the steps for the database installation:
1. Apply patch 6400501 to the Apps 10.1.0.5 Oracle home.
2. Deregister the current database if you want to change the database SID, host, or port (and update AutoConfig for the database tier before doing so):
$ perl $ORACLE_HOME/appsutil/bin/adgentns.pl appspass=apps \
contextfile=$CONTEXT_FILE -removeserver
3. Update the applications context file, setting the following parameters to the new database values:
s_dbhost
s_dbdomain
s_db_serv_sid
s_dbport
s_apps_jdbc_connect_descriptor
Do not run AutoConfig on the apps tier yet; otherwise the apps tier will not be able to connect to the database at all.
Run AutoConfig only after the complete upgrade process is finished.
Database Installation
The 11.1.0 Oracle home must be installed on the database server node in a different directory from the current Oracle home.
Log in to the database server node as the owner of the Oracle RDBMS file system and database instance. Ensure that environment settings, such as ORACLE_HOME, point to the new Oracle home you are about to create, and not to any existing Oracle home on the database server node.
Choose to install the software only (without any default database) in the new location.
After the installation, run utlu111i.sql (located in the 11g ORACLE_HOME/rdbms/admin directory) against the source 10g database and check the output:
SQL> @/11g_oracle_home/rdbms/admin/utlu111i.sql
It displays warnings and recommends steps to resolve the issues.
Time Zone Issue
select * from v$timezone_file;
FILENAME       VERSION
------------ ----------
timezlrg.dat          3
If the time zone file version is less than 4, apply time zone patch 5632264.
This can be done using opatch: unzip the patch and run opatch from the unzipped patch directory.
Alternatively, manually copy the .dat files under 5632264/files/oracore/zoneinfo into
$ORACLE_HOME/oracore/zoneinfo
Bounce the database and check the time zone version again.
Do not forget to back up the old zoneinfo directory before doing this.
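The manual copy method can be sketched as below. Temporary directories stand in for the real locations (the patch's files/oracore/zoneinfo directory and $ORACLE_HOME/oracore/zoneinfo), so the commands are safe to try; note the backup is taken before anything is overwritten:

```shell
# Sketch of the manual timezone-file copy for patch 5632264.
# Temporary directories stand in for the real patch and Oracle home paths.
PATCH_ZONEINFO=$(mktemp -d)   # stand-in for 5632264/files/oracore/zoneinfo
ORA_ZONEINFO=$(mktemp -d)     # stand-in for $ORACLE_HOME/oracore/zoneinfo
touch "$PATCH_ZONEINFO/timezlrg.dat" "$PATCH_ZONEINFO/timezone.dat"
touch "$ORA_ZONEINFO/timezlrg.dat"

# 1. Back up the existing zoneinfo directory first
cp -r "$ORA_ZONEINFO" "$ORA_ZONEINFO.bak"

# 2. Copy the patch's .dat files over the old ones
cp "$PATCH_ZONEINFO"/*.dat "$ORA_ZONEINFO"/

ls "$ORA_ZONEINFO"            # both .dat files are now in place
rm -rf "$PATCH_ZONEINFO" "$ORA_ZONEINFO" "$ORA_ZONEINFO.bak"
```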
Re-run utlu111i.sql after patching the database to record
the new timezone file version.
This time it reports something like "Database contains stale optimizer statistics", so gather statistics:
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS');
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYSMAN');
and do the same for whichever other schemas it recommends.
Run the pre-upgrade utility once again to make sure you don't get any warnings.
Copy the initialization file (pfile) from the source (10g) to the target (11g) Oracle home, then modify the initialization parameters:
1.
background_dump_dest, user_dump_dest, and core_dump_dest are replaced by diagnostic_dest.
Comment out the three deprecated parameters and add:
*.diagnostic_dest='/11g_base'
2.
Change *.compatible='10.2.0'
to *.compatible='11.1.0'
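The two pfile edits can be scripted. The sketch below works on a mock pfile (the real file would be the init&lt;SID&gt;.ora you copied over) and assumes GNU sed:

```shell
# Sketch of the pfile edits: comment out the three deprecated *_dump_dest
# parameters, add diagnostic_dest, and raise compatible to 11.1.0.
# A mock pfile is used here; substitute your real init<SID>.ora.
PFILE=$(mktemp)
cat > "$PFILE" <<'EOF'
*.background_dump_dest='/10g_base/admin/bdump'
*.user_dump_dest='/10g_base/admin/udump'
*.core_dump_dest='/10g_base/admin/cdump'
*.compatible='10.2.0'
EOF

sed -i \
  -e 's/^\*\.\(background\|user\|core\)_dump_dest/#&/' \
  -e "s/^\*\.compatible='10\.2\.0'/*.compatible='11.1.0'/" \
  "$PFILE"
echo "*.diagnostic_dest='/11g_base'" >> "$PFILE"

cat "$PFILE"   # deprecated parameters commented out, new ones in place
rm -f "$PFILE"
```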
Set Environment to new Oracle home
export ORACLE_HOME=/oraDB/app/oracle/product/11.1.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=TEST
export TNS_ADMIN=$ORACLE_HOME/network/admin/Context_Name
sqlplus '/ as sysdba'
Startup the database in upgrade mode.
SQL> startup upgrade
SQL> SPOOL upgrade.log
SQL> @catupgrd.sql
This takes a considerably long time, depending on the size of the database.
Once the upgrade finishes, it shuts down the database automatically.
Log in again as SYSDBA and start up in normal mode.
You will get NLS errors, because the nls directory under the new Oracle home does not have a 9idata directory.
Run the $ORACLE_HOME/nls/data/old/cr9idata.pl script to create the $ORACLE_HOME/nls/data/9idata directory.
(I actually copied the 9idata directory from the 10g Oracle home to the new location.)
After creating the directory, make sure that the ORA_NLS10 environment variable is set to the full path of the 9idata directory whenever you enable the 11g Oracle home.
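A small environment-setup sketch for that last point; the Oracle home path is the example used elsewhere in this post, and the check only warns if cr9idata.pl has not been run yet:

```shell
# Set ORA_NLS10 whenever the 11g home is enabled; warn if 9idata is missing.
# The ORACLE_HOME path is an example - substitute your own.
ORACLE_HOME=${ORACLE_HOME:-/oraDB/app/oracle/product/11.1.0/db_1}
export ORA_NLS10="$ORACLE_HOME/nls/data/9idata"
if [ ! -d "$ORA_NLS10" ]; then
  echo "warning: $ORA_NLS10 does not exist - run cr9idata.pl first" >&2
fi
```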
SQL>startup
Check the dba_registry for the components and its status
SQL> select comp_name,version, status from dba_registry;
Run the Post-Upgrade Status Tool, which provides a summary of the upgrade:
SQL>@?/rdbms/admin/utlu111s.sql
Perform upgrade actions that do not require the database to be in UPGRADE mode
SQL>@?/rdbms/admin/catuppst.sql
Check for invalid objects
SQL> select object_name, owner, object_type from all_objects where status = 'INVALID';
Compile Invalids
SQL> @?/rdbms/admin/utlrp.sql
Copy tnsnames.ora, listener.ora, sqlnet.ora, and any include files from the source Oracle home to the target Oracle home.
This finishes the upgrade of the 10.2.0.3 database to 11.1.0.6 in E-Business Suite R12.
Next, the existing 11.1.0.6 must be upgraded to 11.1.0.7 using a patchset, which I will discuss in my next post.
After Upgrade
Start the new database listener
Set the TNS_ADMIN environment variable to the directory where you created your listener.ora and tnsnames.ora files.
Run adgrants.sql
Copy $APPL_TOP/admin/adgrants.sql (adgrants_nt.sql for Windows) from the administration server node to the database server node.
$ sqlplus '/ as sysdba'
SQL> @adgrants.sql APPLSYS
Grant create procedure privilege on CTXSYS
Copy $AD_TOP/patch/115/sql/adctxprv.sql from
the administration server node to the database server node.
$ sqlplus apps/apps
SQL>@adctxprv.sql SYSTEM_password CTXSYS
Implement and run AutoConfig in the new database Oracle Home.
How to implement autoconfig in a new oracle_home …coming up…..

Html or Self-service page not coming up

This happened on a Test instance.

A user came to me today with this error. The exception details look something like this:
Exception Details:
oracle.apps.fnd.framework.OAException: Could not load application module ‘oracle.apps.pos.supplier.server.SuppSummAM’. at oracle.apps.fnd.framework.webui.OAJSPApplicationRegistry.registerApplicationModule(OAJSPApplicationRegistry.java:279) at oracle.apps.fnd.framework.webui.OAJSPApplicationRegistry.registerApplicationModule(OAJSPApplicationRegistry.java:78) at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:1283) at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:536) at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:424) at _OA._jspService(_OA.java:212) at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59) at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:335) at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:478) at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:401) at javax.servlet.http.HttpServlet.service(HttpServlet.java:856) at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:702) at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:359) at com.evermind.server.http.ServletRequestDispatcher.unprivileged_forward(ServletRequestDispatcher.java:252) at com.evermind.server.http.ServletRequestDispatcher.access$100(ServletRequestDispatcher.java:42) at com.evermind.server.http.ServletRequestDispatcher$2.oc4jRun(ServletRequestDispatcher.java:186) at oracle.oc4j.security.OC4JSecurity.doPrivileged(OC4JSecurity.java:283) at com.evermind.server.http.ServletRequestDispatcher.forward(ServletRequestDispatcher.java:191) at com.evermind.server.http.EvermindPageContext.forward(EvermindPageContext.java:322) at _RF._jspService(_RF.java:225) at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59) at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:335) at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:478) at 
oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:401) at javax.servlet.http.HttpServlet.service(HttpServlet.java:856) at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64) at oracle.apps.jtf.base.session.ReleaseResFilter.doFilter(ReleaseResFilter.java:26) at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:15) at oracle.apps.fnd.security.AppsServletFilter.doFilter(AppsServletFilter.java:318) at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:610) at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:359) at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:870) at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:451) at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:299) at com.evermind.server.http.AJPRequestHandler.run(AJPRequestHandler.java:187) at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260) at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:230) at oracle.oc4j.network.ServerSocketAcceptHandler.access$800(ServerSocketAcceptHandler.java:33) at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:831) at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303) at java.lang.Thread.run(Thread.java:595) ## Detail 0 ## JBO-30003: The application pool (parasaktidev.sinewloresoft.comDEVPCIL1527oracle.apps.pos.supplier.server.SuppSummAM) failed to checkout an application module due to the following exception: oracle.jbo.JboException: JBO-29000: Unexpected exception caught: oracle.apps.fnd.framework.OAException, msg=Application: FND, Message Name: FND_GENERIC_MESSAGE. 
Tokens: MESSAGE = java.sql.SQLException: ORA-20001: SQL_PLSQL_ERROR: N, ROUTINE, MO_GLOBAL.INIT, N, ERRNO, -20001, N, REASON, ORA-20001: APP-FND-02902: Multi-Org profile option is required. Please set either MO: Security Profile or MO: Operating Unit profile option. ORA-06512: at “APPS.FND_MESSAGE”, line 509 ORA-06512: at “APPS.MO_GLOBAL”, line 36 ORA-06512: at “APPS.MO_GLOBAL”, line 757 ORA-06512: at “APPS.MO_GLOBAL”, line 700 ORA-06512: at line 1


The highlighted part shows that the profile options MO: Security Profile and MO: Operating Unit need to be set correctly.
These profile options had been changed by another user while testing.
On reverting them to the appropriate values, everything worked fine.

About Kernel Parameters

Edit /etc/sysctl.conf to edit the kernel parameters required for Applications
vi /etc/sysctl.conf
# semaphores in kernel.sem: semmsl semmns semopm semmni
kernel.sem = 256 32000 100 142
kernel.shmmax = 2288490188
kernel.shmmni = 4096
kernel.shmall = 3279547
fs.file-max = 327679
net.ipv4.ip_local_port_range = 1024 65000
kernel.msgmax = 8192
kernel.msgmni = 2878
kernel.msgmnb = 65535
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
SHMMAX
The SHMMAX parameter is used to define the maximum size (in bytes) for a
shared memory segment and should be set large enough for the largest SGA size.
If the SHMMAX is set incorrectly (too low), it is possible that the Oracle SGA
(which is held in shared segments) may be limited in size. An inadequate
SHMMAX setting would result in the following:
ORA-27123: unable to attach to shared memory segment
You can determine the value of SHMMAX by performing the following:
# cat /proc/sys/kernel/shmmax
33554432
As you can see from the output above, the default value for SHMMAX is 32MB.
This is often too small for the Oracle SGA, so I generally set the SHMMAX parameter to 2GB.
NOTE: With a 32-bit Linux operating system, the default maximum size of the SGA is 1.7GB.
This is the reason I often set SHMMAX to 2GB: it is large enough to accommodate the default maximum SGA.
On a 32-bit Linux operating system without Physical Address Extension (PAE), the virtual address space is divided into a 3GB user space and a 1GB kernel space. It is therefore possible to create a 2.7GB SGA, but you will need to make several changes at the Linux operating system level by changing the mapped base. In the case of a 2.7GB SGA, you would want to set the SHMMAX parameter to 3GB.
This is the method I use most often: set SHMMAX on startup by inserting the following kernel parameter into the /etc/sysctl.conf startup file:
# echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf
• If you want to dynamically alter the value of SHMMAX without rebooting the machine, you can make the change directly through the /proc file system. The change can be made persistent by putting the command into the /etc/rc.local startup file:
# echo "2147483648" > /proc/sys/kernel/shmmax
• You can also use the sysctl command to change the value of SHMMAX:
# sysctl -w kernel.shmmax=2147483648
SHMMNI
We now look at the SHMMNI parameter, which sets the maximum number of shared memory segments system-wide. The default value is 4096. This value is sufficient and typically does not need to be changed.
You can determine the value of SHMMNI by performing the following:
# cat /proc/sys/kernel/shmmni
4096
SHMALL
Finally, we look at the SHMALL shared memory kernel parameter. This parameter
controls the total amount of shared memory (in pages) that can be used at one
time on the system. In short, the value of this parameter should always be at least:
ceil(SHMMAX/PAGE_SIZE)
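That minimum can be computed directly. A quick sketch using the 2GB SHMMAX from earlier and the 4096-byte i386 page size:

```shell
# Minimum SHMALL (in pages) = ceil(SHMMAX / PAGE_SIZE)
SHMMAX=2147483648                   # 2GB, as set earlier
PAGE_SIZE=4096                      # use `getconf PAGE_SIZE` on a real system
MIN_SHMALL=$(( (SHMMAX + PAGE_SIZE - 1) / PAGE_SIZE ))
echo "$MIN_SHMALL"                  # prints 524288
```

Since the default SHMALL of 2097152 pages is well above 524288, the default is adequate for this SHMMAX.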
The default size of SHMALL is 2097152 and can be queried using the following
command:
# cat /proc/sys/kernel/shmall
2097152
From the above output, the total amount of shared memory (in bytes) that can be
used at one time on the system is:
SM = (SHMALL * PAGE_SIZE)
= 2097152 * 4096
= 8,589,934,592 bytes
The default setting for SHMALL should be adequate for our Oracle installation.
NOTE: The page size in Red Hat Linux on the i386 platform is 4096 bytes. You can, however, use bigpages, which supports the configuration of larger memory page sizes.
fs.file-max
——————
When configuring our Linux database server, it is critical to ensure that the maximum number of file handles is large enough. This setting designates the number of open files allowed on the entire Linux system.
Use the following command to determine the maximum number of file handles for the entire
system:
# cat /proc/sys/fs/file-max
103062
Oracle recommends that the file handles for the entire system be set to at least 65536. In most cases, the default for Red Hat Linux is 103062.
This is the method I use most often: set the maximum number of file handles (using the kernel parameter fs.file-max) on startup by inserting the following line into the /etc/sysctl.conf startup file:
# echo "fs.file-max=65536" >> /etc/sysctl.conf
• If you want to dynamically alter the value of fs.file-max without rebooting the machine, you can make the change directly through the /proc file system. The change can be made persistent by putting the command into the /etc/rc.local startup file:
# echo "65536" > /proc/sys/fs/file-max
NOTE: It is also possible to query the current usage of file handles using the following
command:
# cat /proc/sys/fs/file-nr
1140 0 103062
In the example output above, the three values reported by file-nr are:
• the total number of allocated file handles;
• the total number of file handles currently being used;
• the maximum number of file handles that can be allocated; this is essentially the value of file-max (see above).
NOTE: If you need to increase the value in /proc/sys/fs/file-max, then make sure that the ulimit is set properly. Usually for kernel 2.4.20 it is set to unlimited. Verify the ulimit setting by issuing the ulimit command:
# ulimit
unlimited
net.ipv4.ip_local_port_range
————————————–
The /proc/sys/net/ipv4/ip_local_port_range file defines the local port range used by TCP and UDP traffic when choosing a local port. The file contains two numbers: the first is the first local port allowed for TCP and UDP traffic on the server, and the second is the last. For high-usage systems you may change the defaults to 32768 and 61000 (first and last, respectively).
The default setting for ip_local_port_range under Red Hat Linux is: "1024 4999"
To change the values of ip_local_port_range, type the following command on your terminal:
[root@deep] /# echo "32768 61000" > /proc/sys/net/ipv4/ip_local_port_range
You must restart your network for the change to take effect.
[root@deep] /# /etc/rc.d/init.d/network restart
kernel.msgmni
——————–
I use message queues to pass messages around, and I use the same queue to send and receive. This is actually very bad (now that I think about it), because you might fill up the queue and cause a deadlock (hmm, that appears to be happening!). Anyway, to increase the sizes, put bigger values into:
/proc/sys/kernel/msgmax
The maximum number of bytes per message (Red Hat 7.2 default = 8192)
/proc/sys/kernel/msgmnb
The total size of the queue in bytes (Red Hat 7.2 default = 16384)
/proc/sys/kernel/msgmni
The number of message queue identifiers (Red Hat 7.2 default = 16)
Basically, these low numbers make message queues pretty useless in a high-traffic application. You can probably just raise them to fairly large values.
msgmax: (default 2048)
The parameter "msgmax" is the maximum number of bytes that can be in
any one message. msgsnd() will give the error EINVAL if a message is
longer than this.
msgmnb: (default 4096)
The parameter "msgmnb" is the maximum number of message bytes that
can be on any one queue at a given time. If a message would cause a
queue to grow larger than this, msgsnd() will either block until
space is available, or give the error EAGAIN if the caller specified
non-blocking mode. For obvious reasons "msgmnb" must be at least as
large as "msgmax"; it is probably good to set it to some integral
multiple of "msgmax", so that a few messages of the maximum size can
be queued at any time. However, making "msgmnb" too large compared to
the total bytes available for messages (see below) may allow one
queue to hog the message system.
msgmni: (default 50)
The parameter "msgmni" is the number of message queue ids available
to the system. Each message queue requires one id. msgget() gives
the error ENOSPC if all the ids have been used up.
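The msgmnb/msgmax relationship is easy to check with a little arithmetic. Using the Red Hat 7.2 defaults quoted earlier:

```shell
# How many maximum-size messages fit on one queue before msgsnd()
# blocks (or returns EAGAIN in non-blocking mode)?
MSGMAX=8192                   # max bytes per message (Red Hat 7.2 default)
MSGMNB=16384                  # max bytes per queue (Red Hat 7.2 default)
echo $(( MSGMNB / MSGMAX ))   # prints 2
```

Only two full-size messages fit on a queue with these defaults, which illustrates why they are too small for a high-traffic application.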