Friday, November 9, 2012

ORA-31693 ORA-29913 ORA-31617 Unable To Open Dump File for Write On RAC

While exporting a few tables from an 11g RAC environment, I hit the issue below. Here are the error details and the workarounds that resolved it.

ORA-31693: Table data object "AABCVSUPPORT"."SOAPV_CHANNEL_INV_ESS_WW" failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file "/u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/log/ABCVSUPPORT_new_03.DMP" for write
ORA-19505: failed to identify file "/u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/log/ABCVSUPPORT_new_03.DMP"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
. . exported "ABCVSUPPORT"."SAOPV_INTRANSIT_EBS_WW"        1.548 GB 8647683 rows



Here is the command used to export the tables.

expdp system/<pwd> DIRECTORY=DATA_PUMP_DIR DUMPFILE=ABCVSUPPORT_new_%U.DMP LOGFILE=ABCVSUPPORT_new.LOG TABLES=ABCVSUPPORT.SOAPV_INTRANSIT_EBS_WW,ABCVSUPPORT.SOAPV_SHIPMENTS_EBS_WW,ABCVSUPPORT.SOAPV_SELLTHRU_EBS_WW,ABCVSUPPORT.SOAPV_CHANNEL_INV_EBS_WW PARALLEL=5 FILESIZE=4G

When the job scheduler starts parallel workers, one or more processes may execute on another node in the cluster and may not have access to the target directory. In a RAC environment, make sure the target directory is on SHARED storage (ASM/ACFS works well for this purpose).
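
On 11.2, another option worth knowing is the Data Pump CLUSTER parameter: CLUSTER=N keeps all worker processes on the instance you are connected to, so the directory only needs to be accessible from that one node. A sketch of the command above with that option added (everything else is unchanged; <pwd> remains a placeholder):

```shell
# Hypothetical re-run of the export with CLUSTER=N so that all Data Pump
# workers stay on the local instance; the directory then only needs to be
# readable/writable from this node rather than from the whole cluster.
expdp system/<pwd> DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=ABCVSUPPORT_new_%U.DMP LOGFILE=ABCVSUPPORT_new.LOG \
  TABLES=ABCVSUPPORT.SOAPV_INTRANSIT_EBS_WW,ABCVSUPPORT.SOAPV_SHIPMENTS_EBS_WW,ABCVSUPPORT.SOAPV_SELLTHRU_EBS_WW,ABCVSUPPORT.SOAPV_CHANNEL_INV_EBS_WW \
  PARALLEL=5 FILESIZE=4G CLUSTER=N
```

The trade-off is that you lose cross-node parallelism, but for a directory on local storage it avoids the error entirely.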

Solution

1. Try changing the value of the PARALLEL clause (experiment with different values and see whether the issue is resolved).
Suggestion: try values from 2 to 8.
2. Check whether the storage is shared among all instances at the operating system level. Verify that you can list the files on that mount point from every instance in the RAC (Real Application Clusters) environment as the oracle user.
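
This check can be scripted. A minimal sketch (run as the oracle user on every node; the helper name is mine, and the example path is the DATA_PUMP_DIR path from the errors above):

```shell
# Check that a directory exists and that a file can actually be created in it.
# Prints "writable: <dir>" on success; prints a diagnostic and returns
# non-zero on failure.
check_writable() {
  dir="$1"
  [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
  probe="$dir/.dp_write_test.$$"
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "writable: $dir"
  else
    echo "not writable: $dir"
    return 1
  fi
}

# Example: the DATA_PUMP_DIR path from the error messages above
# (the || true keeps the script going on nodes where the path is absent)
check_writable /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/log || true
```

Running the same check on every node (for example over ssh) quickly shows whether the directory really is shared.
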
3. Check whether the directory is created on shared storage or is local to one instance on non-shared storage.
4. Try the Data Pump export with a different schema than the one you are exporting now, to see whether the problem is peculiar to a specific schema.
5. Check that DBA_REGISTRY is clean (all components are VALID and show the same version).
Correct any components that are INVALID:

connect / as sysdba
SELECT COMP_NAME , VERSION , STATUS FROM DBA_REGISTRY;
6. Check the mount point you are using: is it NFS or something else, and are there any known issues with it?
7. Check whether any changes were made recently at the Oracle or operating system level.
8. Check whether there was a disk space issue at the time of the error.
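
The available space on the mount point can be checked with `df`; the `-P` (POSIX) flag keeps each filesystem on a single line, which makes the output easy to parse. A small sketch (the helper name is mine):

```shell
# Print the available kilobytes on the filesystem that holds a given path.
free_kb() {
  df -P "$1" | awk 'NR==2 {print $4}'
}

# Example: check space where the dump files are written; substitute the
# DATA_PUMP_DIR path (e.g. the one from the errors above) for /tmp
free_kb /tmp
```

Compare the result against FILESIZE times the expected number of dump files.
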
9. Check whether this behavior is node specific. Try Data Pump with a directory created on another node.
10. Check whether you have created any service that is node specific.
11. Error ORA-31617 usually means that the operating system is unable to write to the destination. The error is returned to Oracle by the operating system, so verify at the operating system level that file write operations from the local node to that particular mount point work correctly.
12. Check whether any I/O (input/output) errors are reported in the operating system logs.
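
Kernel-level I/O problems usually leave a signature in the system log. A hedged sketch of scanning for common patterns (the log location and the pattern list are assumptions; the function name is mine):

```shell
# Print log lines matching common I/O and NFS error signatures.
# Always exits 0, so it can be used in scripts even when nothing matches.
scan_io_errors() {
  grep -iE 'i/o error|nfs.*(timed out|not responding)|read-only file system' "$1" 2>/dev/null || true
}

# Example: on many Linux distributions the kernel log is /var/log/messages
# (other systems use /var/log/syslog, or check "dmesg" output)
scan_io_errors /var/log/messages
```
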
13. Check the DIRECTORY definition and the environment details. Verify that the directory is created on shared storage and that write permission is granted to the oracle user at the operating system level.
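
The directory definition and its grants can be read from the data dictionary. A sketch (run as a DBA on each node in turn; it assumes the directory object is DATA_PUMP_DIR, as in the command above):

```shell
# Hypothetical check: where DATA_PUMP_DIR points and who may write to it
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 200
COLUMN DIRECTORY_PATH FORMAT A80
-- Where does the directory object actually point?
SELECT DIRECTORY_NAME, DIRECTORY_PATH
  FROM DBA_DIRECTORIES
 WHERE DIRECTORY_NAME = 'DATA_PUMP_DIR';
-- Which users hold privileges (READ/WRITE) on it?
SELECT GRANTEE, PRIVILEGE
  FROM DBA_TAB_PRIVS
 WHERE TABLE_NAME = 'DATA_PUMP_DIR';
EOF
```

Then confirm the reported path exists with the right ownership at the OS level, for example with `ls -ld` on that path as the oracle user.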


Note: In my case, the first point itself helped: after removing the PARALLEL clause from the export command, the export worked fine.
