expdp error : ORA-31617: unable to open dump file "/backup/salesdb_backup_02.dmp" for write ORA-19505: failed to identify file "/backup/nightly_backup/salesdb_backup_02.dmp" ORA-27037: unable to obtain file status

expdp parfile=params

content of params:
userid=scott/tiger
parallel=2
.
.
.
ORA-31693: Table data object "HR"."EMP" failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file "/backup/salesdb_backup_02.dmp" for write
ORA-19505: failed to identify file "/backup/nightly_backup/salesdb_backup_02.dmp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory

The remedy in 11g is to use the parameter CLUSTER=N.

This error occurs when the PARALLEL parameter is specified while running expdp against a RAC database and the same physical directory does not exist on all nodes, so worker processes started on the other nodes cannot write to the dump file path. From 11g onwards this can be avoided with CLUSTER=N, which keeps the whole job on the instance where it was started.
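For example, a parfile along these lines keeps all Data Pump workers on the node where the job was started (the DIRECTORY and DUMPFILE values here are illustrative assumptions):

userid=scott/tiger
directory=backup_dir
dumpfile=salesdb_backup_%U.dmp
parallel=2
cluster=N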

ORA-31633: unable to create master table "SYSTEM.NIGHTLY_EXPORT"

expdp  parfile=params

content of params:
userid=system/manager
job_name=nightly_export
.
.
.
[This creates a master table named NIGHTLY_EXPORT in the SYSTEM schema, or in whichever schema runs the expdp command]

Export: Release 11.2.0.3.0 - Production on Wed Sep 18 08:57:30 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-31633: unable to create master table "SYSTEM.NIGHTLY_EXPORT"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1020
ORA-00955: name is already used by an existing object

This error comes when an expdp job with the specified JOB_NAME already exists. Either 1) an expdp job was started with a given JOB_NAME while a job with the same JOB_NAME is still running, or 2) a job with that JOB_NAME is not currently executing but was previously stopped or killed, leaving its master table behind.

Remedy: Query DBA_DATAPUMP_JOBS to verify that no job with the specified JOB_NAME is running, then drop the master table SYSTEM.NIGHTLY_EXPORT, as in the example below.
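A minimal cleanup sketch (schema and job name as in the example above):

SQL> select owner_name, job_name, state
     from dba_datapump_jobs
     where job_name = 'NIGHTLY_EXPORT';
SQL> -- if no running job is returned:
SQL> drop table system.nightly_export;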

identifying mapping of OS disks to ASM disks

SQL> show parameter asm_diskstring
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/asm*
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
[grid@node1 ~]$ ls -ltrh /dev/asm*
brw-rw----. 1 grid asmadmin 8, 65 Sep 21 14:02 /dev/asm-disk4
brw-rw----. 1 grid asmadmin 8, 49 Sep 21 14:02 /dev/asm-disk3
brw-rw----. 1 grid asmadmin 8, 33 Sep 21 14:02 /dev/asm-disk2
brw-rw----. 1 grid asmadmin 8, 17 Sep 21 14:04 /dev/asm-disk1
[grid@node1 ~]$ cat /proc/partitions
major minor  #blocks  name
   8       32    5242880 sdc
   8       33    5237158 sdc1
   8       16    5242880 sdb
   8       17    5237158 sdb1
   8        0   15178280 sda
   8        1      76800 sda1
   8        2    1048576 sda2
   8        3     524288 sda3
   8        4          1 sda4
   8        5   13526016 sda5
   8       48    5242880 sdd
   8       49    5237158 sdd1
   8       64    5242880 sde
   8       65    5237158 sde1
   8       80   15618560 sdf
   8       81   15615148 sdf1
The major,minor pair in the ls -l output matches the major/minor columns of /proc/partitions: 8,17 is sdb1 (asm-disk1), 8,33 is sdc1 (asm-disk2), 8,49 is sdd1 (asm-disk3) and 8,65 is sde1 (asm-disk4). Actually I'm using udev rules [see bottom of this post], so I do not need this method; nevertheless it is useful.

SQL> select a.name DiskGroup, b.disk_number Disk#, b.name DiskName, b.total_mb, b.path
from v$asm_disk b, v$asm_diskgroup a
where a.group_number (+) =b.group_number
order by b.group_number, b.disk_number, b.name;

DISKGROUP   DISK# DISKNAME     TOTAL_MB PATH
---------- ------ ---------- ---------- ----------------
                1                     0 /dev/asm-disk3
                2                     0 /dev/asm-disk2
                3                     0 /dev/asm-disk4
DATA            0 DATA_0000        5114 /dev/asm-disk1
[grid@node1 ~]$ cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB7144c211-011b06b1", NAME="asm-disk1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB6b224856-6ec55511", NAME="asm-disk2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB5386bea3-5dbec50f", NAME="asm-disk3", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB27a05b79-e60cecc1", NAME="asm-disk4", OWNER="grid", GROUP="asmadmin", MODE="0660"

TNS-12555: TNS:permission denied

This error comes when you try to start the listener service as the wrong OS user. Suppose you have installed 11gR2 RAC: you can start the grid listener as the grid or oracle user, but if you try to start the listener as any other user, say test below, you will get: TNS-12555: TNS:permission denied
 

[test@node1 ~]$ /u01/11.2.0/grid/bin/lsnrctl start LISTENER
LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 21-SEP-2013 11:42:57
Copyright (c) 1991, 2011, Oracle.  All rights reserved.
Starting /u01/11.2.0/grid/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 11.2.0.3.0 - Production
System parameter file is /u01/11.2.0/grid/network/admin/listener.ora
Log messages written to /u01/11.2.0/grid/log/diag/tnslsnr/node1/listener/alert/log.xml
Error listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
TNS-12555: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00525: Insufficient privilege for operation
   Linux Error: 1: Operation not permitted

Listener failed to start. See the error message(s) above...
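The fix is simply to start the listener as the Grid home owner; a sketch using the users from the example above:

[test@node1 ~]$ su - grid
[grid@node1 ~]$ /u01/11.2.0/grid/bin/lsnrctl start LISTENER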

create standby database in Oracle Standard edition

Oracle Standard Edition cannot have Data Guard, i.e. it cannot run a standby database in managed recovery mode, but it can have a manually refreshed standby database.

The steps for creating a manually refreshed standby database are the same as those for creating a managed recovery standby database.

A standby database in Standard Edition has two main differences from Data Guard:

1) The primary database cannot perform log shipping, i.e. it cannot copy/send its archived log files to the standby database's archive location. So you will need to write an ftp/sftp or scp shell script (or batch script on Windows) that copies the primary's archived log files to the standby node; see the sketch after point 2.

2) The standby database is refreshed with the command below:

SQL> RECOVER STANDBY DATABASE UNTIL CANCEL;
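For point 1 above, the copy step can be as simple as this sketch, run from the primary (the hostname standby1 and the archive paths are illustrative assumptions; password-less ssh is assumed):

#!/bin/sh
# push the primary's archived logs to the standby node
scp /u01/arch/*.arc standby1:/u01/arch/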

Optional: You can set the parameter ARCHIVE_LAG_TARGET on the primary database to the interval (in seconds) at which you want archive logs to be generated automatically by a forced log switch.
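For example, to force a log switch at least every 30 minutes (the value is in seconds):

SQL> alter system set archive_lag_target=1800;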

What you do not need:

  • You do not need to set LOG_ARCHIVE_DEST_2 etc. on the primary or the standby database
  • You do not need to set DB_UNIQUE_NAME
  • You do not need to set parameter LOG_ARCHIVE_CONFIG

What you actually need is a way to refresh the standby database:

#script to refresh standby database
sqlplus -s "/ as sysdba" <<EOF
set autorecovery on
alter database recover automatic standby database until cancel;
exit
EOF

You can invoke this script periodically, after the archived logs have been copied over from the primary, at the interval set by ARCHIVE_LAG_TARGET or at any interval you choose.
 
To find out the last archived log applied on this standby, you can use this query at the mount stage:

SQL> select max(fhrba_seq) from x$kcvfh;

The other steps are common:

That is, you need to physically copy the database files (except the redo logfiles) from the primary to the standby node, and you also need to generate a standby control file on the primary and copy it to the standby node. You can perform this step via RMAN as well.
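The standby control file itself is generated on the primary like this (the path is illustrative):

SQL> alter database create standby controlfile as '/tmp/standby.ctl';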

Alternate way of creating standby database:

Restore a backup of the primary database on the standby node, mount it, and then refresh it using the command below. In this case you do not need to create a standby control file.

SQL> alter database recover automatic database using backup controlfile until cancel;

Additional Stuff:

If you want to fail over to this new standby (the one created the alternate way) for disaster recovery:
  ++ Apply as many archive logs from the primary to the standby as possible.
  ++ Optionally, copy the online redo logs (ORL) from the primary, provided they are accessible, and recover through them so as to advance the point of recoverability:
     SQL> recover database using backup controlfile until cancel
  ++ alter database open resetlogs;
NB:
 1) If you create a new tablespace on the primary, you may get the following errors while applying the corresponding logs on the standby (reporting) server:
   ORA-00283: recovery session canceled due to errors
   ORA-01244: unnamed datafile(s) added to controlfile by media recovery
   ORA-01110: data file 5: '/home/jobin/tes1.dbf'


   At this stage you need to run a command like this to create an empty datafile on the standby:

   a) SQL> select name from v$datafile where name like '%UNNAMED%';
   b) SQL> alter database create datafile '/home/oracle/product/ora101/dbs/UNNAMED00005'
           as '/oradata/dummy/test01.dbf';
      where /oradata/dummy is the datafile location on the standby.
   Now you can restart the recovery process.

log file sync wait event

This wait event belongs to the Commit wait class. It means a user session has issued a COMMIT but the commit has not completed because the log writer has not yet finished writing the redo log buffer to the online redo log file, i.e. not all redo buffers are flushed to disk yet. Excessive waits on log file sync indicate either too many commits or a slow I/O subsystem hosting the redo log files.
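To gauge how severe the waits are, one quick check is v$system_event, comparing the user-facing wait with LGWR's own write time; a sketch:

SQL> select event, total_waits,
            round(time_waited_micro / 1e6) time_waited_s,
            round(time_waited_micro / total_waits / 1000, 2) avg_wait_ms
     from v$system_event
     where event in ('log file sync', 'log file parallel write');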

User Actions that should be taken:

• Do not put redo logs on RAID 5. SSDs are now in much wider use, but NAND flash based SSDs should not be used to store redo log files, because redo log writes are sequential writes for which those disks are not well suited; DDR (DRAM) based SSDs can be used for redo logfiles. Place log files on dedicated disks, and consider putting log files on striped disks.

• Be prudent with the frequency of commits and the commit batch size. If there are lots of short-duration transactions, see if it is possible to batch transactions together so there are fewer distinct COMMIT operations. Each commit requires confirmation that the relevant redo is on disk. Although commits can be piggybacked by Oracle, reducing the overall number of commits by batching transactions can have a very beneficial effect; see the sketch after this list.

• Determine whether any activity can safely be done with the NOLOGGING / UNRECOVERABLE options. This may be an option in a data warehouse where the data can be reproduced.
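A toy PL/SQL illustration of the batching point above (the table t and its single numeric column are hypothetical):

begin
  for i in 1 .. 100000 loop
    insert into t values (i);
    if mod(i, 1000) = 0 then
      commit;   -- one commit per 1000 rows instead of one per row
    end if;
  end loop;
  commit;       -- final commit for the remainder
end;
/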

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

DDL Wait Option in 11g:

A DDL operation on a table or index cannot succeed while the table is in use by a transaction (DML holding a TM lock) in another session. Suppose the DBA issues:

SQL> alter table trans add (tran_code varchar2(10));
But instead of getting "Table altered", the DBA gets:
alter table trans add (tran_code varchar2(10));
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

The error message says the table is in use right now, probably by a transaction (DML), so getting an exclusive lock on the table may be next to impossible. Of course, the rows of the table are not locked forever. When sessions commit, the locks on those rows are released; but before that window gets very far, other sessions may update other rows of the table, and thus the slice of time in which the exclusive table lock could be acquired vanishes.

In Oracle Database 11g the DBA has a better option: the DDL Wait option. The DBA issues:
SQL> alter session set ddl_lock_timeout = 10;
Session altered.

Now, when a DDL statement in the session does not get the exclusive lock, it will not error out. Instead, it waits for 10 seconds, continually retrying the DDL operation until it succeeds or the time expires, whichever comes first. When the DBA issues:
SQL>  alter table trans add (tran_code varchar2(10));
the statement hangs instead of erroring out, and completes as soon as the blocking transaction releases its TM lock. The parameter can also be set at the system level with ALTER SYSTEM SET DDL_LOCK_TIMEOUT = 10.

ORA-02049: timeout: distributed transaction waiting for lock

This error comes when a DML statement waits for a row-level lock in a remote database and the distributed transaction wait time expires; the timeout defaults to 60 seconds, as specified by the init parameter DISTRIBUTED_LOCK_TIMEOUT. In all cases it involves database links (distributed transactions).

SQL> update emp@rep1 set deptno=20 where empno=7789;
update emp@rep1 set deptno=20 where empno=7789;
*
ERROR at line 1:
ORA-02049: timeout: distributed transaction waiting for lock
ORA-02063: preceding line from REP1

SQL> show parameter distri
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
distributed_lock_timeout             integer     60

Action: This situation is treated as a deadlock and the statement is rolled back. To lengthen the time-out interval, adjust the initialization parameter DISTRIBUTED_LOCK_TIMEOUT, then shut down and restart the instance; see the sketch below.
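A sketch of the change (the parameter is static, hence the restart; 120 seconds is just an example value):

SQL> alter system set distributed_lock_timeout=120 scope=spfile;
SQL> shutdown immediate
SQL> startup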

renaming log files and data files, and dropping log files in oracle - straightforward

I'm writing this post for beginner DBAs, to help them apply the concepts of redo log files, instance recovery and file management.

There are two ways to rename a datafile or redo logfile:

1) Data files and log files can both be renamed at the mount stage of the database with the same command: ALTER DATABASE RENAME FILE 'oldname' TO 'newname';

steps #a) shutdown immediate
      #b) copy /path/redolog1 /newpath/redolog1a
          copy /path/datafile1 /newpath/datafile01
      #c) startup mount
      #d) alter database rename file '/path/redolog1' to '/newpath/redolog1a';
          alter database rename file '/path/datafile1' to '/newpath/datafile01';
      #e) alter database open

2) Redo log files cannot be renamed while the database is open, though you can drop a redo logfile and create a new one with the new path/name. A datafile, on the other hand, can be renamed while the database is open (online) with ALTER TABLESPACE ... RENAME DATAFILE 'oldname' TO 'newname';

steps #a) alter tablespace mydata offline;
      #b) host copy /path1/datafile1 /newpath2/datafile_01
      #c) alter tablespace mydata rename datafile '/path1/datafile1' to '/newpath2/datafile_01';
      #d) alter tablespace mydata online;

SQL> alter database add logfile 'd:\oracle\product\10.2.0\oradata\orcl\REDO04.log' size 50m reuse;
Database altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area  234881024 bytes
Fixed Size                  1289916 bytes
Variable Size             163578180 bytes
Database Buffers           62914560 bytes
Redo Buffers                7098368 bytes
Database mounted.
SQL> host copy D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO04.LOG D:\REDO04.log
        1 file(s) copied.

SQL> alter database rename file 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO04.LOG' to 'D:\REDO04.log';
Database altered.

SQL> alter database open;
Database altered.

Dropping a redo logfile is also a pretty straightforward thing. You just need to know that an active redo log file/group cannot be dropped, and similarly the current redo logfile/group cannot be dropped.


An active redo log cannot be dropped because the checkpoint of its contents is not yet complete, and the current redo log cannot be dropped because the log writer process is currently writing into it. Once I found my junior DBA attempting to drop a redo logfile which was active, and it failed for the reason stated above. So the remedy I told her was to first issue ALTER SYSTEM CHECKPOINT.


And to drop the current redo log file group, first issue ALTER SYSTEM SWITCH LOGFILE so that it is no longer the current redo log group (it becomes an active group), then issue ALTER SYSTEM CHECKPOINT followed by the DROP LOGFILE GROUP or DROP LOGFILE command.

SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01624: log 2 needed for crash recovery of instance orcl (thread 1)
ORA-00312: online log 2 thread 1: 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\REDO02.LOG'

SQL> select * from v$log;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
         1          1        403   52428800          1 YES ACTIVE                 2839841 21-SEP-13
         2          1        404   52428800          1 YES ACTIVE                 2839884 21-SEP-13
         3          1        406   52428800          1 NO  CURRENT                2839888 21-SEP-13
         4          1        405   52428800          1 YES ACTIVE                 2839886 21-SEP-13
SQL> alter system checkpoint ;
System altered.
SQL> select * from v$log;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
         1          1        403   52428800          1 YES INACTIVE               2839841 21-SEP-13
         2          1        404   52428800          1 YES INACTIVE               2839884 21-SEP-13
         3          1        406   52428800          1 NO  CURRENT                2839888 21-SEP-13
         4          1        405   52428800          1 YES INACTIVE               2839886 21-SEP-13
SQL> alter database drop logfile group 2;
Database altered.

Alternatively, the member of redo log group #2 could have been queried and the drop logfile command issued against the member:

select l.group#, l.sequence#, l.status, lf.member, l.bytes/1024/1024 size_mb
from v$log l, v$logfile lf
where l.group# = lf.group#
order by l.sequence#;

alter database drop logfile '/u05/app/dbf/redo02log';

SQL> select * from v$log;
    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
         1          1        410   52428800          1 YES INACTIVE               2839985 21-SEP-13
         3          1        409   52428800          1 YES INACTIVE               2839983 21-SEP-13
         4          1        411   52428800          1 NO  CURRENT                2839987 21-SEP-13
SQL> alter database drop logfile group 4;
alter database drop logfile group 4
*
ERROR at line 1:
ORA-01623: log 4 is current log for instance orcl (thread 1) - cannot drop
ORA-00312: online log 4 thread 1: 'D:\REDO04.LOG'
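As described above, a log switch followed by a checkpoint makes group 4 droppable; a sketch:

SQL> alter system switch logfile;
SQL> alter system checkpoint;
SQL> alter database drop logfile group 4;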

what is cost in execution plan in oracle


COST in an execution plan is a relative figure that estimates how many single-block reads, multi-block reads and temporary-space operations will take place, along with CPU cycles (the CPU cost differs across operations). If two costs are close, it does not mean their execution times will also be close. Generally an index scan needs more CPU cycles than a full table scan. In the same way, a nested loop join involves more CPU than a hash join or a sort-merge join, while hash and sort-merge joins need a temporary work area in the UGA (part of the PGA), and possibly temp space, which a nested loop join does not.

You should gather system statistics under a representative workload so that Oracle knows the single-block read time, the multiblock read time, the CPU speed, and so on. The cost calculated is then as good as it can be, provided the table, index and column statistics (histograms, if needed) are accurate.
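A sketch of gathering workload system statistics with DBMS_STATS, using the 'start'/'stop' mode (run a representative workload in between):

SQL> exec dbms_stats.gather_system_stats('start');
SQL> -- ...let a representative workload run for a while...
SQL> exec dbms_stats.gather_system_stats('stop');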

e.g. Consider the two queries below, whose costs are close but whose execution times differ a lot:

1. select count(*) from (select * from mytab union all select * from mytab);
2. select sum(c) from (select count(*) c from mytab union all select count(*) c from mytab);

Their costs in the execution plans are quite close, but the execution time of the second query is less than half that of the first. The example below uses dynamic sampling, but that does not make a difference.

SQL> conn scott/tiger
Connected.
SQL> create table mytab as select * from all_objects;
Table created.
SQL> explain plan for select count(*) from (select * from mytab union all select * from mytab);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
Plan hash value: 1229214271
------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |     1 |   313   (1)| 00:00:04 |
|   1 |  SORT AGGREGATE        |       |     1 |            |          |
|   2 |   VIEW                 |       |   115K|   313   (1)| 00:00:04 |
|   3 |    UNION-ALL PARTITION |       |       |            |          |
|   4 |     TABLE ACCESS FULL  | MYTAB | 46942 |   159   (2)| 00:00:02 |
|   5 |     TABLE ACCESS FULL  | MYTAB | 46942 |   159   (2)| 00:00:02 |
------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement
16 rows selected.
SQL> explain plan for select sum(c) from (select count(*) c from mytab union all select count(*) c from mytab);
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
Plan hash value: 3291583229
--------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |     1 |    13 |   316   (1)| 00:00:04 |
|   1 |  SORT AGGREGATE        |       |     1 |    13 |            |          |
|   2 |   VIEW                 |       |     2 |    26 |   316   (1)| 00:00:04 |
|   3 |    UNION-ALL           |       |       |       |            |          |
|   4 |     SORT AGGREGATE     |       |     1 |       |            |          |
|   5 |      TABLE ACCESS FULL | MYTAB | 46942 |       |   158   (1)| 00:00:02 |
|   6 |     SORT AGGREGATE     |       |     1 |       |            |          |
|   7 |      TABLE ACCESS FULL | MYTAB | 46942 |       |   158   (1)| 00:00:02 |
--------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement
18 rows selected.
SQL> --now alter system flush buffer_cache from sys session
SQL> alter session set statistics_level=all;
Session altered.

Notice that the COST of the two queries is very close. Now check the actual execution times:

SQL> set timing on
SQL> select count(*) from (select * from mytab union all select * from mytab);
COUNT(*)
----------
100192
Elapsed: 00:00:01.77
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
SQL_ID 5wn0myv3kcsvs, child number 0
-------------------------------------
select count(*) from (select * from mytab union all select * from mytab)
Plan hash value: 1229214271
------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE        |       |      1 |      1 |      1 |00:00:01.77 |    1386 |
|   2 |   VIEW                 |       |      1 |   115K |   100K |00:00:01.60 |    1386 |
|   3 |    UNION-ALL PARTITION |       |      1 |        |   100K |00:00:01.10 |    1386 |
|   4 |     TABLE ACCESS FULL  | MYTAB |      1 |  46942 |  50096 |00:00:00.15 |     693 |
|   5 |     TABLE ACCESS FULL  | MYTAB |      1 |  46942 |  50096 |00:00:00.10 |     693 |
------------------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement

20 rows selected.
Elapsed: 00:00:00.09
SQL> --now alter system flush buffer_cache from sys session
SQL> select sum(c) from (select count(*) c from mytab union all select count(*) c from mytab);
SUM(C)
----------
100192
Elapsed: 00:00:00.61
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
SQL_ID gq7ca01amc88s, child number 0
-------------------------------------
select sum(c) from (select count(*) c from mytab union all select count(*) c from mytab)
Plan hash value: 3291583229
---------------------------------------------------------------------------------------------------
| Id  | Operation              | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |
---------------------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE        |       |      1 |      1 |      1 |00:00:00.60 |    1386 |   689 |
|   2 |   VIEW                 |       |      1 |      2 |      2 |00:00:00.60 |    1386 |   689 |
|   3 |    UNION-ALL           |       |      1 |        |      2 |00:00:00.60 |    1386 |   689 |
|   4 |     SORT AGGREGATE     |       |      1 |      1 |      1 |00:00:00.35 |     693 |   689 |
|   5 |      TABLE ACCESS FULL | MYTAB |      1 |  46942 |  50096 |00:00:00.27 |     693 |   689 |
|   6 |     SORT AGGREGATE     |       |      1 |      1 |      1 |00:00:00.25 |     693 |     0 |
|   7 |      TABLE ACCESS FULL | MYTAB |      1 |  46942 |  50096 |00:00:00.15 |     693 |     0 |
---------------------------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement

22 rows selected.
Elapsed: 00:00:00.10
SQL> spool off

You can see the second query is more than twice as fast as the first.
 

ORA-16038: log 1 sequence# 2901 cannot be archived - ORA-19809: limit exceeded for recovery files - log file switch (archiving needed)

ORA-16038: log 1 sequence# 36792 cannot be archived.
ORA-19809: limit exceeded for recovery files

Your DML statements, and even session logons, hang and you get the above error; it also appears in the alert log file. Further, you see the wait event 'log file switch (archiving needed)', which means the session is waiting for a log file switch to succeed, which in turn is waiting for older logfiles to be archived.

From v$session_wait (columns abridged):

SID    SEQ#   EVENT                               WAIT_CLASS      STATE     SECONDS_IN_WAIT
49     47571  log file switch (archiving needed)  Configuration   WAITING   84

If this wait event is seen continuously, it means the archiver process is not able to write archived redo logfiles, due to the reasons below:

1. The mount point or directory pointed to by LOG_ARCHIVE_DEST_n is out of disk space.

2. The flash recovery area is being used for archiving: there may be enough space on the directory/mount point hosting the flash recovery area, but the flash recovery area itself is filled up to the space limit defined by the instance init parameter DB_RECOVERY_FILE_DEST_SIZE.

SQL> select * from v$recovery_file_dest;

NAME                    SPACE_LIMIT    SPACE_USED     SPACE_RECLAIMABLE  NUMBER_OF_FILES
C:\flash_recovery_area  107374182400   107216960512   1081344            1147

(107374182400 bytes = 100GB)

Here you can see the flash recovery area has only SPACE_LIMIT - SPACE_USED = 149MB free, while the redo log file size (200MB) is bigger than this free space, so the redo log could not be archived.

Remedy 1:

 SQL> show parameter recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------------------
db_recovery_file_dest                string      C:\flash_recovery_area
db_recovery_file_dest_size           big integer 100G

If there is enough free space at the location pointed to by DB_RECOVERY_FILE_DEST, then increase the space limit (DB_RECOVERY_FILE_DEST_SIZE), which is set to 100GB above:
 
SQL> alter system set db_recovery_file_dest_size=400g;

Remedy 2:

- delete older archive logs which are no longer needed according to your backup policy:

RMAN> delete archivelog until time 'sysdate-7';

- delete expired archive logs (archive log files that were deleted physically with OS commands, not through RMAN). First, mark such archived logs as expired in RMAN:

RMAN> crosscheck archivelog all;
OR
RMAN> change archivelog all validate;

then delete the archive logs marked expired:

RMAN>delete expired archivelog all;

Now archiving will succeed.

ASM I/O queries

-- ASM I/O load and I/O response QUERIES

The query below returns, for each disk group: its name, average read/write times in milliseconds, and the total numbers of reads and writes.

Query 1:

SELECT name, ROUND(total_mb / 1024) total_gb, active_disks,
       reads / 1000 reads1k, writes / 1000 writes1k,
       ROUND(read_time) read_time, ROUND(write_time) write_time,
       ROUND(read_time * 1000 / reads, 2) avg_read_ms,
       ROUND(write_time * 1000 / writes, 2) avg_write_ms
FROM v$asm_diskgroup_stat dg
JOIN (SELECT group_number, COUNT(DISTINCT disk_number) active_disks,
             SUM(reads) reads, SUM(writes) writes,
             SUM(read_time) read_time, SUM(write_time) write_time
      FROM gv$asm_disk_stat
      WHERE mount_status = 'CACHED'
      GROUP BY group_number) ds
  ON (ds.group_number = dg.group_number)
ORDER BY dg.group_number;

If you want the query to report the load for the local ASM instance only, rather than cluster-wide, replace the view gv$asm_disk_stat with v$asm_disk_stat in the query above.

The query below returns the ASM disk path, average read time, and total numbers of reads/writes.
Query 2: takes the name of a disk group as a parameter
 
SELECT d.path disk_path, d.total_mb,
       ROUND(ds.read_secs * 1000 / ds.reads, 2) avg_read_ms,
       ds.reads / 1000 + ds.writes / 1000 io_1k,
       ds.read_secs + ds.write_secs io_secs,
       ROUND((d.reads + d.writes) * 100 /
             SUM(d.reads + d.writes) OVER (), 2) pct_io,
       ROUND((ds.read_secs + ds.write_secs) * 100 /
             SUM(ds.read_secs + ds.write_secs) OVER (), 2) pct_time
FROM v$asm_diskgroup_stat dg
JOIN v$asm_disk_stat d ON (d.group_number = dg.group_number)
JOIN (SELECT group_number, disk_number,
             SUM(reads) reads, SUM(writes) writes,
             ROUND(SUM(read_time), 2) read_secs,
             ROUND(SUM(write_time), 2) write_secs
      FROM gv$asm_disk_stat
      WHERE mount_status = 'CACHED'
      GROUP BY group_number, disk_number) ds
  ON (ds.group_number = d.group_number
      AND ds.disk_number = d.disk_number)
WHERE dg.name = '&diskgroup_name'
  AND d.mount_status = 'CACHED'
ORDER BY d.path;

--rebalance operations

Query 3:

SELECT dg.name, d.operation, d.state, d.power, d.actual,
       d.est_work,
       d.sofar * 100 / d.est_work pct_done, d.est_rate, d.est_minutes
FROM v$asm_diskgroup dg LEFT OUTER JOIN gv$asm_operation d
  ON (d.group_number = dg.group_number);


The query below returns file names with their associated template and striping type.

  Query 4:

SELECT rootname, d.name diskgroup_name, f.type, a.name filename,
       space / 1048576 allocated_mb, primary_region, striped,
       ROUND((hot_reads + hot_writes) / 1000, 2) hot_ios1k,
       ROUND((cold_reads + cold_writes) / 1000, 2) cold_ios1k
FROM (SELECT connect_by_isleaf, group_number, file_number, name,
             CONNECT_BY_ROOT name rootname, reference_index, parent_index
      FROM v$asm_alias a
      CONNECT BY PRIOR reference_index = parent_index) a
JOIN (SELECT DISTINCT name
      FROM v$asm_alias
      WHERE parent_index = group_number * POWER(2, 24)) b
  ON (a.rootname = b.name)
JOIN v$asm_file f
  ON (a.group_number = f.group_number
      AND a.file_number = f.file_number)
JOIN v$asm_diskgroup d
  ON (f.group_number = d.group_number)
WHERE a.connect_by_isleaf = 1
ORDER BY (cold_reads + cold_writes + hot_reads + hot_writes) DESC;

--template attributes (for disk group DATA)
SELECT t.name template_name, t.system, t.redundancy,
       t.stripe, t.primary_region
FROM v$asm_template t
JOIN v$asm_diskgroup d
  ON (d.group_number = t.group_number)
WHERE d.name = 'DATA'
ORDER BY t.name;