deciding the execution plan when the query is transformed

Oracle 10g brought cost-based query transformation. Consider the two cases below and decide which plan from the cost-based optimizer is good. Is the transformation it is considering actually good?

INDEX IDX_HIST_ID ON HISTDATA(ID)

exec :b1:=1

SELECT /*+ use_nl (A B) */ MAX (id)
  FROM histdata A, sustained_feed b
 WHERE A.id = b.id
   AND b.data_bucket = :B1;

Operation / Object Name                                Rows  Bytes  Cost
SELECT STATEMENT Optimizer Mode=ALL_ROWS                  1         1779
  SORT AGGREGATE                                          1     10
    NESTED LOOPS                                       610K     5M  1779
      TABLE ACCESS FULL MT_RPT.sustained_feed            17    119     3
      FIRST ROW                                         35K   105K   104
        INDEX RANGE SCAN (MIN/MAX) MT_RPT.IDX_HIST_ID   35K   105K   104


SELECT MAX (id)
  FROM histdata
 WHERE id IN (SELECT id
                FROM sustained_feed
               WHERE data_bucket = :B1);


Operation / Object Name                                Rows  Bytes  Cost
SELECT STATEMENT Optimizer Mode=ALL_ROWS                  1          872
  SORT AGGREGATE                                          1     10
    HASH JOIN                                          610K     5M   872
      TABLE ACCESS FULL MT_RPT.sustained_feed            17    119     3
      INDEX FAST FULL SCAN MT_RPT.IDX_HIST_ID          933K     2M   759

Note what the optimizer did here: it unnested the IN subquery into a hash join costed at 872, against 1779 for the hinted nested-loops plan, even though the hash join has to fast full scan all 933K index entries while the nested-loops plan probes the index with a MIN/MAX range scan per outer row. Whether the cheaper-costed transformation is really the better plan is the question; a NO_UNNEST hint inside the subquery disables the unnesting if you want to compare.
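
To compare actual elapsed times rather than trusting the cost column, a throwaway JDBC harness along these lines would do. This is only a sketch: the connection details are placeholders, and the two SQL strings are just the statements above with :B1 turned into a bind parameter.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ComparePlans {
    private static final String MAX_BY_JOIN =
        "SELECT /*+ use_nl(a b) */ MAX(id) " +
        "FROM histdata a, sustained_feed b " +
        "WHERE a.id = b.id AND b.data_bucket = ?";
    private static final String MAX_BY_IN =
        "SELECT MAX(id) FROM histdata " +
        "WHERE id IN (SELECT id FROM sustained_feed WHERE data_bucket = ?)";

    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/PROD", "mt_rpt", "secret")) {
            for (String sql : new String[] { MAX_BY_JOIN, MAX_BY_IN }) {
                long t0 = System.nanoTime();
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setInt(1, 1);   // :B1 = 1, as in the example above
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        System.out.printf("max=%d elapsed=%.1f ms%n",
                            rs.getLong(1), (System.nanoTime() - t0) / 1e6);
                    }
                }
            }
        }
    }
}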

corrupt indexes key-rowid value mismatch

Short circuit caused power down => SAN down => all database servers using this storage crashed => Oracle 9i DB started fine => but... OOPS! SAP application not working properly.


Once a user complained about some modules having issues:
The database alert log had recorded ORA-600 internal errors, each with a different first argument code. A search on MetaLink showed that the DB had problems executing DML on some tables because some of the indexes on those tables had logical corruption. The conclusion was that when the DB crashed, DML on these tables was in progress and index entries were being updated, so a bad row mismatch occurred: the key-rowid information in the index was left inconsistent with the table. Queries and DML involving these indexes were therefore failing.
On restart of the database, automatic crash recovery [without intervention of a DBA] was successful; no corruption was shown in the logs or in any datafile block. So once power was restored and the SAN was brought up again, the database opened successfully, but it had logical corruption hidden inside bad leaf blocks of indexes. Automatic crash recovery cannot fix index logical corruption; that is expected behavior. Some people won't expect such an issue and will blame the RDBMS, some won't. I won't.


Solution:


1. Identify the indexes that are in bad shape and fix the problematic ones:
a) drop and recreate such indexes, OR
b) rebuild them ONLINE so that the rebuild reads key values/rowids from the table (a normal rebuild reads the existing index entries, so the corruption does not go away); see the sketch after this list.


2. Restore from backup and recover
The latter was not feasible, as the DB was 1 TB in size and the prod database was on RAID 5. It would have taken more than two days to restore!
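
As a rough sketch of option 1 b), here is how the check and the online rebuild could be driven over JDBC. ANALYZE TABLE ... VALIDATE STRUCTURE CASCADE and ALTER INDEX ... REBUILD ONLINE are standard Oracle SQL; the connection details and the exact table/index names below are placeholders for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class FixCorruptIndex {
    public static void main(String[] args) throws SQLException {
        // Connection details are placeholders.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/PROD", "mt_rpt", "secret");
             Statement stmt = con.createStatement()) {

            // Cross-check the table rows against all of its index entries;
            // a key-rowid mismatch raises ORA-1499 and dumps a trace file.
            try {
                stmt.execute("ANALYZE TABLE histdata VALIDATE STRUCTURE CASCADE");
                System.out.println("No mismatch found.");
            } catch (SQLException e) {
                System.out.println("Corruption suspected: " + e.getMessage());
                // An ONLINE rebuild reads the key/rowid values back from the
                // table, so the bad leaf blocks are not copied into the new
                // index (a normal rebuild would read the old index entries).
                stmt.execute("ALTER INDEX idx_hist_id REBUILD ONLINE");
            }
        }
    }
}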

Another case:
I have seen people end up with corrupt indexes after killing their application, which means DML was still running against the tables when the kill happened.

oracle education content

Oracle has changed the look and feel of its tutorials a lot, and the links to reach the viewlets have changed too. You need to be good at searching; the URL is below:

http://apex.oracle.com/pls/apex/f?p=44785:2:1734844480451698::NO:2,CIR,RIR:P2_PRODUCT_ID,P2_RELEASE_ID:2014,85



a little bit on jdbc & oracle when every millisecond matters

When every millisecond matters:

1. Use the OCI JDBC driver; it is undoubtedly the fastest JDBC driver, especially when there is a lot of fetching. I found it performing about 10% faster for a bulk fetch.

2. If the Java application has to fetch large amounts of data, use a larger fetch size (see the sketch after this list).

3. When the DB server and the Java application server are on the same box, use the OCI JDBC driver; the DB connection will then use IPC, which gives the best performance. And obviously, as stated in 1. above, the JDBC OCI drivers are the fastest even when used for a remote connection to the DB.

4. If using connection pooling, there should be a good number of connections available; otherwise there will be more open cursors per session.

5. Analyze the use of PreparedStatement vs Statement; don't go blindly for PreparedStatement.

6. Use batch updates, but be wise in choosing the batch size: if it is kept small there is no benefit, and if it is too large it will need more undo in the database. Similarly, when you add statements to a batch, be wise about how many times the loop iterates, otherwise your Java program will need a lot of memory. The sketch below covers points 2 and 6.
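
A minimal sketch of points 2 and 6, reusing the histdata and sustained_feed tables from the first post; the fetch size of 500 and batch size of 100 are illustrative starting points to be tuned, not recommendations.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchAndBatchDemo {
    public static void main(String[] args) throws Exception {
        // Connection details are placeholders.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/PROD", "mt_rpt", "secret")) {

            // Point 2: raise the fetch size so a bulk fetch needs fewer
            // round trips (the Oracle driver default is 10 rows per trip).
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT id FROM histdata WHERE id > ?")) {
                ps.setFetchSize(500);
                ps.setLong(1, 1000L);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        long id = rs.getLong(1);
                        // process id ...
                    }
                }
            }

            // Point 6: batch updates with a bounded batch size, flushing
            // every BATCH_SIZE rows so the batch does not grow unbounded.
            final int BATCH_SIZE = 100;
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE sustained_feed SET data_bucket = ? WHERE id = ?")) {
                int pending = 0;
                for (long id = 1; id <= 1000; id++) {   // demo id range
                    ps.setInt(1, 1);
                    ps.setLong(2, id);
                    ps.addBatch();
                    if (++pending == BATCH_SIZE) {
                        ps.executeBatch();   // flush; bounds memory and undo
                        pending = 0;
                    }
                }
                if (pending > 0) ps.executeBatch();
                con.commit();
            }
        }
    }
}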

Some tail info, not actually related to performance:

-- Use JDBC drivers matching the DB server version. I have seen people running Oracle 10.2.0.4 while their JDBC driver was version 10.2.0.1; they sometimes had issues fetching from a result set. The suggested fix was to use the matching JDBC driver, and it worked.

-- If using a connection pool, take extreme care to check that result sets are closed properly and that open cursors do not exceed the limit; see the sketch below.
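
A minimal sketch of that last point, assuming the connections come from some javax.sql.DataSource-backed pool (the pool setup itself is not shown). With try-with-resources, the ResultSet, PreparedStatement and Connection are closed even on error paths, so cursors are released before ORA-01000 (maximum open cursors exceeded) can build up.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class PooledQuery {
    private final DataSource pool;   // e.g. UCP, DBCP, or an app-server pool

    public PooledQuery(DataSource pool) {
        this.pool = pool;
    }

    public long maxId(int bucket) throws SQLException {
        // Resources are closed in reverse order when the block exits,
        // even on exceptions, so no cursor leaks back into the pool.
        try (Connection con = pool.getConnection();
             PreparedStatement ps = con.prepareStatement(
                 "SELECT MAX(id) FROM sustained_feed WHERE data_bucket = ?")) {
            ps.setInt(1, bucket);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }
}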