RMAN-05541: no archived logs found in target database

RMAN> DUPLICATE TARGET DATABASE
   FOR STANDBY
   FROM ACTIVE DATABASE
   DORECOVER
   SPFILE
     SET "db_unique_name"="ORCL"
NOFILENAMECHECK;
Starting Duplicate Db at 30-JUL-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/30/2013 02:45:30
RMAN-05541: no archived logs found in target database


Remedy: generate archived logs at the primary database (force a log switch so that at least one archived log exists), then rerun the duplicate.
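For example, on the primary (a minimal sketch, assuming the database is open and in archivelog mode):

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG;

Once at least one archived log exists, rerun the DUPLICATE command.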

sql plan baseline example

SQL> ALTER SESSION SET OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE;
SQL> ALTER SESSION SET optimizer_mode='FIRST_ROWS';

Session altered.

SQL>  SELECT /* TEST */
  2   *
  3   FROM SALES WHERE QUANTITY_SOLD > 40
  4  ORDER BY PROD_ID;

no rows selected

SQL> ALTER SYSTEM FLUSH SHARED_POOL;

System altered.

SQL> ALTER SESSION SET optimizer_mode='ALL_ROWS';

Session altered.

SQL>  SELECT /* TEST */
  2   *
  3   FROM SALES WHERE QUANTITY_SOLD > 40
  4  ORDER BY PROD_ID;

no rows selected

The second execution of the query has encountered a change in the execution plan, but the new plan has not yet been 'accepted' even though it is 'enabled'.

SQL> SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines
  2  WHERE sql_text like '%TEST%';

                                                              Ena-
SQL_HANDLE                     PLAN_NAME                      bled Acpt
------------------------------ ------------------------------ ---- ----
SYS_SQL_6f3dcd88c7488035       SYS_SQL_PLAN_c748803511df68d0  YES  YES
SYS_SQL_6f3dcd88c7488035       SYS_SQL_PLAN_c748803554bc8843  YES  NO

We can use the EVOLVE_SQL_PLAN_BASELINE function to compare the performance of the two plans. We note that the second plan has not been automatically evolved or accepted because it does not pass the performance improvement criterion that has been laid down.

SQL> SET LONG 10000
SQL> SELECT DBMS_SPM.evolve_sql_plan_baseline(sql_handle => 'SYS_SQL_6f3dcd88c7488035') FROM dual;

DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(SQL_HANDLE=>'SYS_SQL_6F3DCD88C7488035')
--------------------------------------------------------------------------------

-------------------------------------------------------------------------------
                        Evolve SQL Plan Baseline Report
-------------------------------------------------------------------------------

Inputs:
-------
  SQL_HANDLE = SYS_SQL_6f3dcd88c7488035
  PLAN_NAME  =
  TIME_LIMIT = DBMS_SPM.AUTO_LIMIT
  VERIFY     = YES
  COMMIT     = YES

Plan: SYS_SQL_PLAN_c748803554bc8843
-----------------------------------
  Plan was verified: Time used 1.457 seconds.
  Failed performance criterion: Compound improvement ratio <= 1.16.

                      Baseline Plan      Test Plan     Improv. Ratio
                      -------------      ---------     -------------
  Execution Status:        COMPLETE       COMPLETE
  Rows Processed:                 0              0
  Elapsed Time(ms):            1036            304              3.41
  CPU Time(ms):                1020            300               3.4
  Buffer Gets:                 1929           1727              1.12
  Disk Reads:                     0              0
  Direct Writes:                  0              0
  Fetches:                        0              0
  Executions:                     1              1

-------------------------------------------------------------------------------
                                 Report Summary
-------------------------------------------------------------------------------
Number of SQL plan baselines verified: 1.
Number of SQL plan baselines evolved: 0.

We can also manually accept the plan if we believe it is the better one:
SQL> var spm number;

SQL> exec :spm := dbms_spm.alter_sql_plan_baseline(sql_handle =>'SYS_SQL_6f3dcd88c7488035', -
> plan_name =>'SYS_SQL_PLAN_c748803554bc8843', attribute_name => 'ACCEPTED', attribute_value => 'YES');

PL/SQL procedure successfully completed.

SQL> SELECT sql_handle, plan_name, enabled, accepted FROM dba_sql_plan_baselines
  2  WHERE sql_text like '%TEST%';

                                                              Ena-
SQL_HANDLE                     PLAN_NAME                      bled Acpt
------------------------------ ------------------------------ ---- ----
SYS_SQL_6f3dcd88c7488035       SYS_SQL_PLAN_c748803511df68d0  YES  YES
SYS_SQL_6f3dcd88c7488035       SYS_SQL_PLAN_c748803554bc8843  YES  YES

Flashing Back a Failed Primary Database into a Physical Standby Database


The following steps assume that a failover has been performed to a physical standby database and that Flashback Database was enabled on the old primary database at the time of the failover. This procedure brings the old primary database back into the Data Guard configuration as a physical standby database.


Step 1 Determine the SCN at which the old standby database became the primary database.

On the new primary database, issue the following query to determine the SCN at which the old standby database became the new primary database:


SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;

Step 2 Flash back the failed primary database.

Shut down the old primary database (if necessary), mount it, and flash it back to the value for STANDBY_BECAME_PRIMARY_SCN that was determined in Step 1.

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

SQL> FLASHBACK DATABASE TO SCN standby_became_primary_scn;

Step 3 Convert the database to a physical standby database.

Perform the following steps on the old primary database:

1. Issue the following statement on the old primary database:

SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

This statement will dismount the database after successfully converting the control file to a standby control file.

2. Shut down and restart the database:

SQL> SHUTDOWN IMMEDIATE;

SQL> STARTUP MOUNT;

Step 4 Start transporting redo to the new physical standby database.

Perform the following steps on the new primary database:

1. Issue the following query to see the current state of the archive destinations:

SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, -
> ERROR,SRL FROM V$ARCHIVE_DEST_STATUS;

2. If necessary, enable the destination:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE;

3. Perform a log switch to ensure the standby database begins receiving redo data from the new primary database, and verify it was sent successfully. Issue the following SQL statements on the new primary database:

SQL> ALTER SYSTEM SWITCH LOGFILE;

SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION,-
> ERROR,SRL FROM V$ARCHIVE_DEST_STATUS;

On the new standby database, you may also need to change the LOG_ARCHIVE_DEST_n initialization parameters so that redo transport services do not transmit redo data to other databases.

Step 5 Start Redo Apply on the new physical standby database.

Issue the following SQL statement on the new physical standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE -
> USING CURRENT LOGFILE DISCONNECT;

Redo Apply automatically stops each time it encounters a redo record that is generated as the result of a role transition, so Redo Apply will need to be restarted one or more times until it has applied beyond the SCN at which the new primary database became the primary database. Once the failed primary database is restored and is running in the standby role, you can optionally perform a switchover to transition the databases to their original (pre-failure) roles. See the upcoming post "Performing a Switchover to a Physical Standby Database" for more information.
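As a hedged illustration (not part of the quoted procedure), you can check how far the standby has applied and keep restarting Redo Apply until the applied SCN passes STANDBY_BECAME_PRIMARY_SCN:

SQL> SELECT MAX(NEXT_CHANGE#) APPLIED_THRU FROM V$ARCHIVED_LOG WHERE APPLIED = 'YES';

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE -
> USING CURRENT LOGFILE DISCONNECT;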



convert failed primary database into physical standby database

Converting a Failed Primary into a Physical Standby Database Using RMAN Backups

 

To convert a failed primary database, Oracle recommends that you enable the Flashback Database feature on the primary and follow the procedure described in either Section 13.2.1 or Section 13.2.2. The procedures in those sections describe the fastest ways to convert a failed primary into either a physical or logical standby. However, if Flashback Database was not enabled on the failed primary, you can still convert the failed primary into either a physical or logical standby using a local backup of the failed primary, as described in the following sections:

The steps in this section describe how to convert a failed primary into a physical standby by using RMAN backups. This procedure requires that the COMPATIBLE initialization parameter of the old primary be set to at least 11.0.0.
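A quick way to confirm the setting (my addition, not part of the quoted procedure):

SQL> SHOW PARAMETER COMPATIBLE

SQL> SELECT VALUE FROM V$PARAMETER WHERE NAME = 'compatible';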



Step 1 Determine the SCN at which the old standby database became the primary database.

On the new primary database, issue the following query to determine the SCN at which the old standby database became the new primary database:

SQL> SELECT TO_CHAR(STANDBY_BECAME_PRIMARY_SCN) FROM V$DATABASE;

Step 2 Restore and recover the entire database.

Restore the database with a backup taken before the old primary had reached the SCN at which the standby became the new primary (standby_became_primary_scn). Then, perform a point-in-time recovery to recover the old primary to that same point.

Issue the following RMAN commands:

RMAN> RUN
{
SET UNTIL SCN standby_became_primary_scn + 1;
RESTORE DATABASE;
RECOVER DATABASE;
}

[ With user-managed recovery, you can first restore the database manually. Typically, a backup taken a couple of hours before the failover would be old enough. You can then recover the failed primary using the following command:

SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CHANGE -
> standby_became_primary_scn + 1;

Unlike a reinstantiation that uses Flashback Database, this procedure adds one to standby_became_primary_scn. For datafiles, flashing back to an SCN is equivalent to recovering up until that SCN plus one. ]


Step 3 Convert the database to a physical standby database.

Perform the following steps on the old primary database:

1. Issue the following statement on the old primary database:
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

This statement will dismount the database after successfully converting the control file to a standby control file.

2. Shut down and restart the database:
SQL> SHUTDOWN IMMEDIATE;

SQL> STARTUP MOUNT;

Step 4 Open the database as read-only.

Issue the following command:

SQL> ALTER DATABASE OPEN READ ONLY;

The goal of this step is to synchronize the control file with the database by using a dictionary check. After this command, check the alert log for any actions suggested by the dictionary check. Typically, no user action is needed if the old primary was not in the middle of adding or dropping datafiles during the failover.

Step 5 (Optional) Mount the standby again, if desired

If you have purchased a license for the Active Data Guard option and would like to operate your physical standby database in active query mode, skip this step. Otherwise, bring your standby database to the mount state.

For example:

SQL> SHUTDOWN IMMEDIATE;

SQL> STARTUP MOUNT;

Step 6 Restart transporting redo to the new physical standby database.

Before the new standby database was created, the new primary database probably stopped transmitting redo to the remote destination. To restart redo transport services, perform the following steps on the new primary database:

1. Issue the following query to see the current state of the archive destinations:

SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, -
> ERROR,SRL FROM V$ARCHIVE_DEST_STATUS;

2. If necessary, enable the destination:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n=ENABLE;

3. Perform a log switch to ensure the standby database begins receiving redo data from the new primary database, and verify it was sent successfully.

Note: This is an important step in order for the old primary to become a new standby following the new primary. If this step is not done, the old primary may recover to an incorrect database branch. The only way to correct the problem then is to convert the old primary again.

At the SQL prompt, enter the following statements:

SQL> ALTER SYSTEM SWITCH LOGFILE;

SQL> SELECT DEST_ID, DEST_NAME, STATUS, PROTECTION_MODE, DESTINATION, -
> ERROR,SRL FROM V$ARCHIVE_DEST_STATUS;

On the new standby database, you may also need to change the LOG_ARCHIVE_DEST_n initialization parameters so that redo transport services do not transmit redo data to other databases. This step can be skipped if both the primary and standby database roles were set up with the VALID_FOR attribute in one server parameter file (SPFILE). By doing this, the Data Guard configuration operates properly after a role transition.

Step 7 Start Redo Apply.

Start Redo Apply on the new physical standby database, as follows:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE -
> USING CURRENT LOGFILE DISCONNECT;

Once the failed primary database is restored and is running in the standby role, you can optionally perform a switchover to transition the databases to their original (pre-failure) roles. See Section 8.2.1, "Performing a Switchover to a Physical Standby Database" for more information.

DB lost write and standby database

Recovering From Lost-Write Errors on a Primary Database


During media recovery in a Data Guard configuration, a physical standby database can be used to detect lost-write data corruption errors on the primary database. This is done by comparing SCNs of blocks stored in the redo log on the primary database to SCNs of blocks on the physical standby database. If the SCN of the block on the primary database is lower than the SCN on the standby database, then there was a lost-write error on the primary database.
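Note that this detection depends on block-read redo being generated, which is governed by the DB_LOST_WRITE_PROTECT initialization parameter (this parameter note is my addition to the quoted text). A minimal sketch, set on both the primary and the standby:

SQL> ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL;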

Note: Because lost-write errors are detected only when a block is read into the cache by a primary and the corresponding redo is later compared to the block on the standby, there may be undetected stale blocks on both the primary and the standby that have not yet been read and verified. These stale blocks do not affect operation of the current database because, until those blocks are read, all blocks that have been used for queries or updates up to the SCN of the currently applied redo on the standby have been verified by the standby.

When a primary lost-write error is detected on the standby, one or more block error messages similar to the following for each stale block are printed in the alert file of the standby database:

Tue Dec 12 19:09:48 2006
STANDBY REDO APPLICATION HAS DETECTED THAT THE PRIMARY DATABASE
LOST A DISK WRITE OF BLOCK 26, FILE 7
NO REDO AT OR AFTER SCN 389667 CAN BE USED FOR RECOVERY.
.
.
.
The alert file then shows that an ORA-00752 error is raised on the standby database and the managed recovery is cancelled:
Slave exiting with ORA-752 exception
Errors in file /oracle/log/diag/rdbms/dgstwrite2/stwrite2/trace/stwrite2_pr00_23532.trc:
ORA-00752: recovery detected a lost write of a data block
ORA-10567: Redo is inconsistent with data block (file# 7, block# 26)
ORA-10564: tablespace TBS_2
ORA-01110: data file 7: '/oracle/dbs/btbs_21.f'
ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 57503
.
.
.
The standby database is then recovered to a consistent state, without any corruption to its datafiles caused by this error, at the SCN printed in the alert file:

Recovery interrupted!
Recovered data files to a consistent state at change 389569

This last message may appear significantly later in the alert file and it may have a lower SCN than the block error messages. Also, the primary database may operate without visible errors even though its datafiles may already be corrupted.

The recommended procedure to recover from such errors is a failover to the physical standby, as described in the following steps.

Steps to Failover to a Physical Standby After Lost-Writes Are Detected on the Primary

1. Shut down the primary database. All data at or after the SCN printed in the block error messages will be lost.
2. Issue the following SQL statement on the standby database to convert it to a primary:
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;

oracle standby file management

Actions Required on a Standby Database After Changes to a Primary Database


Renaming a Datafile in the Primary Database


When you rename one or more datafiles in the primary database, the change is not propagated to the standby database. Therefore, if you want to rename the same datafiles on the standby database, you must manually make the equivalent modifications there, because the modifications are not performed automatically, even if the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO. The following steps describe how to rename a datafile in the primary database and manually propagate the changes to the standby database. If you do not want the standby database to have the same physical structure as the primary database, then these steps are not required.

Rename the datafile in the standby control file. Note that the STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL.

1. Rename the datafile at the primary (see the sketch below).
2. Rename the datafile at the standby, with the database in mount mode:
SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/tbs_4.dbf'
  2> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';
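For step 1, a minimal sketch of the primary-side rename (hedged; the tablespace name tbs_4 is an assumption inferred from the file name):

SQL> ALTER TABLESPACE tbs_4 OFFLINE;

$ mv /disk1/oracle/oradata/payroll/tbs_4.dbf /disk1/oracle/oradata/payroll/tbs_x.dbf

SQL> ALTER DATABASE RENAME FILE '/disk1/oracle/oradata/payroll/tbs_4.dbf'
  2> TO '/disk1/oracle/oradata/payroll/tbs_x.dbf';

SQL> ALTER TABLESPACE tbs_4 ONLINE;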

If you do not rename the corresponding datafile at the standby system, and then try to refresh the standby database control file, the standby database will attempt to use the renamed datafile, but it will not find it. Consequently, you will see error messages similar to the following in the alert log:

ORA-00283: recovery session canceled due to errors
ORA-01157: cannot identify/lock datafile 4 - see DBWR trace file
ORA-01110: datafile 4: '/Disk1/oracle/oradata/payroll/tbs_x.dbf'

Adding or Dropping Online Redo Log Files

Changing the size and number of the online redo log files is sometimes done to tune the database. You can add online redo log file groups or members to the primary database without affecting the standby database. Similarly, you can drop log file groups or members from the primary database without affecting your standby database. However, these changes do affect the performance of the standby database after switchover.

--------------------------------------------------------------------------------
Caution: Whenever you add an online redo log file to the primary database, you must add a corresponding standby redo log file to the standby database.
--------------------------------------------------------------------------------

For example, if the primary database has 10 online redo log files and the standby database has 2, and then you switch over to the standby database so that it functions as the new primary database, the new primary database is forced to archive more frequently than the original primary database. Consequently, when you add or drop an online redo log file at the primary site, it is important that you synchronize the changes in the standby database by following these steps:


If Redo Apply is running, you must cancel Redo Apply before you can change the log files.

If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO, change the value to MANUAL.

Add or drop an online redo log file:

To add an online redo log file, use a SQL statement such as this:

SQL> ALTER DATABASE ADD LOGFILE
'/disk1/oracle/oradata/payroll/prmy3.log' SIZE 100M;

To drop an online redo log file, use a SQL statement such as this:

SQL> ALTER DATABASE DROP LOGFILE
'/disk1/oracle/oradata/payroll/prmy3.log';

Repeat the statement you used in Step 3 on each standby database. Then restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their original states. A consolidated sketch of the standby-side sequence follows.
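Putting the steps together on the standby side (a sketch; the log file name is illustrative):

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT = MANUAL;

SQL> ALTER DATABASE ADD LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log' SIZE 100M;

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT = AUTO;

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;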

Using Transportable Tablespaces with a Physical Standby Database


You can use the Oracle transportable tablespaces feature to move a subset of an Oracle database and plug it in to another Oracle database, essentially moving tablespaces between the databases. To move or copy a set of tablespaces when a physical standby is being used, perform the following steps:

Generate a transportable tablespace set that consists of datafiles for the set of tablespaces being transported and an export file containing structural information for the set of tablespaces.

Transport the tablespace set:

Copy the datafiles and the export file to the primary database.

Copy the datafiles to the standby database.

The datafiles must be copied to the directory defined by the DB_FILE_NAME_CONVERT initialization parameter. If DB_FILE_NAME_CONVERT is not defined, then issue the ALTER DATABASE RENAME FILE statement to modify the standby control file after the redo data containing the transportable tablespace has been applied and has failed. The STANDBY_FILE_MANAGEMENT initialization parameter must be set to AUTO.

Plug in the tablespace.

Invoke the Data Pump utility to plug the set of tablespaces into the primary database. Redo data will be generated and applied at the standby site to plug the tablespace into the standby database.
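A hedged sketch of the plug-in step with Data Pump import (the directory object, dump file, and datafile path are hypothetical):

$ impdp system DIRECTORY=dpump_dir DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/disk1/oracle/oradata/payroll/tts_01.dbf'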


Altering the Primary Database Control File

Using the SQL CREATE CONTROLFILE statement with the RESETLOGS option on your primary database will force the primary database to reset the online redo log file the next time the primary database is opened, thereby invalidating the standby database.
If you invalidated the control file for the standby database, re-create the file. If you invalidated the standby database, you must re-create the standby database.

NOLOGGING or Unrecoverable Operations

When you perform a DML or DDL operation using the NOLOGGING or UNRECOVERABLE clause, the standby database is invalidated and might require substantial DBA administrative activity to repair. You can specify the SQL ALTER DATABASE or SQL ALTER TABLESPACE statement with the FORCE LOGGING clause to override the NOLOGGING setting. However, this statement will not repair an already invalidated database.

If you perform an unrecoverable operation (such as a direct path load), you will see a performance improvement on the primary database; but there is no corresponding recovery process performance improvement on the standby database, and you will have to move the data manually to the standby database.

When the archived redo log file is copied to the standby site and applied to the physical standby database, a portion of the datafile is unusable and is marked as being unrecoverable. When you either fail over to the physical standby database, or open the standby database for read-only access, and attempt to read the range of blocks that are marked as UNRECOVERABLE, you will see error messages similar to the following:


ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option

To recover after the NOLOGGING clause is specified, you need to copy the datafile that contains the unjournaled data from the primary site to the physical standby site. Perform the following steps:

Step 1 Determine which datafiles should be copied.
Follow these steps:

Query the primary database:

SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                                  UNRECOVERABLE
----------------------------------------------------- -------------
/oracle/dbs/tbs_1.dbf                                          5216
/oracle/dbs/tbs_2.dbf                                             0
/oracle/dbs/tbs_3.dbf                                             0
/oracle/dbs/tbs_4.dbf                                             0

4 rows selected.

Query the standby database:

SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                                  UNRECOVERABLE
----------------------------------------------------- -------------
/oracle/dbs/stdby/tbs_1.dbf                                    5186
/oracle/dbs/stdby/tbs_2.dbf                                       0
/oracle/dbs/stdby/tbs_3.dbf                                       0
/oracle/dbs/stdby/tbs_4.dbf                                       0

4 rows selected.

Compare the query results of the primary and standby databases.

Compare the value of the UNRECOVERABLE_CHANGE# column in both query results. If the value of the UNRECOVERABLE_CHANGE# column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.

In this example, the value of the UNRECOVERABLE_CHANGE# in the primary database for the tbs_1.dbf datafile is greater, so you need to copy the tbs_1.dbf datafile to the standby site.

Step 2 On the primary site, back up the datafile you need to copy to the standby site.
Issue the following SQL statements:

SQL> ALTER TABLESPACE system BEGIN BACKUP;

% cp tbs_1.dbf /backup

SQL> ALTER TABLESPACE system END BACKUP;

Step 3 On the standby database, restart Redo Apply.

Issue the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

You might get the following error messages (possibly in the alert log) when you try to restart Redo Apply:

ORA-00308: cannot open archived log 'standby1'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'

If you get the ORA-00308 error and Redo Apply does not terminate automatically, you can cancel recovery by issuing the following statement from another terminal window:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

These error messages are returned when one or more log files in the archive gap have not been successfully applied. If you receive these errors, manually resolve the gaps, and repeat Step 3. See Section 5.8.4 for information about manually resolving an archive gap.

Determining If a Backup Is Required After Unrecoverable Operations

If you performed unrecoverable operations on your primary database, determine if a new backup operation is required by following these steps:

Query the V$DATAFILE view on the primary database to determine the system change number (SCN) or the time at which the Oracle database generated the most recent invalidated redo data.
Issue the following SQL statement on the primary database to determine if you need to perform another backup:

SELECT UNRECOVERABLE_CHANGE#,
TO_CHAR(UNRECOVERABLE_TIME, 'mm-dd-yyyy hh:mi:ss')
FROM V$DATAFILE;

If the query in the previous step reports an unrecoverable time for a datafile that is more recent than the time when the datafile was last backed up, then make another backup of the datafile in question.




RPMs for Oracle 10gR2 on 64 bit Linux

Oracle installation on 64-bit Linux requires that certain 32-bit packages also be installed on the system. So, first of all, you should be able to distinguish between a 32-bit and a 64-bit package. The following rpm command can be used for this purpose.

# rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep <package_name>

For example: rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-devel

See the Oracle installation documentation that comes with the Oracle installation disks.

For example, for installing Oracle 10gR2 on 64-bit RHEL 5:

Check the system RPMs while installing 10gR2 on 64-bit, and install any missing RPMs using yum:

rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep compat
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep binutils
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep compat-db
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep compat-libstdc++
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep control-center
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep gcc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep gcc-c++
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-common
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-devel
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-headers
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep gnome-libs
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libgcc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libaio
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libgcc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libstdc++
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libstdc++
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libgomp
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libXp
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep make
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep pdksh
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep sysstat
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep binutils
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep compat-libstdc++
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep gcc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-32bit
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep glibc-devel
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libaio
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libelf
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libgcc
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libstdc++
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep libstdc++-devel
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep make
rpm -qa --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" | grep sysstat

avahi-compat-libdns_sd-0.6.16-6.el5 (x86_64)
binutils-2.17.50.0.6-12.el5 (x86_64)
control-center-2.16.0-16.el5 (i386)
control-center-2.16.0-16.el5 (x86_64)
libgcc-4.1.2-46.el5 (x86_64)
libgcc-4.1.2-46.el5 (i386)
glibc-2.5-42 (i686)
glibc-2.5-42 (x86_64)
glibc-common-2.5-42 (x86_64)
glibc-common-2.5-42 (x86_64)
libgcc-4.1.2-46.el5 (x86_64)
libgcc-4.1.2-46.el5 (i386)
libaio-0.3.106-3.2 (i386)
libaio-0.3.106-3.2 (x86_64)
libgcc-4.1.2-46.el5 (x86_64)
libgcc-4.1.2-46.el5 (i386)
libstdc++-4.1.2-46.el5 (x86_64)
libstdc++-4.1.2-46.el5 (i386)
libstdc++-4.1.2-46.el5 (x86_64)
libstdc++-4.1.2-46.el5 (i386)
libXpm-3.5.5-3 (x86_64)
make-3.81-3.el5 (x86_64)
binutils-2.17.50.0.6-12.el5 (x86_64)
libgcc-4.1.2-46.el5 (x86_64)
libgcc-4.1.2-46.el5 (i386)
glibc-2.5-42 (i686)
glibc-2.5-42 (x86_64)
glibc-common-2.5-42 (x86_64)
libaio-0.3.106-3.2 (i386)
libaio-0.3.106-3.2 (x86_64)
elfutils-libelf-0.137-3.el5 (x86_64)
libgcc-4.1.2-46.el5 (x86_64)
libgcc-4.1.2-46.el5 (i386)
libstdc++-4.1.2-46.el5 (x86_64)
libstdc++-4.1.2-46.el5 (i386)
make-3.81-3.el5 (x86_64)

 groupadd oinstall
 usermod -g oinstall oracle
 yum groupinstall "X Window System" "GNOME Desktop Environment"

 yum install compat-db.i386
 yum install compat-db.x86_64
 yum install compat-libstdc++-33.i386
 yum install compat-libstdc++-33.x86_64
 yum install control-center.i386
 yum install control-center.x86_64
 yum install control-center-devel.i386
 yum install control-center-devel.x86_64
 yum install gcc-c++.x86_64
 yum install gcc-c++.x86_64
 yum install gcc-c++.x86_64
 yum install glibc-devel.i386
 yum install glibc-devel.x86_64
 yum install glibc-devel.x86_64
 yum install glibc-devel.i386
 yum install compat-glibc-headers.x86_64
 yum install glibc-headers.x86_64
 yum install imake.x86_64
 yum install libaio.i386
 yum install libaio.x86_64
 yum install libaio-devel.i386
 yum install libaio-devel.x86_64
 yum install libgomp.x86_64
 yum install libstdc++
 yum install libstdc++.i386
 yum install libstdc++.x86_64
 yum install libstdc++-devel.i386
 yum install libstdc++-devel.x86_64
 yum install libstdc++-devel.x86_64
 yum install sysstat.x86_64
 yum install libXp.i386
 yum install libXp.x86_64
 yum install libXpm.i386
 yum install libXpm.x86_64
 yum install libXpm-devel.i386
 yum install libXpm-devel.x86_64
 yum install libXp-devel.i386
 yum install libXp-devel.x86_64
 yum install mutt

 yum install unixODBC-devel.x86_64

 yum install mysql-connector-odbc-3.51.26r1127-1.el5.x86_64
 yum install mysql.x86_64
 yum install mysql-server.x86_64
 yum install mysql-devel.x86_64
yum install php-mysql.x86_64
 yum provides *myodbc-installer*


See above for how to tackle a missing library error.

To find the name of the RPM that provides a missing library, use:

yum provides libXt.so.6

This will give you the name of the missing package. Install the 32-bit version, the 64-bit version, or both, depending on the case, and go ahead! Here, i386 implies that the 32-bit version of the missing package is required.


Theory on SQL PLAN BASELINES Using Stored Outlines

SQL PLAN BASELINES Using Stored Outlines

If you don’t have access to SQL Tuning Sets you can capture your existing execution plan using
Stored Outlines. There are two ways to capture Stored Outlines, you can either manually create
one for each SQL statement using the CREATE OUTLINE command or let Oracle
automatically create a Stored Outline for each SQL statement that is executed. Below are the
steps needed to let Oracle automatically create the Stored Outlines for you.

1. Start a new session and issue the following command to switch on the automatic
capture of a Stored Outline for each SQL statement that gets parsed from now on until
you explicitly turn it off.
SQL > alter system set CREATE_STORED_OUTLINES=OLDPLAN;

NOTE: Ensure that the user for which the Stored Outlines are to be created has the CREATE
ANY OUTLINE privilege. If they don't, the Stored Outlines will not be captured.

2. Now execute your workload, either by running your application or by manually issuing SQL
statements. NOTE: if you manually issue the SQL statements, ensure you use the exact
SQL text used by the application; if it uses bind variables, you will have to use them too.

3. Once you have executed your critical SQL statements you should turn off the automatic
capture by issuing the following command:
SQL > alter system set CREATE_STORED_OUTLINES=false;

4. To confirm you have captured the necessary Stored Outlines issue the following SQL
statement.
SQL> select name, sql_text, category from user_outlines;

NOTE: Each Stored Outline should be in the OLDPLAN category.

5. The actual Stored Outlines are stored in the OUTLN schema. Before you upgrade you
should export this schema as a backup.
exp outln/outln file=soutline.dmp owner=outln rows=y

6. After the upgrade to Oracle Database 11gR2, you can migrate stored outlines for
one or more SQL statements to SQL plan baselines using
DBMS_SPM.MIGRATE_STORED_OUTLINE or through Oracle Enterprise Manager (EM).
You can specify which stored outline(s) to be migrated based on outline name, SQL
text, or outline category, or migrate all stored outlines in the system to SQL plan
baselines.

SQL> variable report clob;
-- Migrate a single Stored Outline by name
SQL> exec :report:=DBMS_SPM.MIGRATE_STORED_OUTLINE( -
attribute_name=>'OUTLINE_NAME', attribute_value =>
'stmt01');

-- Migrate all Stored Outlines
SQL> exec :report:=DBMS_SPM.MIGRATE_STORED_OUTLINE( -
attribute_name=>'ALL');

Note: If you are not planning on doing an in-place upgrade you will have to move the STS, SQL
Trace files or Stored Outlines to the Oracle Database 11g system.

Reference: see the earlier blog post with a practical session on migrating stored outlines to SQL plan baselines.

load sql plan baseline from cursor cache and making sql plan baseline of query with hints

It is possible to load plans for statements directly from the cursor cache into the SQL Management Base. By applying a filter (on the module name, the schema, or the SQL_ID) you can identify the SQL statement or set of SQL statements you wish to capture.

The plans can be loaded using the PL/SQL procedure DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE or through Oracle Enterprise Manager. The next time these statements are executed, their SQL plan baselines will be used. Loading plans directly from the cursor cache can be extremely useful if application SQL has been tuned by hand using hints. Since it is unlikely the application SQL can be changed to include the hint, capturing the tuned execution plan as a SQL plan baseline ensures that the application SQL will use that plan in the future. Using the simple steps below, you can use SPM to capture the hinted execution plan and associate it with the non-hinted SQL statement. You begin by capturing a SQL plan baseline for the non-hinted SQL statement.

1. In a SQL*Plus session run the non-hinted SQL statement so we can begin the SQL plan
baseline capture

SQL> SELECT prod_name, SUM(amount_sold)
FROM Sales s, Products p
WHERE s.prod_id=p.prod_id
AND prod_category = :ctgy
GROUP BY prod_name;

2. Then find the SQL_ID for the statement in the V$SQL view.

SQL> SELECT sql_id, sql_fulltext
FROM V$SQL
WHERE sql_text LIKE '%SELECT prod_name, SUM(%';

SQL_ID        SQL_FULLTEXT
------------- ---------------------------------------
74hnd835n81yv select SQL_ID, SQL_FULLTEXT from v$SQL
chj6q8z7ykbyy SELECT PROD_NAME, SUM(AMOUNT_SOLD)

3. Using the SQL_ID create a SQL plan baseline for the statement.

SQL> variable cnt number;
SQL> EXECUTE :cnt := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE( -
> sql_id => 'chj6q8z7ykbyy');

4. The plan that was captured is the sub-optimal plan and it will need to be disabled. The
SQL_HANDLE and PLAN_NAME are required to disable the plan. These can be found by
looking in the DBA_SQL_PLAN_BASELINES view.

SQL> SELECT sql_handle, sql_text, plan_name, enabled FROM
dba_sql_plan_baselines;

SQL_HANDLE               SQL_TEXT             PLAN_NAME                      ENABLED
------------------------ -------------------- ------------------------------ -------
SYS_SQL_bf5c9b08f72bde3e SELECT PROD_NAME,SUM SQL_PLAN_byr4v13vkrrjy42949306 Y

5. Using DBMS_SPM.ALTER_SQL_PLAN_BASELINE disable the bad plan

SQL> variable cnt number;
SQL> exec :cnt := DBMS_SPM.ALTER_SQL_PLAN_BASELINE( -
> SQL_HANDLE => 'SYS_SQL_bf5c9b08f72bde3e', -
> PLAN_NAME => 'SQL_PLAN_byr4v13vkrrjy42949306', -
> ATTRIBUTE_NAME => 'enabled', -
> ATTRIBUTE_VALUE => 'NO');

SQL> SELECT sql_handle, sql_text, plan_name, enabled
FROM dba_sql_plan_baselines;

SQL_HANDLE               SQL_TEXT             PLAN_NAME                      ENABLED
------------------------ -------------------- ------------------------------ -------
SYS_SQL_bf5c9b08f72bde3e SELECT PROD_NAME,SUM SQL_PLAN_byr4v13vkrrjy42949306 N

6. Now you need to modify the SQL statement using the necessary hints & execute the
modified statement.

SQL> SELECT /*+ INDEX(p) */ prod_name, SUM(amount_sold)
FROM Sales s, Products p
WHERE s.prod_id=p.prod_id
AND prod_category = :ctgy
GROUP BY prod_name;

7. Find the SQL_ID and PLAN_HASH_VALUE for the hinted SQL statement in the V$SQL
view.

SQL> SELECT sql_id, plan_hash_value, sql_fulltext
FROM V$SQL
WHERE sql_text LIKE '%SELECT /*+ INDEX(p) */ prod_na%';

SQL_ID        PLAN_HASH_VALUE SQL_FULLTEXT
------------- --------------- ---------------------------
9t5v8swp79svs      3262214722 select SQL_ID, SQL_FULLTEXT
djkqjd0kvgmb5      3074207202 SELECT /*+ INDEX(p) */

8. Using the SQL_ID and PLAN_HASH_VALUE for the modified plan, create a new
accepted plan for original SQL statement by associating the modified plan to the
original statement's SQL_HANDLE.

exec :cnt:=dbms_spm.load_plans_from_cursor_cache(
sql_id => 'djkqjd0kvgmb5',
plan_hash_value => 3074207202,
sql_handle => 'SYS_SQL_bf5c9b08f72bde3e');

creating sql plan baseline and its management

What most DBAs want is plan stability: execution plans should only change when they will result in performance gains. SQL plan management (SPM) ensures that runtime performance will never degrade due to the change of an execution plan. To guarantee this, only accepted (trusted) execution plans will be
used; any plan evolution will be tracked and evaluated at a later point in time, and only
accepted as verified if the new plan causes no runtime change or an improvement of the runtime.

SQL Plan Management has three main components:

1. SQL plan baseline capture:
Create SQL plan baselines that represent accepted (trusted) execution plans
for all relevant SQL statements. The SQL plan baselines are stored in a plan
history in the SQL Management Base in the SYSAUX tablespace.

2. SQL plan baseline selection:
Ensure that only accepted execution plans are used for statements with a SQL
plan baseline and track all new execution plans in the plan history for a
statement. The plan history consists of accepted and unaccepted plans. An
unaccepted plan can be unverified (newly found but not verified) or rejected
(verified but not found to be performant).

3. SQL plan baseline evolution:
Evaluate all unverified execution plans for a given statement in the plan history
to become either accepted or rejected.

The SQL plan history enables the SPM-aware optimizer to determine whether the best-cost plan it has produced is a brand new plan or not. A new plan represents a plan change that has potential to cause performance regression. For this reason, the SPM-aware optimizer does not choose a brand new best-cost plan. Instead, it chooses from a set of accepted plans. An accepted plan is one that has been either verified to not cause performance regression or designated to have good performance. A set of accepted plans is called a SQL plan baseline, which represents a subset of the plan history.

A brand new plan is added to the plan history as a non-accepted plan. Later, an SPM utility verifies its performance, and keeps it as a non-accepted plan if it would cause a performance regression, or changes it to an accepted plan if it provides a performance improvement. The plan performance verification process ensures both plan stability and plan adaptability. The SQL Management Base (SMB) holds the plan history for each statement; each plan history contains some accepted plans (the SQL plan baseline) and some non-accepted plans.



1. SQL plan baseline capture:


You can create a SQL plan baseline in several ways: using a SQL Tuning Set (STS); from the cursor cache; exporting from one database and importing into another; and automatically for every statement. Let's look at each in turn.

Creating SQL plan baselines from STS

If you are upgrading from 10gR2 or have an 11g test system, you might already have an STS containing some or all of your SQL statements. This STS might contain plans that perform satisfactorily. Let's call this STS MY_STS. You can create a SQL plan baseline from this STS as follows:
SQL> variable pls number;
SQL> exec :pls := dbms_spm.load_plans_from_sqlset(sqlset_name => 'MY_STS', -
>                   basic_filter => 'sql_text like ''select%p.prod_name%''');


This will create SQL plan baselines for all statements that match the specified filter.


Creating SQL plan baselines from cursor cache

You can automatically create SQL plan baselines for any cursor that is currently in the cache as follows:

SQL> exec :pls := dbms_spm.load_plans_from_cursor_cache( -
>                   attribute_name => 'SQL_TEXT', -
>                   attribute_value => 'select%p.prod_name%');


This will create SQL plan baselines for all statements whose text matches the specified string. Several overloaded variations of this function allow you to filter on other cursor attributes.


Creating SQL plan baselines using a staging table

If you already have SQL plan baselines (say on an 11g test system), you can export them to another system (a production system for instance). First, on the test system, create a staging table and pack the SQL plan baselines you want to export:

SQL> exec dbms_spm.create_stgtab_baseline(table_name => 'MY_STGTAB', -
>           table_owner => 'SH');



PL/SQL procedure successfully completed.
SQL> exec :pls := dbms_spm.pack_stgtab_baseline( -
>                   table_name => 'MY_STGTAB', -
>                   table_owner => 'SH', -
>                   sql_text => 'select%p.prod_name%')
;

This will pack all SQL plan baselines for statements that match the specified filter. The staging table, MY_STGTAB, is a regular table that you should export to the production system using Datapump Export.
On the production system, you can now unpack the staging table to create the SQL plan baselines:
SQL> exec :pls := dbms_spm.unpack_stgtab_baseline( -
>                   table_name => 'MY_STGTAB', -
>                   table_owner => 'SH', -
>                   sql_text => 'select%p.prod_name%')
;
This will unpack the staging table and create SQL plan baselines. Note that the filter for unpacking the staging table is optional and may be different than the one used during packing. This means that you can pack several SQL plan baselines into a staging table and selectively unpack only a subset of them on the target system.

Creating SQL plan baselines by migrating stored outlines:

SQL> var mig clob
SQL> exec :mig := dbms_spm.migrate_stored_outline(attribute_name => -
> 'outline_name', attribute_value => 'OL1', fixed => 'NO')


Creating SQL plan baselines automatically

You can create SQL plan baselines for all repeatable statements automatically by setting the parameter optimizer_capture_sql_plan_baselines to TRUE (default is FALSE). The first plan captured for any statement is automatically accepted and becomes part of the SQL plan baseline, so enable this parameter only when you are sure that the default plans are performing well. 

You can use the automatic plan capture mode when you have upgraded from a previous database version. Set optimizer_features_enable to the earlier version and execute your workload. Every repeatable statement will have its plan captured thus creating SQL plan baselines. You can reset optimizer_features_enable to its default value after you are sure that all statements in your workload have had a chance to execute.

Note that this automatic plan capture occurs only for repeatable statements, that is, statements that are executed at least twice. Statements that are only executed once will not benefit from SQL plan baselines since accepted plans are only used in subsequent hard parses.

The following example shows a plan being captured automatically when the same statement is executed twice:
SQL> alter session set optimizer_capture_sql_plan_baselines = true;
Session altered.

SQL> var pid number
SQL> exec :pid := 100;

PL/SQL procedure successfully completed.

SQL> select p.prod_name, s.amount_sold, t.calendar_year
2    from sales s, products p, times t
3    where s.prod_id = p.prod_id
4      and s.time_id = t.time_id
5      and p.prod_id < :pid;

PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
9 rows selected.

SQL> /
PROD_NAME AMOUNT_SOLD CALENDAR_YEAR
--------- ----------- -------------
...
9 rows selected.

SQL> alter session set optimizer_capture_sql_plan_baselines = false;

Session altered

Automatic plan capture will not occur for a statement if a stored outline exists for it and is enabled and the parameter use_stored_outlines is TRUE. In this case, turn on incremental capture of plans into an STS using the function capture_cursor_cache_sqlset() in the DBMS_SQLTUNE package (see the sketch below). After you have collected the plans for your workload into the STS, manually create SQL plan baselines using the method described earlier. Then, disable the stored outlines or set use_stored_outlines to FALSE. From now on, SPM will manage your workload, and stored outlines will not be used for those statements.

In this article, we have seen how to create SQL plan baselines. In the next, we will describe the SPM-aware optimizer and how it uses SQL plan baselines.
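A minimal sketch of the incremental capture (the STS name SPM_STS and the one-hour/60-second limits are assumptions):

SQL> BEGIN
  2    DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
  3      sqlset_name     => 'SPM_STS',
  4      time_limit      => 3600,
  5      repeat_interval => 60);
  6  END;
  7  /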

2. SQL plan baseline selection:

Each time a SQL statement is compiled, the optimizer first uses the traditional cost-based search
method to build a best-cost plan. If the initialization parameter
OPTIMIZER_USE_SQL_PLAN_BASELINES is set to TRUE (default value), then before the cost-based
plan is executed the optimizer will try to find a matching plan in the SQL statement's SQL
plan baseline; this is done as an in-memory operation, thus introducing no measurable overhead to
any application. If a match is found then it proceeds with this plan. Otherwise, if no match is
found, the newly generated plan will be added to the plan history; it will have to be verified
before it can be accepted as a SQL plan baseline. Instead of executing the newly generated plan
the optimizer will cost each of the accepted plans for the SQL statement and pick the one with
the lowest cost (note that a SQL plan baseline can have more than one verified/accepted plan for
a given statement). However, if a change in the system (such as a dropped index) causes all of the
accepted plans to become non-reproducible, the optimizer will use the newly generated cost-based
plan.

It is also possible to influence the optimizer’s choice of plan when it is selecting a plan from a
SQL plan baseline. SQL plan baselines can be marked as fixed. Fixed SQL plan baselines indicate
to the optimizer that they are preferred. If the optimizer is costing SQL plan baselines and one of
the plans is fixed, the optimizer will only cost the fixed plan and go with that if it is reproducible.
If the fixed plan(s) are not reproducible the optimizer will go back and cost the remaining SQL
plan baselines and select the one with the lowest cost. Note that costing a plan is nowhere near
as expensive as a hard parse. The optimizer is not looking at all possible access methods but at
one specific access path.
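As a hedged illustration (reusing the handle and plan name from the earlier example), a plan can be marked as fixed with ALTER_SQL_PLAN_BASELINE:

SQL> variable cnt number;
SQL> exec :cnt := DBMS_SPM.ALTER_SQL_PLAN_BASELINE( -
> sql_handle => 'SYS_SQL_bf5c9b08f72bde3e', -
> plan_name => 'SQL_PLAN_byr4v13vkrrjy42949306', -
> attribute_name => 'fixed', attribute_value => 'YES');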

3. SQL plan baseline evolution:

When the optimizer finds a new plan for a SQL statement, the plan is added to the plan history
as a non-accepted plan that needs to be verified before it can become an accepted plan. It is
possible to evolve a SQL statement’s execution plan using Oracle Enterprise Manager or by
running the command-line function DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE. Using either of
these methods you have three choices:

1. Accept the plan only if it performs better than the existing SQL plan baseline
2. Accept the plan without doing performance verification
3. Run the performance comparison and generate a report without evolving the new plan.

If you choose option 1, it will trigger the new plan to be evaluated to see if it performs better
than a selected plan baseline. If it does, then the new plan will be added to the SQL plan baseline,
as an accepted plan. If not, the new plan will remain in the plan history as a non-accepted plan
but its LAST_VERIFIED attribute will be updated with the current timestamp. A formatted text
report is returned by the function, which contains the actions performed by the function as well
as side-by-side display of performance statistics of the new plan and the original plan.

If you choose option 2, the new plan will be added to the SQL plan baseline as an accepted plan
without verifying its performance. The report will also be generated.

If you choose option 3 the new plan will be evaluated to see if it performs better than a selected
plan baseline but it will not be accepted automatically if it does. After the evaluation only the
report will be generated.
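A hedged sketch of the three choices (the SQL handle is the one from the earlier evolve example):

SQL> variable rep clob;
SQL> -- 1. Accept only if it performs better (verify and commit)
SQL> exec :rep := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE( -
> sql_handle => 'SYS_SQL_6f3dcd88c7488035', verify => 'YES', commit => 'YES');
SQL> -- 2. Accept without performance verification
SQL> exec :rep := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE( -
> sql_handle => 'SYS_SQL_6f3dcd88c7488035', verify => 'NO', commit => 'YES');
SQL> -- 3. Report only, without evolving the plan
SQL> exec :rep := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE( -
> sql_handle => 'SYS_SQL_6f3dcd88c7488035', verify => 'YES', commit => 'NO');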

Monitoring SPM through DBA views

The view DBA_SQL_PLAN_BASELINES displays information about the SQL plan baselines
currently created for specific SQL statements. Here is an example.

select sql_handle, sql_text, plan_name, origin,
enabled, accepted, fixed, autopurge
from dba_sql_plan_baselines;

The above select statement returns the following rows

SQL_HANDLE   SQL_TEXT   PLAN_NAME        ORIGIN   ENA ACC FIX AUT
------------ ---------- ---------------- -------- --- --- --- ---
SYS_SQL_6fe2 select...  SYS_SQL_PLAN_1ea AUTO-CAP YES NO  NO  YES
SYS_SQL_6fe2 select...  SYS_SQL_PLAN_4be AUTO-CAP YES YES NO  YES


In this example the same SQL statement has two plans, both of which were automatically captured. One of the plans (SYS_SQL_PLAN_4be) is a plan baseline as it is both enabled and accepted. The other plan (SYS_SQL_PLAN_1ea) is a non-accepted plan, which has been queued for evolution or verification. It has been automatically captured and queued for verification; its accepted value is set to NO. Neither of the plans is fixed and they are both eligible for automatic purge.

To check the detailed execution plan for any SQL plan baseline you can use the table function DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE.
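For example (a sketch using the SQL handle from the earlier example):

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
  2    sql_handle => 'SYS_SQL_6f3dcd88c7488035', format => 'basic'));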

It is also possible to check whether a SQL statement is using a SQL plan baseline by looking in V$SQL. If the SQL statement is using a SQL plan baseline the plan_name for the plan selected from that SQL plan baseline will be in the sql_plan_baseline column of V$SQL. You can join the V$SQL view to the DBA_SQL_PLAN_BASELINES view using the following query:

Select s.sql_text, b.plan_name, b.origin, b.accepted
From dba_sql_plan_baselines b, v$sql s
Where s.exact_matching_signature = b.signature
And s.SQL_PLAN_BASELINE = b.plan_name;


Integration with Automatic SQL tuning

In Oracle Database 11g, the SQL Tuning Advisor, a part of the Tuning and Diagnostics pack, is
automatically run during the maintenance window. This automatic SQL tuning task targets high-load
SQL statements. These statements are identified by the execution performance data
collected in the Automatic Workload Repository (AWR) snapshots. If the SQL Tuning Advisor
finds a better execution plan for a SQL statement it will recommend a SQL profile. Some of
these high-load SQL statements may already have SQL plan baselines created for them. If a SQL
profile recommendation made by the automatic SQL tuning task is implemented, the execution
plan found by the SQL Tuning Task will be added as an accepted SQL plan baseline.

The SQL Tuning Advisor can also be invoked manually, by creating a SQL Tuning Set for a
given SQL statement. If the SQL Tuning Advisor recommends a SQL profile for the statement
and it is manually implemented then that profile will be added as an accepted plan to the SQL
statement's plan baseline, if one exists.

Using SQL Plan Management for upgrade:

Undertaking a database upgrade is a daunting task for any DBA. Once the database has been
successfully upgraded you must still run the gauntlet of possible database behavior changes. On
the top of every DBA’s list of potential behavior changes are execution plan changes. With the
introduction of SQL Plan Management you now have an additional safety net to ensure
execution plans don’t change during the upgrade. In order to take full advantage of this safety net
you need to capture your existing execution plans before you upgrade so they can be used to seed
SPM.

Using SQL Tuning Sets:

If you have access to SQL Tuning Sets (STS) in the diagnostics pack then this is the easiest way
to capture your existing 10g execution plans. An STS is a database object that includes one or
more SQL statements along with their execution statistics, execution context and their current
execution plan. (An STS in Oracle Database 10gR1 will not capture the execution plans for the
SQL statements so it can’t be used to seed SPM. Only a 10gR2 STS will capture the plans).

To begin you will need to create a new STS. You can either do this through Oracle Enterprise
Manager (EM) or using the DBMS_SQLTUNE package. In this example we will use
DBMS_SQLTUNE.

BEGIN
SYS.DBMS_SQLTUNE.CREATE_SQLSET (
sqlset_name => 'SPM_STS',
description => '10g plans');
END;
/
Once the STS has been created we need to populate it. You can populate an STS from the
workload repository, another STS or from the cursor cache. In this case we will capture the SQL
statements and their execution plans from the cursor cache. This is a two-step process. In the
first step we create a ref cursor to select the specified SQL from the cursor cache (in this case all
non sys SQL statements). Then in the second step we use that ref cursor to populate the STS.

DECLARE
stscur dbms_sqltune.sqlset_cursor;
BEGIN
OPEN stscur FOR
SELECT VALUE(P)
FROM TABLE(dbms_sqltune.select_cursor_cache(
'parsing_schema_name <> ''SYS''',
null, null, null, null, 1, null, 'ALL')) P;
-- populate the sqlset
dbms_sqltune.load_sqlset(sqlset_name => 'SPM_STS',
populate_cursor => stscur);
END;
/
Once the software upgrade is completed the execution plans can be bulk loaded from an STS
into SPM using the PL/SQL procedure DBMS_SPM.LOAD_PLANS_FROM_SQLSET or
through Oracle Enterprise Manager (EM).

SQL> Variable cnt number
SQL> execute :cnt := DBMS_SPM.LOAD_PLANS_FROM_SQLSET( -
sqlset_name => 'SPM_STS');
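
Plans bulk loaded from an STS are stored as accepted baselines. A quick sanity check after the load (a sketch; the ORIGIN filter shows only bulk-loaded plans):

SQL> print cnt

SQL> SELECT sql_handle, plan_name, origin, enabled, accepted
  2  FROM dba_sql_plan_baselines
  3  WHERE origin = 'MANUAL-LOAD';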

A few bugs I noticed:

1. A SQL Tuning Set was packed in 10.2.0.5 and unpacked in 11.2.0.1; the 11g database hung when the plan was enabled in 11g.

2. SQL plan baselines were packed and unpacked. After unpacking, the baseline that had the higher cost for the same SQL handle was not accepted in the database where it was unpacked; that plan later had to be evolved in order to be accepted.
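
For reference, the pack/unpack round trip for baselines follows this general pattern (a sketch; the staging table name and owner are illustrative):

-- on the source database: create the staging table and pack the baselines into it
SQL> EXEC DBMS_SPM.CREATE_STGTAB_BASELINE(table_name => 'SPM_STGTAB', table_owner => 'SCOTT');
SQL> variable cnt number
SQL> EXEC :cnt := DBMS_SPM.PACK_STGTAB_BASELINE(table_name => 'SPM_STGTAB', table_owner => 'SCOTT');

-- copy SPM_STGTAB to the destination (e.g. with Data Pump), then unpack there
SQL> EXEC :cnt := DBMS_SPM.UNPACK_STGTAB_BASELINE(table_name => 'SPM_STGTAB', table_owner => 'SCOTT');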

Note: OPTIMIZER_COST from the view DBA_SQL_PLAN_BASELINES and the cost shown by DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE can differ. This is expected: the stored cost is recorded when the plan is captured, while DISPLAY_SQL_PLAN_BASELINE reproduces the plan under the current environment and statistics.
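
To see both side by side, reusing the sql_handle from the earlier example:

SQL> SELECT plan_name, optimizer_cost
  2  FROM dba_sql_plan_baselines
  3  WHERE sql_handle = 'SYS_SQL_6f3dcd88c7488035';

SQL> SELECT * FROM TABLE(
  2    DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
  3      sql_handle => 'SYS_SQL_6f3dcd88c7488035', format => 'TYPICAL'));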





References: white paper "SQL Plan Management in Oracle Database 11g"; Oracle blog.

Oracle text index replacing like query

Text indexes in a comparison query (tuning a LIKE query with a text index for large tables).
Consider this text comparison query:

select e.empno,e.ename,d.loc,d.dname from emp e ,dept d
where e.ename like '%'||d.dname||'%'

This is a suitable case for a text index if table emp is quite big.

Steps:

1. From a DBA user:

GRANT SELECT ON ctxsys.dr$ths_phrase TO scott;
GRANT EXECUTE ON CTX_DDL TO scott;

From the CTXSYS user:

exec ctx_ddl.create_stoplist('empty_stoplist', 'BASIC_STOPLIST');
exec ctx_ddl.create_preference('matching_lexer', 'BASIC_LEXER');

2. From the application user scott:

CREATE INDEX emp_ename2 ON emp2 (ename) INDEXTYPE IS CTXSYS.CONTEXT parameters ('lexer CTXSYS.matching_lexer');

To take advantage of this text index, rewrite the above query as below:

select e.*
from emp2 e, dept d
where CONTAINS(e.ename, '%'||d.dname||'%')>0
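
Note that a CONTEXT index is not maintained transactionally: rows inserted after the index is created are not visible to CONTAINS until the index is synchronized (in the spool below the index is simply dropped and recreated instead). A minimal sketch of syncing it in place:

SQL> EXEC ctx_ddl.sync_index('emp_ename2');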

***** Spool outputs along with execution plans *****

SYSTEM session:

1* alter user ctxsys identified by sys
SQL> /

User altered.

SQL> GRANT SELECT ON ctxsys.dr$ths_phrase to scott;
Grant succeeded.

SQL> GRANT EXECUTE ON CTX_DDL TO scott;
Grant succeeded.

SQL> exec ctx_ddl.create_stoplist('empty_stoplist', 'BASIC_STOPLIST');
PL/SQL procedure successfully completed.

SQL> exec ctx_ddl.create_preference('matching_lexer', 'BASIC_LEXER');

PL/SQL procedure successfully completed.

--SCOTT session --

SQL> create table emp2 as select * from emp;
Table created.

SQL> alter table emp2 modify empno number(7);
Table altered.

SQL> alter table emp2 modify ename varchar2(200);
Table altered.

SQL> create sequence s1 ;
Sequence created.

SQL> insert into emp2(empno,ename) select s1.nextval,'SALES' from emp2;
15 rows created.

The insert was repeated several times, until:

30720 rows created.

SQL> commit;
Commit complete.

SQL> insert into emp2(empno,ename) select s1.nextval,'ABCD SALES HOLA' from emp2 where rownum<=1000;
1000 rows created.

SQL> select * from dept;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

SQL> insert into emp2(empno,ename) select s1.nextval,'ABCDACCOUNTINGEXYZ' from emp2;
62440 rows created.

SQL> CREATE INDEX emp_ename2 ON emp2 (ename) INDEXTYPE IS CTXSYS.CONTEXT parameters ('lexer CTXSYS.matching_lexer');

Index created.

SQL> insert into emp2(empno,ename) select s1.nextval,'bola' from emp2 ;
124880 rows created.

SQL> commit;
Commit complete.

SQL> set autotrace traceonly explain statistics timing on
SQL> select e.*
2 from emp2 e, dept d
3 where CONTAINS(e.ename, '%'||d.dname||'%')>0 ;

124866 rows selected.

Elapsed: 00:00:01.23

Execution Plan
----------------------------------------------------------
Plan hash value: 4225699595

-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 682 | 135K| 146 (0)| 00:00:02 |
| 1 | NESTED LOOPS | | 682 | 135K| 146 (0)| 00:00:02 |
| 2 | TABLE ACCESS FULL | DEPT | 4 | 40 | 3 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| EMP2 | 170 | 32980 | 146 (0)| 00:00:02 |
|* 4 | DOMAIN INDEX | EMP_ENAME2 | | | 4 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("CTXSYS"."CONTAINS"("E"."ENAME",'%'||"D"."DNAME"||'%')>0)

Note
-----
- dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
750 recursive calls
0 db block gets
10354 consistent gets
5 physical reads
0 redo size
2295427 bytes sent via SQL*Net to client
91959 bytes received via SQL*Net from client
8326 SQL*Net roundtrips to/from client
26 sorts (memory)
0 sorts (disk)
124866 rows processed

SQL> /
124866 rows selected.
Elapsed: 00:00:00.84

SQL> drop index EMP_ENAME2;
Index dropped.

Elapsed: 00:00:02.93
1 select e.*
2 from emp2 e, dept d
3* where e.ename like '%'||d.dname||'%'
SQL>
SQL> /

124866 rows selected.

Elapsed: 00:00:01.01

Execution Plan
----------------------------------------------------------
Plan hash value: 4088618096

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 43131 | 8087K| 831 (2)| 00:00:10 |
| 1 | NESTED LOOPS | | 43131 | 8087K| 831 (2)| 00:00:10 |
| 2 | TABLE ACCESS FULL| DEPT | 4 | 40 | 3 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| EMP2 | 10783 | 1916K| 207 (2)| 00:00:03 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("E"."ENAME" LIKE '%'||"D"."DNAME"||'%')

Note
-----
- dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
178 recursive calls
0 db block gets
11239 consistent gets
0 physical reads
0 redo size
2295427 bytes sent via SQL*Net to client
91959 bytes received via SQL*Net from client
8326 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
124866 rows processed

SQL> /
124866 rows selected.
Elapsed: 00:00:01.01


SQL> insert into emp2(empno,ename) select s1.nextval,'z' from emp2;

49760 rows created.

Elapsed: 00:00:02.75

SQL> commit;
Commit complete.

Elapsed: 00:00:00.10
SQL> set autotrace off
SQL> insert into emp2(empno,ename) select s1.nextval,null from emp2;

499520 rows created.

Elapsed: 00:00:06.32
SQL> commit;
Commit complete.

Elapsed: 00:00:00.01
SQL> CREATE INDEX emp_ename2 ON emp2 (ename) INDEXTYPE IS CTXSYS.CONTEXT parameters ('lexer CTXSYS.matching_lexer');

Index created.

Elapsed: 00:00:22.06
SQL> select e.*
2 from emp2 e, dept d
3 where CONTAINS(e.ename, '%'||d.dname||'%')>0
4
SQL>
SQL> set autotrace traceonly explain statistics timing on
SQL>
SQL> /

124866 rows selected.

Elapsed: 00:00:00.86

Execution Plan
----------------------------------------------------------
Plan hash value: 4225699595

-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1852 | 368K| 356 (0)| 00:00:05 |
| 1 | NESTED LOOPS | | 1852 | 368K| 356 (0)| 00:00:05 |
| 2 | TABLE ACCESS FULL | DEPT | 4 | 40 | 3 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| EMP2 | 463 | 89822 | 356 (0)| 00:00:05 |
|* 4 | DOMAIN INDEX | EMP_ENAME2 | | | 4 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("CTXSYS"."CONTAINS"("E"."ENAME",'%'||"D"."DNAME"||'%')>0)

Note
-----
- dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
392 recursive calls
0 db block gets
10671 consistent gets
2 physical reads
0 redo size
2295427 bytes sent via SQL*Net to client
91959 bytes received via SQL*Net from client
8326 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
124866 rows processed

SQL> select e.*
2 from emp2 e, dept d
3 where e.ename like '%'||d.dname||'%' ;

124866 rows selected.

Elapsed: 00:00:01.42

Execution Plan
----------------------------------------------------------
Plan hash value: 4088618096

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 185K| 33M| 2103 (3)| 00:00:26 |
| 1 | NESTED LOOPS | | 185K| 33M| 2103 (3)| 00:00:26 |
| 2 | TABLE ACCESS FULL| DEPT | 4 | 40 | 3 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| EMP2 | 46308 | 8230K| 525 (3)| 00:00:07 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("E"."ENAME" LIKE '%'||"D"."DNAME"||'%')

Note
-----
- dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
5 recursive calls
0 db block gets
15761 consistent gets
0 physical reads
0 redo size
2295427 bytes sent via SQL*Net to client
91959 bytes received via SQL*Net from client
8326 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
124866 rows processed

SQL> /

124866 rows selected.

Elapsed: 00:00:01.42

SQL> select e.*
2 from emp2 e, dept d
3 where CONTAINS(e.ename, '%'||d.dname||'%')>0;

124866 rows selected.

Elapsed: 00:00:00.82

Execution Plan
----------------------------------------------------------
Plan hash value: 4225699595

-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1852 | 368K| 356 (0)| 00:00:05 |
| 1 | NESTED LOOPS | | 1852 | 368K| 356 (0)| 00:00:05 |
| 2 | TABLE ACCESS FULL | DEPT | 4 | 40 | 3 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| EMP2 | 463 | 89822 | 356 (0)| 00:00:05 |
|* 4 | DOMAIN INDEX | EMP_ENAME2 | | | 4 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("CTXSYS"."CONTAINS"("E"."ENAME",'%'||"D"."DNAME"||'%')>0)

Note
-----
- dynamic sampling used for this statement (level=2)

Statistics
----------------------------------------------------------
52 recursive calls
0 db block gets
10242 consistent gets
0 physical reads
0 redo size
2295427 bytes sent via SQL*Net to client
91959 bytes received via SQL*Net from client
8326 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
124866 rows processed

Clone the database and make a standby database using RMAN in Oracle 11g

CLONE DATABASE (duplicate database): Use the DUPLICATE command to create a copy of a source database. A duplicate database is a copy of the source database with a new DBID generated by RMAN. Because a duplicate database has a unique DBID, it is independent of the source database and can be registered in the same recovery catalog. The following are the steps of an example where the source (target) database and the clone (duplicate, or auxiliary instance) are on different machines.

#1. Create the Oracle password file in the clone database ORACLE_HOME
(by convention the file is named orapw<SID>):

orapwd file=$ORACLE_HOME/dbs/orapwprod password=<sys password>

#2. Take a backup of the source (target) database, including a control file autobackup.

#3. Make TNS entries for the source and the clone (also called auxiliary or duplicate) instance.

#4. Prepare the init file for the auxiliary instance.
# Minimum parameter file in $ORACLE_HOME/dbs/initprod.ora in the clone ORACLE_HOME:
DB_NAME=prod

If the source and clone database files have to be in different locations, then the optional conversion parameters are:
DB_FILE_NAME_CONVERT = ("+SOURCEDSK","+DUPDSK")
LOG_FILE_NAME_CONVERT = ("+SOURCEDSK","+DUPDSK")


#5. Create all directories required to start the clone (duplicate) database,
    such as the directories for DIAGNOSTIC_DEST, AUDIT_FILE_DEST, etc.

#6. Copy (ftp or scp) the backups taken in step #2 to the same locations on the duplicate (clone) database node.

#7. Start up the clone (auxiliary) instance in NOMOUNT mode on the clone database node.
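
A minimal sketch of this step, assuming the clone instance is also named prod, the init<SID>.ora from step #4 is in the default location, and the TNS alias names below are illustrative ones from step #3:

# on the clone node
export ORACLE_SID=prod
sqlplus / as sysdba
SQL> STARTUP NOMOUNT

# connect RMAN to the source (TARGET) and clone (AUXILIARY), then run the DUPLICATE command below
rman TARGET sys@prod_src AUXILIARY sys@prod_dup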

# Backup files are in matching location to that on the source server.
# Duplicate database to TARGET's current state.
DUPLICATE TARGET DATABASE TO prod
SPFILE
NOFILENAMECHECK;

# Duplicate database to TARGET's state 4 days ago.
DUPLICATE TARGET DATABASE TO DB11G
  UNTIL TIME 'SYSDATE-4'
  SPFILE
  NOFILENAMECHECK;

# Backup files are in a different location to that on the source server.
# Duplicate database to the most recent state possible using the provided backups.
# Works with just an AUXILIARY connection only.
DUPLICATE DATABASE TO DB11G
  SPFILE
  BACKUP LOCATION '/source/app/oracle/fast_recovery_area/DB11G'
  NOFILENAMECHECK;

STANDBY DATABASE: A standby database is a special copy of the source database (called the primary database in a Data Guard environment) that is updated by applying archived redo logs from the primary database. A standby database does not get a new DBID.
The steps for creating a standby database using RMAN are similar to the steps for creating a duplicate (clone) database.
To create a standby database with the DUPLICATE command, you must connect as TARGET to the primary database and specify the FOR STANDBY option.

A database in a Data Guard environment is uniquely identified by means of the DB_UNIQUE_NAME parameter in the initialization parameter file. The DB_UNIQUE_NAME must be unique across all the databases with the same DBID for RMAN to work correctly in a Data Guard environment.

Recovery of a Standby Database: By default, RMAN does not recover the standby database after creating it. RMAN leaves the standby database mounted, but does not place it in manual or managed recovery mode.

Use the DORECOVER option of the DUPLICATE command to specify that RMAN should recover the standby database. RMAN performs the following steps after creating the standby database files: 

1.RMAN begins media recovery. If recovery requires archived redo log files, and if the log files are not already on disk, then RMAN attempts to restore backups.

2. RMAN recovers the standby database to the specified time, system change number (SCN), or log file sequence number, or to the latest archived redo log file generated if none of the preceding are specified.

3. RMAN leaves the standby database mounted after media recovery is complete, but does not place the standby database in manual or managed recovery mode. 

Simple example:

DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER
  NOFILENAMECHECK;

NOFILENAMECHECK: prevents RMAN from checking whether the datafiles and online redo log files of the source database are in use when the source database files share the same names as the duplicate database files. You are responsible for determining that the duplicate operation will not overwrite useful data. This option is necessary when you are creating a duplicate database on a different host that has the same disk configuration, directory structure, and filenames as the host of the source database. For example, assume that you have a small database located in the /oracle/dbs directory of host1:
/oracle/dbs/system_prod1.dbf
/oracle/dbs/users_prod1.dbf
/oracle/dbs/rbs_prod1.dbf
Assume that you want to duplicate this database to host2, which has the same file system /oracle/dbs/*, and you want to use the same filenames in the duplicate database as in the source database. In this case, specify the NOFILENAMECHECK option to avoid an error message. Because RMAN is not aware of the different hosts, RMAN cannot determine automatically that it should not check the filenames.

If duplicating a database on the same host as the source database, then make sure that NOFILENAMECHECK is not set; otherwise, RMAN may signal an error.

Active Database Duplication: Beginning with Oracle 11g, RMAN can create duplicate databases directly over the network, without the need for a pre-existing backup; this is known as active database duplication. It requires the source database to be open in ARCHIVELOG mode (or mounted after a clean shutdown). Both the source and destination database servers require a tnsnames.ora entry for the destination database.
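
For illustration, the two tnsnames.ora entries might look like this (host names, port, and service names are purely hypothetical):

# tnsnames.ora on both servers
prod_src =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prod_src)))

prod_dup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prod_dup)))

Because the auxiliary instance is only started in NOMOUNT, it cannot register itself dynamically with the listener, so a static SID_LIST entry for it in the destination listener.ora is usually needed as well.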

Using the DUPLICATE Command to Create a Standby Database:
1. Creating a Standby Database with Active Database Duplication  (available from 11g)

DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  SPFILE
    SET "db_unique_name"="foou" COMMENT ''Is a duplicate''
    SET LOG_ARCHIVE_DEST_2="service=inst3 ASYNC REGISTER
     VALID_FOR=(online_logfile,primary_role)"
    SET FAL_CLIENT="inst3" COMMENT "Is standby"
    SET FAL_SERVER="inst1" COMMENT "Is primary"

  NOFILENAMECHECK;

2. Creating a Standby Database with Backup-Based Duplication

DUPLICATE TARGET DATABASE
  FOR STANDBY
  DORECOVER
  SPFILE
    SET "db_unique_name"="foou" COMMENT ''Is a duplicate''
    SET LOG_ARCHIVE_DEST_2="service=inst3 ASYNC REGISTER
     VALID_FOR=(online_logfile,primary_role)"
    SET FAL_CLIENT="inst3" COMMENT "Is standby"
    SET FAL_SERVER="inst1" COMMENT "Is primary"
  NOFILENAMECHECK;

Other examples:

Copying the Server Parameter File in Active Database Duplication

RMAN> CONNECT TARGET SYS@prod

target database Password: password
connected to target database: PROD1 (DBID=39525561)

RMAN> CONNECT AUXILIARY SYS@dup

auxiliary database Password: password
connected to auxiliary database: DUP1 (not mounted)

RMAN> DUPLICATE TARGET DATABASE TO dup
2> FROM ACTIVE DATABASE
3> PASSWORD FILE
4> SPFILE;

Setting New Filenames Manually for Duplication:

DUPLICATE TARGET DATABASE TO dup
  FOR STANDBY
  FROM ACTIVE DATABASE
  PASSWORD FILE
  SPFILE 
    PARAMETER_VALUE_CONVERT '/disk1', '/disk2'
    SET DB_FILE_NAME_CONVERT '/disk1','/disk2'
    SET LOG_FILE_NAME_CONVERT '/disk1','/disk2'
    SET SGA_MAX_SIZE 200M
    SET SGA_TARGET 125M;

References: Oracle doc 1; Oracle doc 2; Oracle-Base.