Knowledge is wealth. Share it!

There is no wealth like knowledge, and no poverty like ignorance.

01 December 2015

Script to find blocking locks in a PostgreSQL database.

SELECT bl.pid                 AS blocked_pid,
         a.usename              AS blocked_user,
         ka.query               AS blocking_statement,
         now() - ka.query_start AS blocking_duration,
         kl.pid                 AS blocking_pid,
         ka.usename             AS blocking_user,
         a.query                AS blocked_statement,
         now() - a.query_start  AS blocked_duration
  FROM  pg_catalog.pg_locks         bl
   JOIN pg_catalog.pg_stat_activity a  ON a.pid = bl.pid
   JOIN pg_catalog.pg_locks         kl ON kl.transactionid = bl.transactionid AND kl.pid != bl.pid
   JOIN pg_catalog.pg_stat_activity ka ON ka.pid = kl.pid
  WHERE NOT bl.GRANTED;
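If this needs to run unattended (e.g. from cron), one approach is to keep the query in a file and feed it to psql. A minimal sketch; the file path and connection flags are illustrative, and the query text is abbreviated to the key columns:

```shell
# Save the blocking-locks query to a file so cron can run it via psql.
# Path and flags are illustrative; the actual psql run is left commented out.
cat > /tmp/blocked_locks.sql <<'EOF'
SELECT bl.pid AS blocked_pid,
       a.usename AS blocked_user,
       kl.pid AS blocking_pid,
       ka.query AS blocking_statement
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a  ON a.pid = bl.pid
JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid
                           AND kl.pid != bl.pid
JOIN pg_catalog.pg_stat_activity ka ON ka.pid = kl.pid
WHERE NOT bl.granted;
EOF
wc -l < /tmp/blocked_locks.sql
# On a live cluster:  psql -X -d mydb -f /tmp/blocked_locks.sql
```

A session that shows up as blocking can then be cancelled with SELECT pg_cancel_backend(pid) or, more forcefully, pg_terminate_backend(pid).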

22 August 2015

Script to find locks in SQL Server

SET NOCOUNT ON
GO

-- Count the locks

IF EXISTS ( SELECT  Name
            FROM    tempdb..sysobjects
            WHERE   name LIKE '#Hold_sp_lock%' )
--If So Drop it
    DROP TABLE #Hold_sp_lock
GO
CREATE TABLE #Hold_sp_lock
    (
      spid INT,
      dbid INT,
      ObjId INT,
      IndId SMALLINT,
      Type VARCHAR(20),
      Resource VARCHAR(50),
      Mode VARCHAR(20),
      Status VARCHAR(20)
    )
INSERT  INTO #Hold_sp_lock
        EXEC sp_lock
SELECT  COUNT(spid) AS lock_count,
        SPID,
        Type,
        Cast(DB_NAME(DBID) as varchar(30)) as DBName,
        mode
FROM    #Hold_sp_lock
GROUP BY SPID,
        Type,
        DB_NAME(DBID),
        MODE
Order by lock_count desc,
        DBName,
        SPID,
        MODE

--Show any blocked or blocking processes

IF EXISTS ( SELECT  Name
            FROM    tempdb..sysobjects
            Where   name like '#Catch_SPID%' )
--If So Drop it
    DROP TABLE #Catch_SPID
GO
Create Table #Catch_SPID
    (
      bSPID int,
      BLK_Status char(10)
    )
GO
Insert  into #Catch_SPID
        Select Distinct
                SPID,
                'BLOCKED'
        from    master..sysprocesses
        where   blocked <> 0
        UNION
        Select Distinct
                blocked,
                'BLOCKING'
        from    master..sysprocesses
        where   blocked <> 0

DECLARE @tSPID int
DECLARE @blkst char(10)
SELECT TOP 1
        @tSPID = bSPID,
        @blkst = BLK_Status
from    #Catch_SPID
Order by bSPID

WHILE( @@ROWCOUNT > 0 )
    BEGIN

        PRINT 'DBCC Results for SPID ' + Cast(@tSPID as varchar(5)) + '( '
            + rtrim(@blkst) + ' )'
        PRINT '-----------------------------------'
        PRINT ''
        DBCC INPUTBUFFER(@tSPID)


        SELECT TOP 1
                @tSPID = bSPID,
                @blkst = BLK_Status
        from    #Catch_SPID
        WHERE   bSPID > @tSPID
        Order by bSPID

    END

Comparison of Oracle 9i and 10g / 11g

SQL Optimizer
  9i      : Both the Cost Based and Rule Based Optimizers can be used.
  10g/11g : The Cost Based Optimizer is the default; the Rule Based Optimizer is no longer supported.

Data Export and Import
  9i      : Done with the Export / Import utilities.
  10g/11g : Comes with the additional Oracle Data Pump feature.

Job Scheduling
  9i      : Done by DBMS_JOB.
  10g/11g : Done with DBMS_SCHEDULER.

Tablespace Point In Time Recovery (TSPITR)
  9i      : The auxiliary instance needs to be created manually.
  10g/11g : The auxiliary instance is created automatically by Recovery Manager (RMAN).

Database or tablespace rollback
  9i      : Done with a HOT / COLD backup of the database, and a recovery is required.
  10g/11g : The database or a tablespace can be rolled back by enabling the Flashback Database feature.

Backup retention periods
  9i      : Old backups are deleted manually or with the help of third-party backup software.
  10g/11g : Old backups can be deleted automatically by RMAN by setting the BACKUP RETENTION POLICY period.

Size of backup sets
  9i      : Backup set compression cannot be done.
  10g/11g : Backup sets can be reduced with RMAN compression.

Checkpoint and UNDO tuning
  9i      : These parameters need to be tuned manually based on the performance of the database; SQL work areas are sized via PGA_AGGREGATE_TARGET.
  10g/11g : Automatic tuning and the SQL Tuning Advisor are available.

Database performance monitoring and tuning
  9i      : Done with STATSPACK, interpreting the reports manually.
  10g/11g : The new ADDM (Automatic Database Diagnostic Monitor) feature can be used for collecting and analysing database statistics.

SGA and PGA tuning
  9i      : Done manually based on the workload; the Buffer Cache Advisory needs to be turned on to find the optimal buffer cache size.
  10g/11g : Adjusted automatically based on the workload.

Movement of data files
  9i      : Cannot be done.
  10g/11g : Available, and data files can be moved across platforms.

Segment Space Management
  9i      : Uses ASSM (Automatic Segment Space Management). Free space within blocks is managed by bitmaps stored in the bitmap blocks of the segment. This improves performance under concurrent DML because the operations can access different bitmap blocks, reducing contention on the segment header.
  10g/11g : ASSM is still available, and new features are built on it, e.g. qualifying segments for reclaiming free space and shrinking segments. These are useful for avoiding the space wasted by frequent DELETE operations.

Database reorg
  9i      : a) Move the tables within the tablespace or across tablespaces, or b) use Exp/Imp. The limitation of moving an object is that the new object is created first and the old object is dropped afterwards, so for a 10 GB table, 10 GB of free space must be present. Time is the major constraint for an Exp/Imp operation.
  10g/11g : No new object is created. The rows are physically moved to blocks with adequate free space, provided Automatic Segment Space Management is in use. This is less time consuming and needs no extra space.

Execute OS commands / job scheduling
  9i      : Use Java stored procedures to execute OS commands.
  10g/11g : No need to develop Java stored procedures; the new DBMS_SCHEDULER API has been introduced.

Rename tablespace
  9i      : Not available.
  10g/11g : A tablespace can be renamed.


21 August 2015

Backup and Restore of ASM instance

Introduction:

This document provides a high-level overview and the steps to restore a database stored in ASM.


Steps:

1.      Take a backup of the database, or use the last successful backup from tape / disk.

2.      Create the parameter file with the bare minimum parameters needed for the database to start up.

vi initTEST1.ora

*.audit_file_dest='/u01/app/oracle/admin/TEST1/adump'
*.audit_trail='db'
*.compatible='11.2.0.2.0'
*.control_files='+DATA','+FRA'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+DATA'
*.db_name='RAC'
*.diagnostic_dest='/u01/app/oracle'
*.log_archive_dest_1='LOCATION=+FRA'
*.log_archive_format='%t_%s_%r.dbf'
*.memory_target=1658847232
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
*.cluster_database=false

3.      Start the database in NOMOUNT stage for restore and recovery from the backup:

[oracle@MYHOST1 dbs]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Thu Aug 1 05:58:30 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> startup nomount pfile='initTEST1.ora'
ORACLE instance started.
Total System Global Area 1653518336 bytes
Fixed Size                  2213896 bytes
Variable Size             956303352 bytes
Database Buffers          687865856 bytes
Redo Buffers                7135232 bytes

SQL> exit

4.      Connect to Oracle Recovery Manager and restore the control file associated with the backup that is to be restored.

Connect to RMAN:
[oracle@MYHOST1 dbs]$ rman target /
Recovery Manager: Release 11.2.0.2.0 - Production on Thu Aug 1 05:59:28 2013
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: RAC (not mounted)


RMAN> restore controlfile from autobackup

Starting restore at 17-JUN-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=29 device type=DISK
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
output file name=+DATA/TEST1/controlfile/current.236.7445645
output file name=+FRA/TEST1/controlfile/current.235.7445645
Finished restore at 17-JUN-14

RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1


5.      Initiate a restore of the database once the control file is restored and the DB is in mount state as shown below.
RMAN> restore database;

Starting restore at 17-JUN-14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=29 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to +DATA/TEST1/system.356.1123822
channel ORA_DISK_1: restoring datafile 00004 to +DATA/TEST1/users.356.1223822
channel ORA_DISK_1: reading from backup piece
channel ORA_DISK_1: piece
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00003 to +DATA/rac/datafile/undotbs1.314.822277039
channel ORA_DISK_1: restoring datafile 00005 to +DATA/rac/datafile/undotbs2.315.822277039
channel ORA_DISK_1: reading from backup piece
channel ORA_DISK_1: piece
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00002 to +DATA/TEST1/sysaux.311.822277081
channel ORA_DISK_1: reading from backup piece
channel ORA_DISK_1: piece
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:36
Finished restore at 17-JUN-14

Recover the database
RMAN> recover database;

Starting recover at 17-JUN-14
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 8 is already on disk as file +FRA/TEST1/2014_06_17/thread_1_seq_8.412.822282953
archived log for thread 2 with sequence 6 is already on disk as file +FRA/TEST1/2014_06_17/thread_2_seq_6.413.822282955
archived log file name=+FRA/TEST1/2014_06_17/thread_1_seq_8.412.822282953 thread=1 sequence=8
archived log file name=+FRA/TEST1/2014_06_17/thread_2_seq_6.413.822282955 thread=2 sequence=6
archived log file name=+FRA/TEST1/2014_06_17/thread_1_seq_9.415.822287039 thread=1 sequence=9
archived log file name=+FRA/TEST1/2014_06_17/thread_2_seq_7.414.822286061 thread=2 sequence=7
unable to find archived log
archived log thread=2 sequence=8

6.      Open the database with resetlogs:
RMAN> alter database open resetlogs;
database opened

RMAN> exit
Recovery Manager complete.


7.      Start the database
SQL> startup
ORACLE instance started.
Total System Global Area 1068937216 bytes
Fixed Size                  2220200 bytes
Variable Size             616566616 bytes
Database Buffers          444596224 bytes
Redo Buffers                5554176 bytes
Database mounted.
Database opened.

8.      Once the DB is started, add the database and instance entries using the Server Control utility (srvctl) and manage it through that from then on.



15 December 2014

Steps to setup the streaming replication in postgres without PGPOOL

Step 1 : Log in to the primary and ensure the backup has completed.
Step 2 : Log in to the secondary and ensure no applications are connected. Note the hostname and the PGDATA location.
Step 3 : Enable passwordless SSH authentication between the primary and the secondary.
Step 4 : Go to the PGDATA location and execute the base_backup.sh script with the required parameters as shown below.

-bash-4.1$ echo $PGDATA
/store/pgdata
-bash-4.1$ ./base_backup.sh /store/pgdata lnxpg1b /store/pgdata
+ PGDATA=/store/pgdata
+ REMOTE_HOST=lnxpg1b
+ REMOTE_PGDATA=/store/pgdata
+ RECOVERY_LOG=/var/log/pgpool/recovery.log
+ DSDATA=/store/postgresql/data
+ SSH_OPTIONS='-T -o StrictHostKeyChecking=no'
+ log 'running backup as UID:26'
++ date
+ echo 'Tue Dec 16 04:43:13 EST 2014 running backup as UID:26'
+ log DATA=/store/pgdata
++ date
+ echo 'Tue Dec 16 04:43:13 EST 2014 DATA=/store/pgdata'
+ log RECOVERY_TARGET=lnxpg1b
++ date
+ echo 'Tue Dec 16 04:43:13 EST 2014 RECOVERY_TARGET=lnxpg1b'
+ log RECOVERY_DATA=/store/pgdata
++ date
+ echo 'Tue Dec 16 04:43:13 EST 2014 RECOVERY_DATA=/store/pgdata'
+ log 'on lnxpg1b: removing /store/pgdata and /store/postgresql/data'
++ date
+ echo 'Tue Dec 16 04:43:13 EST 2014 on lnxpg1b: removing /store/pgdata and /store/postgresql/data'
+ log 'on lnxpg1b: running pg_basebackup...'
++ date
+ echo 'Tue Dec 16 04:43:13 EST 2014 on lnxpg1b: running pg_basebackup...'
+ ssh -T -o StrictHostKeyChecking=no lnxpg1b '
rm -rf /store/pgdata /store/postgresql/data
/usr/bin/pg_basebackup -h lnxpg1a -D /store/pgdata -U postgres -v -P -x -c fast
ERRCODE=$?
rm -f /store/pgdata/postgresql.trigger /store/pgdata/recovery.done
exit $ERRCODE'
transaction log start point: 0/CD000028 on timeline 3
669922/669922 kB (100%), 7/7 tablespaces
transaction log end point: 0/CD0000F0
pg_basebackup: base backup completed
+ checkResult 0 '/usr/bin/pg_basebackup -h lnxpg1a -D /store/pgdata -U postgres -v -P -x -c fast'
+ '[' 0 -eq 0 ']'
+ log '/usr/bin/pg_basebackup -h lnxpg1a -D /store/pgdata -U postgres -v -P -x -c fast finished with exit code 0'
++ date
+ echo 'Tue Dec 16 04:43:54 EST 2014 /usr/bin/pg_basebackup -h lnxpg1a -D /store/pgdata -U postgres -v -P -x -c fast finished with exit code 0'
+ log 'on lnxpg1b: set up recovery.conf'
++ date
+ echo 'Tue Dec 16 04:43:54 EST 2014 on lnxpg1b: set up recovery.conf'
+ ssh -T -o StrictHostKeyChecking=no lnxpg1b '
cd /store/pgdata;
cat > recovery.conf << EOT
standby_mode = '\''on'\''
primary_conninfo = '\''host=lnxpg1a port=5432 user=postgres password=zantaz'\''
restore_command = '\''cp /store/postgresql/archives/%p %f'\''
trigger_file = '\''/store/pgdata/postgresql.trigger'\''
EOT
'

Step 5 : Once this is done, log in to the secondary database server as the postgres user. Check the recovery.conf file and ensure standby_mode is set to 'on'. Also ensure all the directories were copied from the primary to the secondary.
Step 6 : Start the database using the pg_ctl start command.
Step 7 : Test the replication by creating a test table on the primary and checking that it appears on the secondary.
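For reference, the recovery.conf that the backup script sets up on the standby looks roughly like this. A minimal sketch against a scratch directory; host, user and paths mirror the example values above. (Note: per the PostgreSQL docs, restore_command takes %f, the archived file name, copied to %p, the destination path.)

```shell
# Write a minimal recovery.conf for a 9.x streaming-replication standby.
# ./pgdata_demo stands in for the real PGDATA (/store/pgdata in the example).
PGDATA=./pgdata_demo
mkdir -p "$PGDATA"
cat > "$PGDATA/recovery.conf" <<'EOT'
standby_mode = 'on'
primary_conninfo = 'host=lnxpg1a port=5432 user=postgres'
restore_command = 'cp /store/postgresql/archives/%f %p'
trigger_file = '/store/pgdata/postgresql.trigger'
EOT
cat "$PGDATA/recovery.conf"
```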


[root@lnxpg1b store]# su - postgres
-bash-4.1$ /usr/pgsql-9.3/bin/pg_ctl start
server starting
-bash-4.1$ 2014-12-16 04:46:36 EST [21937]: [1-1] user=,db=LOG:  redirecting log output to logging collector process
2014-12-16 04:46:36 EST [21937]: [2-1] user=,db=HINT:  Future log output will appear in directory "pg_log".

-bash-4.1$
-bash-4.1$ ps -ef | grep wal
postgres 21946 21937  0 04:46 ?        00:00:00 postgres: wal receiver process
postgres 21948 21902  0 04:46 pts/0    00:00:00 grep wal
-bash-4.1$ psql
psql (9.3.5)
Type "help" for help.

postgres=# \q
-bash-4.1$ psql
psql (9.3.5)
Type "help" for help.

postgres=# SELECT pg_is_in_recovery()
postgres-#
postgres-#
postgres-# ;
 pg_is_in_recovery
-------------------
 t
(1 row)

postgres=# \q
-bash-4.1$ ps -ef | grep wal
postgres 21946 21937  0 04:46 ?        00:00:00 postgres: wal receiver process
postgres 21956 21902  0 04:48 pts/0    00:00:00 grep wal
-bash-4.1$ psql
psql (9.3.5)
Type "help" for help.

postgres=# \c dbac
You are now connected to database "dbac" as user "postgres".
dbac=# create table kt_test (eno int);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
dbac=# \d kt_test
    Table "public.kt_test"
 Column |  Type   | Modifiers
--------+---------+-----------
 eno    | integer |

dbac=# select * from kt_test;
 eno
-----
   1
(1 row)

dbac=#

01 September 2014

Please use the below script to find the most expensive dynamic SQL statements in a DB2 database, ordered by total execution time and number of executions ('dsv' here is the database name).

db2 "select TOTAL_EXEC_TIME, NUM_EXECUTIONS, substr (STMT_TEXT, 1,500) \
as statement from table(SNAPSHOT_DYN_SQL('dsv', -1)) as \
snap_dyn_sql order by 1 desc,2 desc"

09 August 2014

How to edit the welcome message shown when you SSH to a server. There are basically two methods:

1. Edit /etc/motd and update the message as required.
2. Edit /etc/ssh/sshd_config and change the PrintLastLog setting to "no".
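As a concrete sketch of the second method, run here against a scratch copy of the config (on a real server you would edit /etc/ssh/sshd_config as root and then reload sshd):

```shell
# Demonstrate flipping PrintLastLog on a scratch copy of sshd_config.
conf=./sshd_config.demo
printf '%s\n' 'Port 22' 'PrintLastLog yes' > "$conf"
# On a real host this would be: sed -i 's/^PrintLastLog.*/PrintLastLog no/' /etc/ssh/sshd_config
sed -i 's/^PrintLastLog.*/PrintLastLog no/' "$conf"
cat "$conf"
# Then reload the daemon, e.g.: systemctl reload sshd
```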

Hope this helps. :)

07 August 2014

The below script can be used to check the backup timings for all the databases in a SQL Server instance.

DECLARE @dbname sysname
SET @dbname = NULL --set this to be whatever dbname you want
SELECT bup.user_name AS [User],
 bup.database_name AS [Database],
 bup.server_name AS [Server],
 bup.backup_start_date AS [Backup Started],
 bup.backup_finish_date AS [Backup Finished]
 ,CAST((CAST(DATEDIFF(s, bup.backup_start_date, bup.backup_finish_date) AS int))/3600 AS varchar) + ' hours, '
 + CAST(((CAST(DATEDIFF(s, bup.backup_start_date, bup.backup_finish_date) AS int))%3600)/60 AS varchar) + ' minutes, '
 + CAST((CAST(DATEDIFF(s, bup.backup_start_date, bup.backup_finish_date) AS int))%60 AS varchar) + ' seconds'
 AS [Total Time]
FROM msdb.dbo.backupset bup
WHERE bup.backup_set_id IN
  (SELECT MAX(backup_set_id) FROM msdb.dbo.backupset
  WHERE database_name = ISNULL(@dbname, database_name) --if no dbname, then return all
  AND type = 'D' --only interested in the time of last full backup
  GROUP BY database_name) 
ORDER BY bup.database_name;
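For reference, the [Total Time] column is just integer arithmetic on the elapsed seconds: hours = s/3600, minutes past the hour = (s%3600)/60, leftover seconds = s%60. The same split in shell, with an illustrative value:

```shell
# Break a duration in seconds into hours, minutes past the hour, and
# leftover seconds, mirroring the DATEDIFF arithmetic in the query above.
secs=3725
hours=$(( secs / 3600 ))
mins=$(( (secs % 3600) / 60 ))
rem=$(( secs % 60 ))
echo "${hours} hours, ${mins} minutes, ${rem} seconds"
# prints: 1 hours, 2 minutes, 5 seconds
```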

If you want to find the percent completion of operations like backup, use the below query.

SELECT      command, percent_complete,
            'elapsed' = total_elapsed_time / 60000.0,
            'remaining' = estimated_completion_time / 60000.0
FROM        sys.dm_exec_requests
WHERE       command like 'BACKUP%'

04 August 2014

64-bit Upgrade of an Oracle Server

Oracle software comes in 64-bit and 32-bit versions. 32-bit Oracle is limited to addressing 4 GB of physical memory. The main reason for the upgrade is to overcome this limitation of the 32-bit addressing scheme and to improve the scalability of the database server. This document explains the step-by-step process of upgrading a database from 32-bit to 64-bit binaries.

Step 1: Determine the bit version of the Oracle binaries.

The bit version of the Oracle binaries can be checked using the file command on UNIX.

For 32-bit binaries it will produce output like this:
cd $ORACLE_HOME/bin
$ file oracle
oracle:         ELF 32-bit MSB executable SPARC Version 1, dynamically linked, not stripped

and for 64-bit binaries the output will look like this:

oracle:         ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not stripped

Step 2: Download the Oracle binaries for the upgrade process.

2.1 Download the 9.2.0.8 patch set from Metalink and FTP it to the target server.
We have Oracle 9.2.0.6.0; to mitigate errors that might occur during the upgrade, it is recommended to apply the 9.2.0.8 patch set (the latest patch set is 9.2.0.8).

2.2 Download the 64-bit base version and the 9.2.0.8 patch set.
Download the 9.2.0.1 base version and install 64-bit Oracle in a predetermined Oracle home. After this, apply the new 9.2.0.8 patch set to the 64-bit home. Installing and patching the 64-bit Oracle will not affect the availability of the instance, so this activity can be completed well in advance.
      
Step 3: Prepare the OS to address a larger SGA.

Change the kernel parameters so Oracle can allocate a large SGA. shmmax should be greater than SGA_MAX_SIZE; otherwise the OS will create multiple shared memory segments for Oracle, which may result in suboptimal performance.

set shmsys:shminfo_shmmax=12884901888   (12 GB; max RAM is 24 GB. This can be tuned after the release based on the requirement.)
set semsys:seminfo_semmns=5000
set semsys:seminfo_semmni=256
set semsys:seminfo_semopm=256


Kernel parameters can be changed by editing the /etc/system file (on Solaris) as root. After changing the values, the server needs to be restarted for them to take effect.
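A sketch of that edit against a scratch copy (on a real Solaris host the file is /etc/system, edited as root and followed by a reboot; the shminfo_/seminfo_ spellings are the standard /etc/system parameter names, with the values from above):

```shell
# Append the kernel settings to a scratch copy of /etc/system.
# On a real Solaris host, edit /etc/system itself as root, then reboot.
sys=./system.demo
: > "$sys"
cat >> "$sys" <<'EOF'
set shmsys:shminfo_shmmax=12884901888
set semsys:seminfo_semmns=5000
set semsys:seminfo_semmni=256
set semsys:seminfo_semopm=256
EOF
grep -c '^set ' "$sys"
# prints: 4
```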


Step 4: Back up the database.

            In this section we take both a logical and a physical backup of the database; these will be required if the whole activity needs to be rolled back.

4.1 Take a full export dump if feasible. If that would take too long, export only the full structure of the database.
4.2 Create a backup of the control file in binary and ASCII format:

           SQL> alter database backup controlfile to 'absolutefilepath';   -- binary backup
           SQL> alter database backup controlfile to trace;                -- ASCII format
Copy the ASCII version from user_dump_dest.

4.3 Create a list of the invalid objects. This list needs to be compared with the invalid objects after the upgrade, and any discrepancy at that time should be addressed. Spool the output of the SQL below.

SQL> select owner, object_name, object_type, status from dba_objects where status <> 'VALID';
                                                   

4.4 Save the list of jobs in the database and their intervals, as these jobs may go into a broken state during the migration process.

4.5 Prepare a script for the database links by logging in as SYS, since the database has db links pointing to a Sybase database. As this is 9i, the db link passwords can be obtained by querying sys.link$.

4.6 Check the components installed in the target database. These components need to be re-validated after the 64-bit upgrade.

SQL> select comp_id, comp_name, version, status from dba_registry;

Step 5: Patch the 32-bit Oracle home.

  5.1. Check shared_pool_size and shared_pool_reserved_size in the database and set them above 150 MB.

  5.2. Apply the 9.2.0.8 patch set on top of the 9.2.0.6 Oracle home.
          The patch set can be applied either through the GUI or from the command line using response files. For the GUI, a VNC server needs to be started first; then run the Oracle installer: ./runInstaller.


5.3. After patching, the Oracle migration scripts need to be executed from SQL*Plus.

Shut down the database and the listener:

          LSNRCTL> STOP     (the default listener is configured for this DB)
SQL> startup migrate
SQL> spool patch.log
SQL> @?/rdbms/admin/catpatch.sql
SQL> shutdown
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql   -- compiles the invalid objects
Then check the invalid object count.

Note: Monitor the alert log during startup and shutdown.


Step 6: Start the database from the 64-bit Oracle home.

6.1 Copy the init file, spfile and password file into the new $ORACLE_HOME/dbs.

6.2 Copy listener.ora and tnsnames.ora to the new $ORACLE_HOME/network/admin.

6.3 Copy the init files for the gateways to their respective locations. All the Sybase gateway files should be copied from the old $ORACLE_HOME/tg4sybs/admin to the new $ORACLE_HOME/tg4sybs/admin.

6.4 Check for ORACLE_HOME variables in .profile, change them wherever applicable, and also check the Oracle database links, including the Sybase connectivity.

6.5 Add the variable LD_LIBRARY_PATH=$ORACLE_HOME/lib32 in .profile.

      6.6. Make the following changes to init.ora:
                 aq_tm_processes=0
                 job_queue_processes=0
                 _system_trig_enabled=FALSE
             These settings are recommended by Oracle during the upgrade process.

      6.7. Double the following memory parameters:
            SHARED_POOL_SIZE
            SHARED_POOL_RESERVED_SIZE
            LARGE_POOL_SIZE
           These need to be doubled because internal Oracle structures use pointers that are twice the size in 64-bit binaries (as with pointer types in C).




Parameter name               Current Value (Bytes)   Resized Value (Bytes)
SHARED_POOL_SIZE             15000000                30000000
SHARED_POOL_RESERVED_SIZE    436207616               872415232
LARGE_POOL_SIZE              67108864                134217728

  6.8. Start up the database using the pfile:

     SQL> startup migrate pfile='$ORACLE_HOME/dbs/initMNDP.ora'

     SQL> @?/rdbms/admin/catalog.sql

   6.9 Check the validity of the DBMS_STANDARD package:

 SQL> select status from dba_objects where object_name='DBMS_STANDARD' and object_type='PACKAGE' and owner='SYS';

    If the package is invalid recompile it.

   SQL> alter package dbms_standard compile;

6.10 Run the following scripts:

 SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql

   After running this script, check for invalid objects:

 SQL> select owner, object_name, object_type from dba_objects
         where status <> 'VALID';


    Recompile any invalid objects to avoid problems while running the utlirp.sql script.
The CATPROC and CATALOG components should be in a VALID state; if either is invalid, ORA-600 errors will be logged in the alert log. These components can be made valid by executing catproc.sql for CATPROC and catalog.sql for CATALOG.
        6.11 Set the system to spool results to a log file for later verification of success:

   
        SQL> SPOOL catoutw.log

        If you want to see the output of the script you will run on your screen,
        then you can also issue a SET ECHO ON statement:

        SQL> SET ECHO ON
Run utlirp.sql:
SQL> @$ORACLE_HOME/rdbms/admin/utlirp.sql

    The utlirp.sql script recompiles existing PL/SQL modules in the format required by the new database. This script first alters certain dictionary tables. Then, it reloads package STANDARD and DBMS_STANDARD,  which are necessary for using PL/SQL. Finally, it triggers a recompile of all PL/SQL modules, such as packages, procedures, types, and so on.
            Turn off the spooling of script results to the log file:
       SQL> SPOOL OFF
       
Then, check the spool file and verify that the packages and procedures compiled successfully. The suggested spool file name above was catoutw.log. Correct any problems you find in this file.

    If you specified SET ECHO ON, then you may want to SET ECHO OFF now:

    SQL> SET ECHO OFF
6.12. Remove these from the pfile

       aq_tm_processes=0
      job_queue_processes=0
      _system_trig_enabled=false


6.13. The SGA and memory-related parameters need to be checked and tuned after the upgrade based on the requirements:

shared_pool_size=600m  (doubled from the current 304m)
db_cache_size=3500m
db_keep_cache_size=500m
db_recycle_cache_size=500m
sga_max_size=6000m
pga_aggregate_target=3000m
log_buffer=4096000  (the current 20 MB is way above normal and may cause log file sync waits at times)

6.14. Bounce the DB and release it to the application team for testing.

    The DBA can verify the change with select address from v$sql: addresses are 16 characters long in 64-bit, compared with 8 characters in 32-bit.