migration

Migrating from minicomputers to x86: an old topic revisited

July 17, 2014 migration, oracle, system, unix No comments

You can view the presentation by clicking here: x86_architect
Note: open the file with Chrome, otherwise it may display as garbled text.

A brief discussion of migrating LOB columns

October 31, 2012 migration, oracle No comments

I recently migrated a product-description system that involved a large number of LOB tables. Storing this kind of descriptive text in Oracle LOB columns is actually not recommended; anyone who has worked with LOBs has probably run into all sorts of problems.
Direct LOB reads and writes put heavy pressure on the storage layer, and a sudden burst of concurrency can, in severe cases, bring the system down, as anyone who has maintained a flash-sale system knows all too well.

A few lessons from this migration: for descriptive-text applications (books, pharmaceuticals and the like), consider the file system first and have the database hold only a pointer to the content; a NoSQL store (MongoDB is recommended) is the next option.
For this particular migration, the following approaches were evaluated:
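A minimal sketch of the pointer-based design, assuming a hypothetical PRODUCT_DESC_FILE table: the database keeps only a path (or object-store URL) plus metadata, while the descriptive text itself lives on the file system or in MongoDB.

-- Hypothetical illustration only: store a pointer to the text instead of the text itself
CREATE TABLE product_desc_file (
  product_id  NUMBER PRIMARY KEY,
  file_path   VARCHAR2(512) NOT NULL,   -- e.g. an NFS path or object-store URL
  char_length NUMBER,                   -- optional metadata about the description
  updated_at  DATE DEFAULT SYSDATE
);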

The example table:

[oracle@db-2-16 ~]$ ora bigtable

OWNER TABLE_NAME SIZE_MB
—————————— —————————— ————
PROD_DATA2 PRODUCT_DESCRIPTION 78,165

This is a nearly 80 GB LOB table (the PRODUCT_DESCRIPTION column is an NCLOB).

SQL> select segment_name from dba_lobs where table_name='PRODUCT_DESCRIPTION';

SEGMENT_NAME
——————————
SYS_LOB0000017343C00008$$

SQL> select sum(bytes)/1024/1024 from dba_segments where segment_name='SYS_LOB0000017343C00008$$';

SUM(BYTES)/1024/1024
——————–
74117

The LOB segment alone is about 74 GB.

The first method considered for a table like this was Oracle GoldenGate.

The idea was to extract just this table with OGG; since the migration was 10g to 11g, OGG seemed like a natural fit. Testing showed the OGG initial load to be extremely slow (an old problem), so it was abandoned. DDS and DSG were tested the same way and were just as slow.

Method 2: expdp with parallel 8 was estimated to take more than 5 hours, so it was abandoned; exp with a primary-key query (MOD(pk, 8) slices) was estimated at more than 8 hours and abandoned as well.
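For reference, the two export variants tested looked roughly like the following sketch; the directory object, dump file names, and the PRODUCT_ID column are assumptions for illustration, not taken from the actual run.

# Data Pump export of the single LOB table with 8 parallel workers
expdp system/*** directory=DUMP_DIR dumpfile=prod_desc_%U.dmp logfile=prod_desc.log \
      tables=PROD_DATA2.PRODUCT_DESCRIPTION parallel=8

# Classic exp split by primary key: one of 8 slices, run 8 sessions with mod values 0..7
exp system/*** file=prod_desc_0.dmp log=prod_desc_0.log \
    tables=PROD_DATA2.PRODUCT_DESCRIPTION query=\"where mod(product_id,8)=0\"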

Method 3: CTAS over a database link. The advantage is that it saves the transfer step, since the data is inserted directly on the target side. Note one restriction: you cannot simply run a parallel CREATE TABLE AS SELECT here. As discussed with Kamus, parallel DDL does not support tables with LOB columns:

DDL statements

Some examples are CREATE TABLE AS SELECT, CREATE INDEX, REBUILD INDEX, REBUILD INDEX PARTITION, and MOVE/SPLIT/COALESCE PARTITION.

You can normally use parallel DDL where you use regular DDL. There are, however, some additional details to consider when designing your database.

One important restriction is that parallel DDL cannot be used on tables with object or LOB columns.

All of these DDL operations can be performed in NOLOGGING mode for either parallel or serial execution.

The CREATE TABLE statement for an index-organized table can be parallelized either with or without an AS SELECT clause.

Different parallelism is used for different operations. Parallel create (partitioned) table as select and parallel create (partitioned) index run with a degree of parallelism equal to the number of partitions.

Instead, the primary key was split into 20 ranges and the rows were loaded with manual INSERT /*+ APPEND */ statements run concurrently, one per range. The estimated time for this approach was about 2 hours, which was acceptable.
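A sketch of what one of the 20 concurrent jobs might look like, assuming PRODUCT_ID is the primary key and REMOTE_DB is the database link back to the source (names and range boundaries are illustrative):

-- one of 20 sessions, each loading a disjoint PRODUCT_ID range via direct-path insert
INSERT /*+ APPEND */ INTO prod_data2.product_description
SELECT *
  FROM prod_data2.product_description@remote_db
 WHERE product_id >= 1 AND product_id < 500000;
COMMIT;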

Method 4: fast migration with a prebuilt-table materialized view. This is the cheapest and least stressful option (unless, of course, your OGG licence comes for free). For this method see 楚天's article:
利用prebuild MV快速迁移跨平台数据库实施及其总结
In testing, the complete refresh took nearly 8 hours. Once the initial load is done, however, the cutover is greatly simplified: all that remains is to drop the MV.
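The basic pattern is sketched below, assuming the target table has already been created with the same name and structure (for example via CTAS over the dblink with WHERE 1=0); REMOTE_DB and the refresh calls are illustrative.

-- turn the empty target table into an MV on a prebuilt table
CREATE MATERIALIZED VIEW prod_data2.product_description
  ON PREBUILT TABLE
  REFRESH FAST ON DEMAND            -- fast refresh requires an MV log on the source table
  AS SELECT * FROM prod_data2.product_description@remote_db;

EXEC DBMS_MVIEW.REFRESH('PROD_DATA2.PRODUCT_DESCRIPTION', 'C');  -- initial complete refresh

-- at cutover: stop changes, run a final fast refresh, then drop the MV and keep the table
EXEC DBMS_MVIEW.REFRESH('PROD_DATA2.PRODUCT_DESCRIPTION', 'F');
DROP MATERIALIZED VIEW prod_data2.product_description;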

Method 5, the one we finally adopted: extract directly from PCIe flash. Flash cards are already widely used in the company. Concretely, we built a physical Data Guard standby whose files sit on Fusion-io, activated that standby during the migration downtime, and exported the table with expdp. Surprisingly, the export took only 20 minutes; impressive, Fusion-io!
We also tested Huawei's high-speed SSD, which did reasonably well at around 40 minutes. The advantage of this approach is that a physical standby is extremely reliable, so data integrity is guaranteed right up to the migration (OGG or an MV can lose data), which removes the need for a table-level comparison afterwards; a 20-minute runtime is perfectly acceptable.
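Roughly what the cutover on that standby looks like, as a sketch (the directory object and dump file names are placeholders):

-- on the Fusion-io standby, at the downtime window
SQL> alter database recover managed standby database cancel;
SQL> alter database activate physical standby database;
SQL> alter database open;

$ expdp system/*** directory=DUMP_DIR dumpfile=prod_desc_%U.dmp tables=PROD_DATA2.PRODUCT_DESCRIPTION parallel=8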

That covers the main options for migrating LOB columns. In the end, how LOBs are used is really a design question; with big data on the horizon, applying the right technology in the right place is what matters most.

Exadata migration tips

August 7, 2012 Exadata, migration, oracle No comments

The Exadata migration is complete. We used a physical Data Guard switchover, building the standby with 11g active duplicate. The database is about 2 TB and the full copy took roughly 10 hours, close to 60 MB/s.
The advantages of this approach (a sketch of the duplicate and switchover commands follows the list):
1. No downtime on the primary.
2. The Exadata best-practice AU_SIZE=4M can be used.
3. The copy does not generate concentrated I/O spikes, although running it at night is still recommended.
4. After the switchover, the original database can simply be kept as the standby.
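A sketch of the core step, run from the Exadata side; the TNS aliases edw_prim (source) and edw_exa (auxiliary) are assumptions, and password files plus a static listener entry for the auxiliary instance are assumed to be in place.

RMAN> connect target sys/***@edw_prim
RMAN> connect auxiliary sys/***@edw_exa
RMAN> duplicate target database for standby from active database nofilenamecheck dorecover;

-- once the standby has caught up, swap roles
SQL> alter database commit to switchover to physical standby;   -- on the current primary
SQL> alter database commit to switchover to primary;            -- on the Exadata standby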

Once that is done, we can check the cluster status:

[grid@dm01db01 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA_DM01.dg
               ONLINE  ONLINE       dm01db01                                     
               ONLINE  ONLINE       dm01db02                                     
ora.DBFS_DG.dg
               ONLINE  ONLINE       dm01db01                                     
               ONLINE  ONLINE       dm01db02                                     
ora.LISTENER.lsnr
               ONLINE  ONLINE       dm01db01                                     
               ONLINE  ONLINE       dm01db02                                     
ora.RECO_DM01.dg
               ONLINE  ONLINE       dm01db01                                     
               ONLINE  ONLINE       dm01db02                                     
ora.asm
               ONLINE  ONLINE       dm01db01                 Started             
               ONLINE  ONLINE       dm01db02                 Started             
ora.gsd
               OFFLINE OFFLINE      dm01db01                                     
               OFFLINE OFFLINE      dm01db02                                     
ora.net1.network
               ONLINE  ONLINE       dm01db01                                     
               ONLINE  ONLINE       dm01db02                                     
ora.ons
               ONLINE  ONLINE       dm01db01                                     
               ONLINE  ONLINE       dm01db02                                     
ora.registry.acfs
               ONLINE  ONLINE       dm01db01                                     
               ONLINE  ONLINE       dm01db02                                     
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       dm01db02                                     
ora.cvu
      1        ONLINE  ONLINE       dm01db02                                     
ora.dm01db01.vip
      1        ONLINE  ONLINE       dm01db01                                     
ora.dm01db02.vip
      1        ONLINE  ONLINE       dm01db02                                     
ora.edw1.db
      1        ONLINE  ONLINE       dm01db01                 Open                
      2        ONLINE  ONLINE       dm01db02                 Open                
ora.oc4j
      1        ONLINE  ONLINE       dm01db02                                     
ora.scan1.vip
      1        ONLINE  ONLINE       dm01db02                                     
[grid@dm01db01 ~]$ 


SQL> /

GROUP_NUMBER NAME							  VALUE
------------ ------------------------------------------------------------ ------------------------------------------------------------
	   1 access_control.enabled					  FALSE
	   1 access_control.umask					  066
	   1 au_size							  4194304
	   1 cell.smart_scan_capable					  TRUE
	   1 compatible.asm						  11.2.0.2.0
	   1 compatible.rdbms						  11.2.0.2.0
	   1 disk_repair_time						  3.6h
	   1 idp.boundary						  auto
	   1 idp.type							  dynamic
	   1 sector_size						  512
	   1 template.ARCHIVELOG.mirror_region				  0

As you can see, au_size=4194304: Oracle's recommended best AU size for Exadata. For more on the 4M value, see this article: exadata AU_SIZE
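Note that AU_SIZE is fixed when a disk group is created and cannot be changed afterwards, so the 4M value has to be set up front, roughly as in this sketch (disk name patterns are placeholders):

CREATE DISKGROUP data_dm01 NORMAL REDUNDANCY
  DISK 'o/*/DATA_CD_*_dm01cel01', 'o/*/DATA_CD_*_dm01cel02', 'o/*/DATA_CD_*_dm01cel03'
  ATTRIBUTE 'au_size'                 = '4M',
            'cell.smart_scan_capable' = 'TRUE',
            'compatible.asm'          = '11.2.0.2.0',
            'compatible.rdbms'        = '11.2.0.2.0';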

The chart in that article shows that a 4M AU size gives a noticeable improvement in Exadata smart scan I/O. Testing also showed that 11g active duplicate has no effect on "cell physical IO bytes saved during optimized RMAN file restore":

SQL> SELECT name, value/1024/1024 MB from v$sysstat a WHERE
  2  a.name = 'physical read total bytes' OR
a.name = 'physical write total bytes' OR
a.name = 'cell physical IO interconnect bytes' OR
a.name = 'cell physical IO bytes eligible for predicate offload' OR
a.name = 'cell physical IO bytes saved during optimized file creation' OR
a.name = 'cell physical IO bytes saved during optimized RMAN file restore' OR
a.name = 'cell IO uncompressed bytes' OR
a.name = 'cell physical IO interconnect bytes returned by smart scan' OR
a.name = 'cell physical IO bytes saved by storage index';  3    4    5    6    7    8    9   10  

NAME									 MB
---------------------------------------------------------------- ----------
physical read total bytes					 2043013.32
physical write total bytes					 187975.958
cell physical IO interconnect bytes				 1497804.22
cell physical IO bytes saved during optimized file creation		  0
cell physical IO bytes saved during optimized RMAN file restore 	  0
cell physical IO bytes eligible for predicate offload		 1042550.99
cell physical IO bytes saved by storage index			 127372.633
cell physical IO interconnect bytes returned by smart scan	 119367.207
cell IO uncompressed bytes					 915887.641

The cell-related statistics exposed in v$sysstat are described below.

Statistic Description

cell flash cache read hits

The number of read requests that were a cache hit on exadata flash cache.

cell IO uncompressed bytes

The total size of uncompressed data that is processed on the cell. For scan on hybrid-columnar-compressed tables, this statistic is the size of data after decompression.

cell physical IO interconnect bytes returned by smart scan

The number of bytes that are returned by the cell for Smart Scan only, and does not include bytes for other database I/O.

cell physical IO bytes saved by storage index

The number of bytes saved by storage index.

cell physical IO bytes eligible for predicate offload

The total number of I/O bytes processed with physical disks when processing was offloaded to the cell.

cell physical IO bytes sent directly to DB node to balance CPU usage

The number of I/O bytes sent back to the database server for processing due to CPU usage on Exadata Cell.

cell physical IO bytes saved during optimized file creation

The number of I/O bytes saved by the database host by offloading the file creation operation to cells. This statistic shows the Exadata Cell benefit due to optimized file creation operations.

cell physical IO bytes saved during optimized RMAN file restore

The number of I/O bytes saved by the database host by offloading the RMAN file restore operation to cells. This statistic shows the Exadata Cell benefit due to optimized RMAN file restore operations.

cell physical IO interconnect bytes

The number of I/O bytes exchanged over the interconnection between the database host and cells.

physical read requests optimized

Total number of read requests satisfied either by using Exadata Smart Flash Cache or storage index.

physical read total bytes

Total amount of I/O bytes for reads processed with physical disks. This includes when processing was offloaded to the cell and when processing was not offloaded.

physical read total bytes optimized

Total number of bytes read from Exadata Smart Flash Cache or storage index.

physical write total bytes

Total amount of I/O bytes for writes processed with physical disks. This includes when processing was offloaded to the cell and when processing was not offloaded.

Oracle also documents several approaches for Exadata migration, for example:

Tip:

ASM Online Migration: This method is only applicable if your database is already using ASM and you do not need to adjust the ASM allocation unit (AU) size. 
To use this method you must also be able to connect your current database storage to Database Machine and migrate the database instances to Database 
Machine. After migrating the database instances, the data migration is very simple, just add new Exadata-based grid disks to your ASM disk groups and drop 
existing disks from your ASM disk groups to migrate the data.

There are of course many other migration options: OGG, DSG, even DDS. One point worth noting about ASM Online Migration: although the AU_SIZE is not strictly required to match (the Exadata best practice is 4M), migrating between disk groups with different AU sizes will affect performance after the migration.
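The add/drop itself is a single rebalance per disk group; a sketch, with placeholder disk names:

-- add the Exadata grid disks and drop the legacy disks in one rebalance operation
ALTER DISKGROUP data
  ADD DISK 'o/*/DATA_CD_*_dm01cel01', 'o/*/DATA_CD_*_dm01cel02'
  DROP DISK legacy_disk01, legacy_disk02
  REBALANCE POWER 8;

-- progress can be tracked in V$ASM_OPERATION
SELECT group_number, operation, state, est_minutes FROM v$asm_operation;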

Cloning an Oracle Home

March 2, 2012 migration, oracle 1 comment

Cloning an Oracle Home


Cloning an Oracle home involves creating a copy of the Oracle home and then configuring it for a new environment. If you are performing multiple Oracle Database installations, then you may want to use this method to create each Oracle home, because copying files from an existing Oracle Database installation takes less time than creating a new version of them. This method is also useful if the Oracle home that you are cloning has had patches applied to it. When you clone the Oracle home, the new Oracle home has the patch updates.

When you clone Oracle homes using release 11.2 Database Control, you must update the exclude file list. This file list specifies files that need not be included when the source Oracle home is archived because these files are not required for the cloning operation. Do not include the following files in the archive:

sqlnet.ora
tnsnames.ora
listener.ora
oratab

Note:
In addition to cloning an Oracle home, you can clone individual Oracle Database installations by using Oracle Enterprise Manager Database Control. Oracle Database Administrator’s Guide provides detailed information about cloning Oracle Database installations and Oracle homes.
This appendix includes information about the following topics:

Cloning an Oracle Home
Configuring Oracle Configuration Manager in a Cloned Oracle Home
B.1 Cloning an Oracle Home

Perform the following to clone an Oracle home:

Verify that the installation of Oracle Database to clone is successful.

You can do this by reviewing the installActions<date_time>.log file for the installation session, which is typically located in the /orainventory_location/logs directory.

If you install patches, then check their status using the following:

$ cd $ORACLE_HOME/OPatch
Include $ORACLE_HOME/OPatch in $PATH

$ opatch lsinventory
Stop all processes related to the Oracle home. See Chapter 7, “Removing Oracle Database Software” for more information about stopping the processes for an Oracle home.

Create a ZIP file with the Oracle home (but not the Oracle base) directory.

For example, if the source Oracle installation is in the /u01/app/oracle/product/11.2.0/dbhome_1, then you zip the dbhome_1 directory by using the following command:

# zip -r dbhome_1.zip /u01/app/oracle/product/11.2.0/dbhome_1
Do not include the admin, fast_recovery_area, and oradata directories that are under the Oracle base directory. These directories are created in the target installation later, when you create a new database there.
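For illustration, the archive step can also exclude the local network configuration files, which are deleted on the target later anyway; the exclusion pattern below is an assumption about a standard home layout, not part of the original procedure.

# illustrative only: skip network/admin/*.ora while archiving the home
zip -r dbhome_1.zip /u01/app/oracle/product/11.2.0/dbhome_1 -x "*/network/admin/*.ora"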

Copy the ZIP file to the root directory of the target computer.

Extract the ZIP file contents by using the following command:

# unzip -d / dbhome_1.zip
Repeat steps 4 and 5 for each computer where you want to clone the Oracle home, unless the Oracle home is on a shared storage device.

On the target computer, change the directory to the unzipped Oracle home directory, and remove all the .ora (*.ora) files present in the unzipped $ORACLE_HOME/network/admin directory.

From the $ORACLE_HOME/clone/bin directory, run the clone.pl file for the unzipped Oracle home. Use the following syntax:

$ORACLE_HOME/perl/bin/perl clone.pl ORACLE_BASE="target_oracle_base" ORACLE_HOME="target_oracle_home"
OSDBA_GROUP=OSDBA_privileged_group OSOPER_GROUP=OSOPER_privileged_group -defaultHomeName
For example:

$ORACLE_HOME/perl/bin/perl clone.pl ORACLE_BASE="/u01/app/oracle/" ORACLE_HOME="/u01/app/oracle/product/11.2.0/dbhome_1"
OSDBA_GROUP=dba OSOPER_GROUP=oper -defaultHomeName
Oracle Universal Installer starts, and then records the cloning actions in the cloneActions<timestamp>.log file. This log file is typically located in the /orainventory_location/logs directory.

To configure the connection information for the new database, run Net Configuration Assistant:

$ cd $ORACLE_HOME/bin
$ ./netca
To create a new database for the newly cloned Oracle home, run Database Configuration Assistant:

$ cd $ORACLE_HOME/bin
$ ./dbca
See Also:
Oracle Universal Installer and OPatch User’s Guide for Windows and UNIX for detailed information about using Oracle Universal Installer to clone an Oracle Database home

Oracle Database Administrator’s Guide for information about cloning Oracle databases and cloning an Oracle Database home

B.2 Configuring Oracle Configuration Manager in a Cloned Oracle Home

Perform the following to configure Oracle Configuration Manager for a cloned Oracle home:

Run the following command from $ORACLE_HOME/ccr/state:
$ rm -rf *.ll*
Run the following command from $ORACLE_HOME/ccr/inventory:
$ cp core.jar pending
Run the following commands from $ORACLE_HOME/ccr/bin:
$ ./emSnapshotEnv
$ ./deployPackages
Run the following command from $ORACLE_HOME/ccr/bin and provide the proper credentials:
$ ./configCCR
If Oracle Configuration Manager was manually configured using setupCCR, then perform the following in the cloned Oracle home:

Delete all the subdirectories of the $ORACLE_HOME/ccr/hosts directory to remove previously configured hosts.

Run the following command from $ORACLE_HOME/ccr/bin:

$ ./configCCR -a
If you installed Oracle Configuration Manager in the original Oracle home but have not configured it, then run the following command in the cloned Oracle home:

$ setupCCR

Here is an example:

[oracle@rac03 bin]$ /data/oracle/product/11203/db1/perl/bin/perl clone.pl ORACLE_BASE="/data/oracle" ORACLE_HOME="/data/oracle/product/11203/db1" OSDBA_GROUP=dba -defaultHomeName
./runInstaller -clone -waitForCompletion "ORACLE_BASE=/data/oracle" "ORACLE_HOME=/data/oracle/product/11203/db1" "oracle_install_OSDBA=dba" -defaultHomeName -defaultHomeName -silent -noConfig -nowait
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 129576 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-02-14_02-44-44PM. Please wait …Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.

You can find the log of this install session at:
/home/oracle/oraInventory/logs/cloneActions2012-02-14_02-44-44PM.log
………………………………………………………………………………………. 100% Done.

Installation in progress (Tuesday, February 14, 2012 2:44:54 PM CST)
……………………………………………………………………. 79% Done.
Install successful

Linking in progress (Tuesday, February 14, 2012 2:45:00 PM CST)
Link successful

Setup in progress (Tuesday, February 14, 2012 2:45:51 PM CST)
Setup successful

End of install phases.(Tuesday, February 14, 2012 2:46:14 PM CST)
WARNING:A new inventory has been created in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script ‘/home/oracle/oraInventory/orainstRoot.sh’ with root privileges.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the “root” user.
/home/oracle/oraInventory/orainstRoot.sh
/data/oracle/product/11203/db1/root.sh
To execute the configuration scripts:
1. Open a terminal window
2. Log in as “root”
3. Run the scripts

The cloning of OraHome1 was successful.
Please check ‘/home/oracle/oraInventory/logs/cloneActions2012-02-14_02-44-44PM.log’ for more details.

As you can see, the Oracle clone automatically performed the relink and created the oraInventory; it can fully replace a manual relink when migrating the database software.

Notes on a database upgrade that broke the physical standby

February 9, 2012 migration, oracle, replication No comments

Notes on a database upgrade that broke the physical standby

Upgrade from 10.2.0.5 to 11.2.0.3

Upgrading a single-instance database is easy: follow the manual strictly and set compatible=10.2.0.

Reference: Complete Checklist for Manual Upgrades to 11gR2 [ID 837570.1]

When the system being upgraded has a standby, pay attention to the following points (a short sketch follows the list):

1. Stop the 10g listener first and use the 11g listener.

2. Open the standby database with the 11g software.

3. Start redo apply.
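On the standby side this amounts to roughly the following, assuming its ORACLE_HOME has already been switched to the 11g binaries (a sketch, not the exact commands from this upgrade):

$ lsnrctl stop        # from the 10g home
$ lsnrctl start       # from the 11g home, with the 11g listener.ora

SQL> startup mount
SQL> alter database recover managed standby database using current logfile disconnect from session;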

Because I had not read the document carefully, the standby was left shut down while the primary was being upgraded. When I tried to start managed recovery after the primary upgrade finished, the following errors appeared:

Errors in file /data/oracle/diag/rdbms/paystd/paystd/trace/paystd_pr00_17447.trc (incident=40247):
ORA-00353: log corruption near block 6144 change 2561976 time 02/08/2012 00:00:15
ORA-00334: archived log: ‘/data/oracle/oradata/paystd/arch/1_78_770564180.dbf’
Incident details in: /data/oracle/diag/rdbms/paystd/paystd/incident/incdir_40247/paystd_pr00_17447_i40247.trc
Completed: alter database recover managed standby database disconnect from session using current logfile
MRP0: Background Media Recovery terminated with error 354
Errors in file /data/oracle/diag/rdbms/paystd/paystd/trace/paystd_pr00_17447.trc:
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 6144 change 2561976 time 02/08/2012 00:00:15
ORA-00334: archived log: ‘/data/oracle/oradata/paystd/arch/1_78_770564180.dbf’
Managed Standby Recovery not using Real Time Apply
Wed Feb 08 09:53:02 2012
Sweep [inc][40247]: completed
Wed Feb 08 09:53:02 2012
Sweep [inc2][40247]: completed

[oracle@db57 ~]$ vi /data/oracle/diag/rdbms/paystd/paystd/incident/incdir_40247/paystd_pr00_17447_i40247.trc

*** 2012-02-08 09:53:01.725
*** SESSION ID:(765.15) 2012-02-08 09:53:01.725
*** CLIENT ID:() 2012-02-08 09:53:01.725
*** SERVICE NAME:() 2012-02-08 09:53:01.725
*** MODULE NAME:() 2012-02-08 09:53:01.725
*** ACTION NAME:() 2012-02-08 09:53:01.725

Dump continued from file: /data/oracle/diag/rdbms/paystd/paystd/trace/paystd_pr00_17447.trc
ORA-00353: log corruption near block 6144 change 2561976 time 02/08/2012 00:00:15
ORA-00334: archived log: ‘/data/oracle/oradata/paystd/arch/1_78_770564180.dbf’

========= Dump for incident 40247 (ORA 353 [6144] [2561976]) ========

*** 2012-02-08 09:53:01.725
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
—– SQL Statement (None) —–
Current SQL information unavailable – no cursor.

—– Call Stack Trace —–

You can see that the standby could not read the contents of block 6144, and at that point starting real-time apply directly simply raised the same error.

Temporary workaround:

Rebuild the redo logs on the primary.

After looping through the groups and dropping each one once it became inactive (then recreating it), the standby recovered:
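The per-group cycle on the primary looks roughly like this sketch; the group number, file path and size are illustrative:

SQL> select group#, status from v$log;
SQL> alter system switch logfile;      -- repeat until the target group shows INACTIVE
SQL> alter system checkpoint;
SQL> alter database drop logfile group 1;
SQL> alter database add logfile group 1 ('/data/oracle/oradata/pay/redo01.log') size 512m;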


alter database recover managed standby database disconnect using current logfile
Attempt to start background Managed Standby Recovery process (paystd)
Wed Feb 08 10:25:46 2012
MRP0 started with pid=26, OS id=18107
MRP0: Background Managed Standby Recovery process started (paystd)
started logmerger process
Wed Feb 08 10:25:51 2012
Managed Standby Recovery starting Real Time Apply
Parallel Media Recovery started with 12 slaves
Waiting for all non-current ORLs to be archived…
All non-current ORLs have been archived.
Media Recovery Log /data/oracle/oradata/paystd/arch/1_95_770564180.dbf
Media Recovery Waiting for thread 1 sequence 96 (in transit)
Recovery of Online Redo Log: Thread 1 Group 6 Seq 96 Reading mem 0
Mem# 0: /data/oracle/oradata/paystd/std2.log
Completed: alter database recover managed standby database disconnect using current logfile
Wed Feb 08 19:45:46 2012
RFS[1]: Selected log 5 for thread 1 sequence 97 dbid 1905896596 branch 770564180
Wed Feb 08 19:45:46 2012
Media Recovery Waiting for thread 1 sequence 97 (in transit)
Recovery of Online Redo Log: Thread 1 Group 5 Seq 97 Reading mem 0
Mem# 0: /data/oracle/oradata/paystd/std1.log
Wed Feb 08 19:45:46 2012
Archived Log entry 24 added for thread 1 sequence 96 ID 0x7198f794 dest 1:

A baffling failure caused by not following the document. Exactly why the standby could not read the primary's redo remains to be investigated.

Upgrading a 9i single instance on a file system to 10g RAC

November 10, 2011 migration, oracle, RAC No comments

I recently tested an upgrade from Oracle 9i to 10g RAC; here is a brief record.
1. Check the current database version


SQL> select * from v$version;

BANNER
—————————————————————-
Oracle9i Enterprise Edition Release 9.2.0.4.0 – Production
PL/SQL Release 9.2.0.4.0 – Production
CORE 9.2.0.3.0 Production
TNS for Linux: Version 9.2.0.4.0 – Production
NLSRTL Version 9.2.0.4.0 – Production

FILE_ID FILE_NAME
———- ————————————————————
1 /u01/app/oracle/oradata/honcho/system01.dbf
2 /u01/app/oracle/oradata/honcho/undotbs01.dbf
3 /u01/app/oracle/oradata/honcho/hongzx01.dbf
4 /u01/app/oracle/oradata/honcho/drsys01.dbf
5 /u01/app/oracle/oradata/honcho/example01.dbf
9 /u01/app/oracle/oradata/honcho/users01.dbf
10 /u01/app/oracle/oradata/honcho/xdb01.dbf

FILE# NAME
———- ————————————————————
1 /u01/app/oracle/oradata/honcho/temp01.dbf

2. Take a full backup of the current database

RMAN> run
2> {
3> allocate channel c1 type disk;
4> backup full database format='/u01/backup/honcho_full_%U_%T' tag='honcho_full';
5> sql 'alter system archive log current';
6> backup archivelog all tag='arc_bak' format='/u01/backup/arch_%U_%T' delete input;
7> backup current controlfile tag='bak_ctlfile' format='/u01/backup/ctl_file_%U_%T';
8> backup spfile tag='spfile' format='/u01/backup/honcho_spfile_%U_%T';
9> release channel c1;
10> }

RMAN> list backup summary;

List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Tag
——- — — – ———– ————— ——- ——- —
1 B F A DISK 08-NOV-11 1 1 honcho_FULL
2 B F A DISK 08-NOV-11 1 1
3 B A A DISK 08-NOV-11 1 1 ARC_BAK
4 B F A DISK 08-NOV-11 1 1 BAK_CTLFILE
5 B F A DISK 08-NOV-11 1 1 SPFILE
6 B F A DISK 08-NOV-11 1 1

3. Set up 10g Clusterware and the ASM instances
[oracle@rac1 ~]$ crs_stat -t
Name Type Target State Host
————————————————————
ora….SM1.asm application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora….SM2.asm application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2

3. Copy the original 9.2.0.4 pfile to the corresponding directory on 192.168.1.131, then create the required directories on both nodes:
[oracle@rac1 dbs]$ mkdir -p /u01/app/oracle/admin/honcho/bdump
[oracle@rac1 dbs]$ mkdir -p /u01/app/oracle/admin/honcho/cdump
[oracle@rac1 dbs]$ mkdir -p /u01/app/oracle/admin/honcho/udump

4. The parameter file after modification:
[oracle@rac1 dbs]$ cat inithoncho1.ora
*.aq_tm_processes=0
*.background_dump_dest='/u01/app/oracle/admin/honcho/bdump'
*.compatible='10.2.0.1.0'
*.core_dump_dest='/u01/app/oracle/admin/honcho/cdump'
*.db_block_size=8192
*.db_cache_size=33554432
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='honcho'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=honchoXDB)'
*.fast_start_mttr_target=300
*.instance_name='honcho'
*.java_pool_size=83886080
*.job_queue_processes=0
*.large_pool_size=16777216
*.open_cursors=300
*.pga_aggregate_target=25165824
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=83886080
*.sort_area_size=524288
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.user_dump_dest='/u01/app/oracle/admin/honcho/udump'
*.cluster_database_instances=2
*.cluster_database=true
*.control_files='+DATA/honcho/controlfile/control01.ctl','+RECOVERY/honcho/controlfile/control02.ctl'
*.db_create_file_dest='+DATA'
*.db_recovery_file_dest='+RECOVERY'
*.db_recovery_file_dest_size=2147483648
honcho1.instance_name='honcho1'
honcho2.instance_name='honcho2'
honcho1.instance_number=1
honcho2.instance_number=2
*.log_archive_dest_1='LOCATION=+RECOVERY/honcho/archivelog'
honcho1.thread=1
honcho2.thread=2
honcho1.undo_tablespace='UNDOTBS1'
honcho2.undo_tablespace='UNDOTBS2'

5. Create the spfile from the pfile

[oracle@rac1 dbs]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.1.0 – Production on Wed Nov 9 10:52:17 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL> conn / as sysdba
Connected.
SQL> create spfile='+DATA/honcho/PARAMETERFILE/spfilehoncho.ora' from pfile='/u01/app/oracle/product/10.0.2.1/db_1/dbs/inithoncho1.ora';

File created.

6. Rewrite the pfile so that it points to the spfile; this must be done on both nodes:
[oracle@rac1 dbs]$ echo "SPFILE='+DATA/honcho/PARAMETERFILE/spfilehoncho.ora'" > /u01/app/oracle/product/10.0.2.1/db_1/dbs/inithoncho1.ora
[oracle@rac2 ~]$ cd $ORACLE_HOME/dbs
[oracle@rac2 dbs]$ touch inithoncho2.ora
[oracle@rac2 dbs]$ echo "SPFILE='+DATA/honcho/PARAMETERFILE/spfilehoncho.ora'" > /u01/app/oracle/product/10.0.2.1/db_1/dbs/inithoncho2.ora

7. Create the password files:
[oracle@rac1 dbs]$ orapwd file=orapwhoncho1 password=oracle entries=20
[oracle@rac2 dbs]$ orapwd file=orapwhoncho2 password=oracle entries=20

8. Start the database in NOMOUNT state:
SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup nomount
ORACLE instance started.

Total System Global Area 222298112 bytes
Fixed Size 1218628 bytes
Variable Size 184551356 bytes
Database Buffers 33554432 bytes
Redo Buffers 2973696 bytes

9. Restore the controlfile

RMAN> restore controlfile from '/u01/backup/ctl_file_04mr66b3_1_1_20111108';

Starting restore at 09-NOV-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=150 instance=honcho1 devtype=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
output filename=+DATA/honcho/controlfile/control01.ctl
output filename=+RECOVERY/honcho/controlfile/control02.ctl
Finished restore at 09-NOV-11

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

10.restore database

RMAN> run {
2> set newname for datafile 1 to '+DATA/honcho/DATAFILE/system01.dbf';
3> set newname for datafile 2 to '+DATA/honcho/DATAFILE/undotbs01.dbf';
4> set newname for datafile 3 to '+DATA/honcho/DATAFILE/hongzx01.dbf';
5> set newname for datafile 4 to '+DATA/honcho/DATAFILE/drsys01.dbf';
6> set newname for datafile 5 to '+DATA/honcho/DATAFILE/example01.dbf';
7> set newname for datafile 9 to '+DATA/honcho/DATAFILE/users01.dbf';
8> set newname for datafile 10 to '+DATA/honcho/DATAFILE/xdb01.dbf';
9> restore database;
10> switch datafile all;
11> switch tempfile all;
12> }

Note that the tempfile was not restored here.

11.recover database

RMAN> recover database;

Starting recover at 09-NOV-11
using channel ORA_DISK_1

starting media recovery

channel ORA_DISK_1: starting archive log restore to default destination
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=5
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=6
channel ORA_DISK_1: reading from backup piece /u01/backup/arch_03mr66ar_1_1_20111108
channel ORA_DISK_1: restored backup piece 1
piece handle=/u01/backup/arch_03mr66ar_1_1_20111108 tag=ARC_BAK
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
archive log filename=+RECOVERY/honcho/archivelog/1_5_766710700.dbf thread=1 sequence=5
archive log filename=+RECOVERY/honcho/archivelog/1_6_766710700.dbf thread=1 sequence=6
unable to find archive log
archive log thread=1 sequence=7
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 11/09/2011 13:39:31
RMAN-06054: media recovery requesting unknown log: thread 1 seq 7 lowscn 196183

12. Rename the online logfiles into ASM

SQL> alter database rename file '/u01/app/oracle/oradata/honcho/redo03.log' to '+DATA/honcho/ONLINELOG/redo03.log';

Database altered.

SQL> alter database rename file '/u01/app/oracle/oradata/honcho/redo02.log' to '+DATA/honcho/ONLINELOG/redo02.log';

Database altered.

SQL> alter database rename file '/u01/app/oracle/oradata/honcho/redo01.log' to '+DATA/honcho/ONLINELOG/redo01.log';

Database altered.

13. Open the database with RESETLOGS
alter database open resetlogs;

14. Upgrade the database to 10g
SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 222298112 bytes
Fixed Size 1218628 bytes
Variable Size 184551356 bytes
Database Buffers 33554432 bytes
Redo Buffers 2973696 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced

Checking the alert log:
Errors in file /u01/app/oracle/admin/honcho/udump/honcho1_ora_16126.trc:
ORA-39701: database must be mounted EXCLUSIVE for UPGRADE or DOWNGRADE
Error 39701 happened during db open, shutting down database
USER: terminating instance due to error 39701

Solution:
[oracle@rac1]$ cd $ORACLE_HOME/nls/data/old
[oracle@rac1 old]$ perl cr9idata.pl
Creating directory /u01/app/oracle/product/10.0.2.1/db_1/nls/data/9idata …
Copying files to /u01/app/oracle/product/10.0.2.1/db_1/nls/data/9idata…
Copy finished.
Please reset environment variable ORA_NLS10 to /u01/app/oracle/product/10.0.2.1/db_1/nls/data/9idata!

Edit the environment variables and add:
export ORA_NLS10=/u01/app/oracle/product/10.0.2.1/db_1/nls/data/9idata

The cluster-related parameters also have to be removed from the parameter file for the upgrade.

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 222298112 bytes
Fixed Size 1218628 bytes
Variable Size 184551356 bytes
Database Buffers 33554432 bytes
Redo Buffers 2973696 bytes
Database mounted.
Database opened.
SQL> select * from v$version;

BANNER
—————————————————————-
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Prod
PL/SQL Release 10.2.0.1.0 – Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 – Production
NLSRTL Version 10.2.0.1.0 – Production

15. Add a tempfile to the temporary tablespace
SQL> alter tablespace temp add tempfile '+DATA' size 100m;

16. 9i has no SYSAUX tablespace, so it must be added here as well:
SQL> create tablespace sysaux datafile '+DATA' size 500m reuse
2 extent management local
3 segment space management auto;

Tablespace created.

17. Run the upgrade scripts

@?/rdbms/admin/catupgrd.sql
@?/rdbms/admin/catclust.sql
@?/rdbms/admin/utlrp.sql

18. Add the RAC-related parameters:


SQL> alter system set cluster_database=true scope=spfile;

System altered.

SQL> alter system set cluster_database_instances=2 scope=spfile;

System altered.

SQL> alter system set instance_number=1 scope=spfile sid=’honcho1′;

System altered.

SQL> alter system set instance_number=2 scope=spfile sid=’honcho2′;

System altered.

SQL> alter system set thread=1 scope=spfile sid=’honcho1′;

System altered.

SQL> alter system set thread=2 scope=spfile sid=’honcho2′;

System altered.

19. Add the undo tablespace and redo logfiles for node 2

SQL> create undo tablespace UNDOTBS2 datafile ‘+DATA/honcho/datafile/undotbs02.dbf’ size 50m;

Tablespace created.

SQL> alter system set undo_tablespace=’UNDOTBS2′ scope=spfile sid=’honcho2′;

System altered.

SQL> alter database add logfile thread 2 group 4 ‘+DATA’ size 100m;

Database altered.

SQL> alter database add logfile thread 2 group 5 ‘+DATA’ size 100m;

Database altered.

SQL> alter database add logfile thread 2 group 6 ‘+DATA’ size 100m;

Database altered.
SQL> alter database enable public thread 2;

Database altered.

20. Restart node 1:
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 222298112 bytes
Fixed Size 1218628 bytes
Variable Size 184551356 bytes
Database Buffers 33554432 bytes
Redo Buffers 2973696 bytes
Database mounted.
Database opened.
SQL> show parameter cluster

NAME TYPE VALUE
———————————— ———– ——————————
cluster_database boolean TRUE
cluster_database_instances integer 2
cluster_interconnects string

Open the instance on node 2 as well:
[oracle@rac2 ~]$ sqlplus /nolog

SQL*Plus: Release 10.2.0.1.0 – Production on Wed Nov 9 18:24:02 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area 222298112 bytes
Fixed Size 1218628 bytes
Variable Size 184551356 bytes
Database Buffers 33554432 bytes
Redo Buffers 2973696 bytes
Database mounted.
Database opened.

SQL> select instance_number,instance_name,host_name from gv$instance;

INSTANCE_NUMBER INSTANCE_NAME HOST_NAME
————— —————- ——————————
1 honcho1 rac1
2 honcho2 rac2

2 rows selected.

21. Add the listeners
[oracle@rac1 ~]$ crs_stat -t
Name Type Target State Host
————————————————————
ora….SM1.asm application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora….SM2.asm application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2
Configure them with netca:
[oracle@rac1 ~]$ crs_stat -t
Name Type Target State Host
————————————————————
ora….SM1.asm application ONLINE ONLINE rac1
ora….C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora….SM2.asm application ONLINE ONLINE rac2
ora….C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2

22. Register the database and instances as CRS resources

[oracle@rac1 ~]$ srvctl add database -d honcho -o $ORACLE_HOME -p +DATA/honcho/PARAMETERFILE/spfilehoncho.ora
[oracle@rac1 ~]$ srvctl add instance -d honcho -i honcho1 -n rac1
[oracle@rac1 ~]$ srvctl add instance -d honcho -i honcho2 -n rac2
[oracle@rac1 ~]$ srvctl modify instance -d honcho -i honcho1 -s +ASM1
[oracle@rac1 ~]$ srvctl modify instance -d honcho -i honcho2 -s +ASM2
[oracle@rac1 ~]$ srvctl start database -d honcho

[oracle@rac1 ~]$ crs_stat -t
Name Type Target State Host
————————————————————
ora.honcho.db application ONLINE ONLINE rac1
ora….b1.inst application ONLINE ONLINE rac1
ora….b2.inst application ONLINE ONLINE rac2
ora….SM1.asm application ONLINE ONLINE rac1
ora….C1.lsnr application ONLINE ONLINE rac1
ora.rac1.gsd application ONLINE ONLINE rac1
ora.rac1.ons application ONLINE ONLINE rac1
ora.rac1.vip application ONLINE ONLINE rac1
ora….SM2.asm application ONLINE ONLINE rac2
ora….C2.lsnr application ONLINE ONLINE rac2
ora.rac2.gsd application ONLINE ONLINE rac2
ora.rac2.ons application ONLINE ONLINE rac2
ora.rac2.vip application ONLINE ONLINE rac2

SQL> select * from v$version;

BANNER
—————————————————————-
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 – Prod
PL/SQL Release 10.2.0.1.0 – Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 – Production
NLSRTL Version 10.2.0.1.0 – Production

A few related parameters still need to be adjusted.

OK. Game over!

Notes on converting a 9i single instance to RAC, and the overall DR design after migrating another RAC's storage

November 7, 2011 migration, oracle 1 comment

Database version 9.2.0.1, OS AIX 5.

Because 9.2.0.1 defaults to MAXLOGFILES 5 and MAXINSTANCES 1, the controlfile has to be recreated with MAXINSTANCES 4 and MAXLOGFILES 16.

SQL> alter database backup controlfile to trace;

cd $ORACLE_BASE/admin/priap/udump/

Check the latest trace file: priap_ora_143648.trc

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE “PRIAP” NORESETLOGS NOARCHIVELOG
— SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 5 —–>(16)
MAXLOGMEMBERS 3
MAXDATAFILES 309
MAXINSTANCES 1 ——>(4)
MAXLOGHISTORY 5445
LOGFILE
GROUP 1 ‘/dev/rredo01’ SIZE 100M,
GROUP 2 ‘/dev/rredo02’ SIZE 100M,
GROUP 3 ‘/dev/rredo03’ SIZE 100M
— STANDBY LOGFILE
DATAFILE
‘/dev/rsystem01’,
‘/dev/rundotbs01’,
‘/dev/rcwmlite01’,
‘/dev/rPRIAP_INDEX16’
‘/dev/rdksh01’,
‘/dev/rdksh02’,
‘/dev/rDOGPHOTO’,
‘/dev/rdrsys01’,
‘/dev/rindx01’,
‘/dev/rlogmnr01’,
‘/dev/rodm01’,
‘/dev/rPRIAP01’,
‘/dev/rPRIAP02’,
‘/dev/rPRIAP04’,
‘/dev/rPRIAP06’,
‘/dev/rPRIAP19’,
‘/dev/rPRIAP18’,
‘/dev/rPRIAP17’,
‘/dev/rPRIAP16’,
‘/dev/rPRIAP15’,
‘/dev/rPRIAP14’,
‘/dev/rPRIAP13’,
‘/dev/rPRIAP12’,
‘/dev/rPRIAP11’,
‘/dev/rPRIAP43’,
‘/dev/rPRIAP42’,
‘/dev/rPRIAP41’,
‘/dev/rPRIAP40’,
‘/dev/rPRIAP29’,
‘/dev/rPRIAP28’,
‘/dev/rPRIAP27’,
‘/dev/rPRIAP26’,
‘/dev/rPRIAP25’,
‘/dev/rPRIAP24’,
‘/dev/rPRIAP23’,
‘/dev/rPRIAP22’,
‘/dev/rPRIAP21’,
‘/dev/rPRIAP20’,
‘/dev/rPRIAP10’,
‘/dev/rPRIAP09’,
‘/dev/rPRIAP08’,
‘/dev/rPRIAP53’,
‘/dev/rPRIAP52’,
‘/dev/rPRIAP51’,
‘/dev/rPRIAP50’,
‘/dev/rPRIAP49’,
‘/dev/rPRIAP48’,
‘/dev/rPRIAP47’,
‘/dev/rPRIAP46’,
‘/dev/rPRIAP45’,
‘/dev/rPRIAP70’,
‘/dev/rPRIAP69’,
‘/dev/rPRIAP68’,
‘/dev/rPRIAP67’,
‘/dev/rPRIAP66’,
‘/dev/rPRIAP65’,
‘/dev/rPRIAP64’,
‘/dev/rPRIAP63’,
‘/dev/rPRIAP62’,
‘/dev/rPRIAP87’,
‘/dev/rPRIAP86’,
‘/dev/rPRIAP85’,
‘/dev/rPRIAP84’,
‘/dev/rPRIAP83’,
‘/dev/rPRIAP82’,
‘/dev/rPRIAP81’,
‘/dev/rPRIAP80’,
‘/dev/rPRIAP79’,
‘/dev/rPRIAP95’,
‘/dev/rPRIAP94’,
‘/dev/rPRIAP93’,
‘/dev/rPRIAP92’,
‘/dev/rPRIAP91’,
‘/dev/rPRIAP90’,
‘/dev/rPRIAP89’,
‘/dev/rPRIAP88’,
‘/dev/rPRIAP78’,
‘/dev/rPRIAP77’,
‘/dev/rPRIAP76’,
‘/dev/rPRIAP75’,
‘/dev/rPRIAP74’,
‘/dev/rPRIAP73’,
‘/dev/rPRIAP72’,
‘/dev/rPRIAP71’,
‘/dev/rPRIAP61’,
‘/dev/rPRIAP60’,
‘/dev/rPRIAP59’,
‘/dev/rPRIAP58’,
‘/dev/rPRIAP57’,
‘/dev/rPRIAP56’,
‘/dev/rPRIAP55’,
‘/dev/rPRIAP54’,
‘/dev/rPRIAP44’,
‘/dev/rPRIAP39’,
‘/dev/rPRIAP38’,
‘/dev/rPRIAP37’,
‘/dev/rPRIAP36’,
‘/dev/rPRIAP35’,
‘/dev/rPRIAP34’,
‘/dev/rPRIAP33’,
‘/dev/rPRIAP32’,
‘/dev/rPRIAP31’,
‘/dev/rPRIAP30’,
‘/dev/rPRIAP07’,
‘/dev/rPRIAP05’,
‘/dev/rPRIAP03’,
‘/dev/rPRIAP21-01’,
‘/dev/rpriap21-02’,
‘/dev/rPRIAP22-01’,
‘/dev/rPRIAP22-02’,
‘/dev/rPRIAP23-01’,
‘/dev/rPRIAP23-02’,
‘/dev/rPRIAP24-01’,
‘/dev/rPRIAP24-02’,
‘/dev/rPRIAP_INDEX01’,
‘/dev/rPRIAP_INDEX04’,
‘/dev/rPRIAP_INDEX31’,
‘/dev/rPRIAP_INDEX30’,
‘/dev/rPRIAP_INDEX29’,
‘/dev/rPRIAP_INDEX24’,
‘/dev/rPRIAP_INDEX23’,
‘/dev/rPRIAP_INDEX22’,
‘/dev/rPRIAP_INDEX21’,
‘/dev/rPRIAP_INDEX20’,
‘/dev/rPRIAP_INDEX18’,
‘/dev/rPRIAP_INDEX79’,
‘/dev/rPRIAP_INDEX78’,
‘/dev/rPRIAP_INDEX77’,
‘/dev/rPRIAP_INDEX76’,
‘/dev/rPRIAP_INDEX75’,
‘/dev/rPRIAP_INDEX74’,
‘/dev/rPRIAP_INDEX73’,
‘/dev/rPRIAP_INDEX72’,
‘/dev/rPRIAP_INDEX71’,
‘/dev/rPRIAP_INDEX70’,
‘/dev/rPRIAP_INDEX69’,
‘/dev/rPRIAP_INDEX68’,
‘/dev/rPRIAP_INDEX67’,
‘/dev/rPRIAP_INDEX66’,
‘/dev/rPRIAP_INDEX65’,
‘/dev/rPRIAP_INDEX64’,
‘/dev/rPRIAP_INDEX63’,
‘/dev/rPRIAP_INDEX62’,
‘/dev/rPRIAP_INDEX61’,
‘/dev/rPRIAP_INDEX60’,
‘/dev/rPRIAP_INDEX59’,
‘/dev/rPRIAP_INDEX58’,
‘/dev/rPRIAP_INDEX57’,
‘/dev/rPRIAP_INDEX56’,
‘/dev/rPRIAP_INDEX55’,
‘/dev/rPRIAP_INDEX54’,
‘/dev/rPRIAP_INDEX53’,
‘/dev/rPRIAP_INDEX52’,
‘/dev/rPRIAP_INDEX51’,
‘/dev/rPRIAP_INDEX50’,
‘/dev/rPRIAP_INDEX49’,
‘/dev/rPRIAP_INDEX48’,
‘/dev/rPRIAP_INDEX47’,
‘/dev/rPRIAP_INDEX46’,
‘/dev/rPRIAP_INDEX45’,
‘/dev/rPRIAP_INDEX44’,
‘/dev/rPRIAP_INDEX43’,
‘/dev/rPRIAP_INDEX42’,
‘/dev/rPRIAP_INDEX41’,
‘/dev/rPRIAP_INDEX40’,
‘/dev/rPRIAP_INDEX36’,
‘/dev/rPRIAP_INDEX35’,
‘/dev/rPRIAP_INDEX34’,
‘/dev/rPRIAP_INDEX33’,
‘/dev/rPRIAP_INDEX32’,
‘/dev/rPRIAP_INDEX17’,
‘/dev/rPRIAP_INDEX15’,
‘/dev/rPRIAP_INDEX14’,
‘/dev/rPRIAP_INDEX13’,
‘/dev/rPRIAP_INDEX12’,
‘/dev/rPRIAP_INDEX11’,
‘/dev/rPRIAP_INDEX10’,
‘/dev/rPRIAP_INDEX09’,
‘/dev/rPRIAP_INDEX39’,
‘/dev/rPRIAP_INDEX38’,
‘/dev/rPRIAP_INDEX37’,
‘/dev/rPRIAP_INDEX28’,
‘/dev/rPRIAP_INDEX27’,
‘/dev/rPRIAP_INDEX26’,
‘/dev/rPRIAP_INDEX25’,
‘/dev/rPRIAP_INDEX19’,
‘/dev/rPRIAP_INDEX08’,
‘/dev/rPRIAP_INDEX07’,
‘/dev/rPRIAP_INDEX06’,
‘/dev/rPRIAP_INDEX05’,
‘/dev/rPRIAP_INDEX03’,
‘/dev/rPRIAP_INDEX02’,
‘/dev/rPRIAP_LOG01’,
‘/dev/rPRIAP_LOG02’,
‘/dev/rPRIAP_LOG69’,
‘/dev/rPRIAP_LOG68’,
‘/dev/rPRIAP_LOG67’,
‘/dev/rPRIAP_LOG66’,
‘/dev/rPRIAP_LOG65’,
‘/dev/rPRIAP_LOG64’,
‘/dev/rPRIAP_LOG63’,
‘/dev/rPRIAP_LOG62’,
‘/dev/rPRIAP_LOG16’,
‘/dev/rexample01’,
‘/dev/rPRIAP_LOG72’,
‘/dev/rPRIAP_LOG71’,
‘/dev/rPRIAP_LOG70’,
‘/dev/rPRIAP_LOG15’,
‘/dev/rPRIAP_LOG14’,
‘/dev/rPRIAP_LOG13’,
‘/dev/rPRIAP_LOG12’,
‘/dev/rPRIAP_LOG11’,
‘/dev/rPRIAP_LOG10’,
‘/dev/rPRIAP_LOG09’,
‘/dev/rPRIAP_LOG08’,
‘/dev/rPRIAP_LOG61’,
‘/dev/rPRIAP_LOG60’,
‘/dev/rPRIAP_LOG59’,
‘/dev/rPRIAP_LOG58’,
‘/dev/rPRIAP_LOG57’,
‘/dev/rPRIAP_LOG56’,
‘/dev/rPRIAP_LOG55’,
‘/dev/rPRIAP_LOG54’,
‘/dev/rPRIAP_LOG53’,
‘/dev/rPRIAP_LOG52’,
‘/dev/rPRIAP_LOG51’,
‘/dev/rPRIAP_LOG50’,
‘/dev/rPRIAP_LOG49’,
‘/dev/rPRIAP_LOG47’,
‘/dev/rPRIAP_LOG46’,
‘/dev/rPRIAP_LOG45’,
‘/dev/rPRIAP_LOG44’,
‘/dev/rPRIAP_LOG43’,
‘/dev/rPRIAP_LOG42’,
‘/dev/rPRIAP_LOG41’,
‘/dev/rPRIAP_LOG40’,
‘/dev/rPRIAP_LOG33’,
‘/dev/rPRIAP_LOG48’,
‘/dev/rPRIAP_LOG32’,
‘/dev/rPRIAP_LOG31’,
‘/dev/rPRIAP_LOG30’,
‘/dev/rPRIAP_LOG29’,
‘/dev/rPRIAP_LOG28’,
‘/dev/rPRIAP_LOG27’,
‘/dev/rPRIAP_LOG26’,
‘/dev/rPRIAP_LOG25’,
‘/dev/rPRIAP_LOG39’,
‘/dev/rPRIAP_LOG38’,
‘/dev/rPRIAP_LOG37’,
‘/dev/rPRIAP_LOG36’,
‘/dev/rPRIAP_LOG35’,
‘/dev/rPRIAP_LOG34’,
‘/dev/rPRIAP_LOG24’,
‘/dev/rPRIAP_LOG23’,
‘/dev/rPRIAP_LOG22’,
‘/dev/rPRIAP_LOG21’,
‘/dev/rPRIAP_LOG20’,
‘/dev/rPRIAP_LOG19’,
‘/dev/rPRIAP_LOG18’,
‘/dev/rPRIAP_LOG17’,
‘/dev/rPRIAP_LOG07’,
‘/dev/rPRIAP_LOG06’,
‘/dev/rPRIAP_LOG05’,
‘/dev/rPRIAP_LOG04’,
‘/dev/rPRIAP_LOG03’,
‘/dev/rtools01’,
‘/dev/rtools04’,
‘/dev/rtools05’,
‘/dev/rtools03’,
‘/dev/rtools02’,
‘/dev/rTS_FZ4_DATA’,
‘/dev/rTS_DATA_APPSYS’,
‘/dev/rTS_FZ4_DEFAULT0’,
‘/dev/rTS_FZ4_DEFAULT1’,
‘/dev/rTS_FZ4_DIC’,
‘/dev/rTS_FZ4_INDEX’,
‘/dev/rTS_IND_APPSYS’,
‘/dev/rusers01’,
‘/dev/rxdb01’,
‘/dev/rxdb02’,
‘/dev/rundotbs03’,
‘/dev/rsystem02’
CHARACTER SET ZHS16GBK
;

Create three new LVs for the controlfiles (scontrol01, scontrol02, scontrol03).

Edit initpriap.ora, changing the controlfile paths to '/dev/rscontrol01','/dev/rscontrol02','/dev/rscontrol03'.

Shut down the instance and start it to NOMOUNT:

SQL>shutdown immediate;
SQL>startup nomount;

Create the controlfile with the statement above:

Control file created.

SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required

SQL> alter database open;

Database altered.

SQL>

The controlfile rebuild is complete.

Converting the single instance to RAC

SQL>shutdown immediate;

Shut down the database.

Varyoff the original oravg on the rac1 and rac2 nodes, and import the datavg from the single_inst node into rac1 and rac2.

Add datavg to the HACMP resource group (put datavg into concurrent mode).

Start the database from the rac1 node (the controlfile references change from control01/control02/control03 to scontrol01/scontrol02/scontrol03).

Change db_files to 500:

SQL>alter system set db_files=500 scope=spfile;
SQL>shutdown immediate;
SQL>startup mount;

Put the database in MOUNT state.

Add the thread 2 redo log groups:

SQL>alter database add logfile thread 2 group 4 ('/dev/rredo11') size 100M;

SQL>alter database add logfile thread 2 group 5 ('/dev/rredo12') size 100M;

SQL>alter database add logfile thread 2 group 6 ('/dev/rredo13') size 100M;

Open the database:

SQL>alter database open;

Add the undo tablespace for the second instance:

SQL> create undo tablespace undotbs2 datafile '/dev/rundotbs03' size 25000M;

SQL> alter system set undo_tablespace='UNDOTBS2' scope=both sid='priap2';

Enable the public thread for the new redo logfiles:

SQL> alter database enable public thread 2;

SQL>@catclust.sql   -- create the RAC views

Open the database on node 44 (controlfile paths changed from control* to scontrol*),
and set undo_tablespace to undotbs2 in that instance's spfile.

SQL>startup;

SQL>select instance_name ,status from gv$instance;

INSTANCE_NAME STATUS
———— ——
priap1 open
priap2 open

Start the listeners on rac1 and rac2: lsnrctl start

Migration complete.

———————————————————————————————————————————-

Next, we use 9i Data Guard to migrate another RAC to new storage (that RAC will serve as the DR peer of the one above).

The migration idea: use an RMAN copy to build a RAC-to-single-instance Data Guard standby, fail over to make the standby the primary, add its VG to HACMP, and bring the new database up on the original two RAC nodes (a rough sketch of the standby build follows).
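In 9i this kind of standby is typically assembled from an RMAN backup plus a standby controlfile; the outline below is only a sketch under that assumption, with placeholder paths and connect strings.

RMAN> connect target sys/***@priap
RMAN> backup database format '/backup/priap_%U';

SQL> alter database create standby controlfile as '/backup/stby.ctl';

-- on the single_inst side: restore the backup onto the new storage, then
SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;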

Migration steps:
1. Configure Data Guard between rac3/rac4 and single_inst

alter system set log_archive_dest_state_2=defer scope=both sid=’*’;
alter system set log_archive_dest_2=’service=priap_dg’ scope=both sid=’*’;
alter system set standby_file_management=AUTO scope=both sid=’*’;
alter system set fal_client=priap_dg scope=both sid=’*’;
alter system set fal_server=priap1,priap2 scope=both sid=’*’;

alter system set log_archive_dest_state_2=defer scope=both;
alter system set log_archive_dest_2=’service=priap’ scope=both;
alter system set standby_file_management=AUTO scope=both;
alter system set fal_client=’priap1,priap2′ scope=both;
alter system set fal_server=priap_dg scope=both;

TNS and listener configuration on the rac3, rac4 and single_inst nodes:

rac3 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.14.18.39)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = priap)
(INSTANCE_NAME = priap2)
)
)

rac4 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.14.18.34)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = priap)
(INSTANCE_NAME = priap1)
)
)

single_inst =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.14.18.77)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = priap)
)
)

single_inst listener.ora

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = zaxxrkback)(PORT = 1521))
)
)
)

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /oracle/app/oracle/product/920)
(PROGRAM = extproc)
)
(SID_DESC =
(ORACLE_HOME = /oracle/app/oracle/product/920)
(SID_NAME = priap)
)
)

rac3 listener.ora

LISTENER =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.14.18.39)(PORT = 1521))
)

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /oracle/app/oracle/product/920)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = priap)
(ORACLE_HOME = /oracle/app/oracle/product/920)
(SID_NAME = priap2)
)
)

rac4 listener.ora

LISTENER =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.14.18.34)(PORT = 1521))
)

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /oracle/app/oracle/product/920)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = priap)
(ORACLE_HOME = /oracle/app/oracle/product/920)
(SID_NAME = priap1)
)
)

2. Check data consistency on both sides: archive log list (run on both ends)

3. Fail over the database
(1). Check whether the archived logs are continuous or whether there is a gap.

Run on the standby:

SQL> select THREAD#,LOW_SEQUENCE#,HIGH_SEQUENCE# from v$archive_gap;

(2). If step (1) returns rows, run the following on the primary; otherwise skip this step.

On the primary, locate the archived logs corresponding to the rows returned in step (1):

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN XX AND XX;

-- If they exist on the primary, copy the corresponding archived logs to the standby and register them:

SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE 'xxx';

(3). Check that the archived logs are complete.

Run the following on both the primary and the standby:

SQL> select distinct thread#,max(sequence#) over(partition by thread#) a from v$archived_log;

Copy any missing archived logs to the standby server that is about to be converted, and register them manually.

4. Perform the failover

Check the standby process status:

SQL> select process,client_process,sequence#,status from v$managed_standby;

SQL> alter database recover managed standby database finish force ;

The FORCE keyword stops any active RFS processes so that the failover can proceed immediately.

SQL> alter database recover managed standby database finish skip standby logfile;

SQL> alter database commit to switchover to primary;

SQL> shutdown immediate

5. Attach the datavg from the single_inst node to rac3 and rac4, add it to HACMP, and start the database on rac3 using initpriap1.ora:

SQL> create spfile from pfile;
SQL> startup;

Start the database on the rac4 node as well:

SQL> startup;

6. Check the database role
SQL> select database_role from v$database;

The failover succeeded.

7. Attach the single_inst node to the storage that originally held the database on rac1/rac2, and remove oravg and oravg2 from the concurrent VGs.

8. Start the priap database (now converted from RAC back to a single instance).

9. Cutover complete.

We have now converted the single instance to RAC and migrated the other RAC to its new storage; what remains is disaster recovery between the two RACs. To avoid interfering with the independent applications on each side, we use OGG to replay individual tables, instantiating them with exp at a specified SCN before loading the trail files; the details are omitted here.

A separate document on converting a 9i single instance to 10g RAC will follow.

continuing ……

Full steps for a manual 9i to 10g upgrade

October 29, 2011 migration, oracle 3 comments

Hats off to a certain government bureau: the upgrade did not finish until 6 a.m., and a Data Guard standby still had to be built afterwards. Below is a record of the upgrade (9.2.0.7 to 10.2.0.4).

Migration steps:

1 Install the 10g database software first. Given the state of AIX at the time (the OS had been upgraded and the system reimported onto a P6 570), AIO had to be re-enabled, and since the database version was 9.2.0.7 an additional bos.cifs.** fileset had to be installed.

2 Copy the scripts from the 10g ORACLE_HOME/rdbms/admin:
cp $ORACLE_HOME/rdbms/admin/utlu102i.sql /tmp/utlu102i.sql
cp $ORACLE_HOME/rdbms/admin/utltzuv2.sql /tmp/utltzuv2.sql

3 Gather database information

SQL> spool Database_Info.log
SQL> @utlu102i.sql
SQL> spool off

Check each section of the output: Database, Logfiles, Tablespaces, Update Parameters, Deprecated Parameters, Obsolete Parameters, Components, Miscellaneous Warnings, SYSAUX Tablespace.

3 Check roles

SELECT grantee FROM dba_role_privs
WHERE granted_role = ‘CONNECT’ and
grantee NOT IN (
‘SYS’, ‘OUTLN’, ‘SYSTEM’, ‘CTXSYS’, ‘DBSNMP’,
‘LOGSTDBY_ADMINISTRATOR’, ‘ORDSYS’,
‘ORDPLUGINS’, ‘OEM_MONITOR’, ‘WKSYS’, ‘WKPROXY’,
‘WK_TEST’, ‘WKUSER’, ‘MDSYS’, ‘LBACSYS’, ‘DMSYS’,
‘WMSYS’, ‘OLAPDBA’, ‘OLAPSVR’, ‘OLAP_USER’,
‘OLAPSYS’, ‘EXFSYS’, ‘SYSMAN’, ‘MDDATA’,
‘SI_INFORMTN_SCHEMA’, ‘XDB’, ‘ODM’);

This is because the 9i CONNECT role carries the following privileges:

SELECT GRANTEE, PRIVILEGE FROM DBA_SYS_PRIVS
WHERE GRANTEE=’CONNECT’

GRANTEE PRIVILEGE
—————————— —————————
CONNECT CREATE VIEW
CONNECT CREATE TABLE
CONNECT ALTER SESSION
CONNECT CREATE CLUSTER
CONNECT CREATE SESSION
CONNECT CREATE SYNONYM
CONNECT CREATE SEQUENCE
CONNECT CREATE DATABASE LINK

In 10g, CONNECT has only CREATE SESSION; the remaining privileges must be granted manually (see the example below).
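So after the upgrade, any account that relied on the old CONNECT role needs the missing privileges granted explicitly; a sketch, with SCOTT as a placeholder grantee:

-- restore the 9i-era CONNECT privileges for an application user (or a custom role)
GRANT CREATE TABLE, CREATE VIEW, CREATE SYNONYM, CREATE SEQUENCE,
      CREATE CLUSTER, CREATE DATABASE LINK, ALTER SESSION TO scott;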

4 Script out the database links (skipped)

5 Check TIMESTAMP WITH TIME ZONE data

$ sqlplus ‘/as sysdba’

SQL> spool TimeZone_Info.log
SQL> @utltzuv2.sql
SQL> spool off

If timezone_info.log reports time zone data in an unsupported format, handle it as follows:

For example, user scott has a table tztab:

create table tztab(x number primary key, y timestamp with time zone);
insert into tztab values(1, timestamp ”);

Before upgrade, you can create a table tztab_back. Note column y here is defined as VARCHAR2 to preserve the original value.

create table tztab_back(x number primary key, y varchar2(256));
insert into tztab_back select x,
to_char(y, ‘YYYY-MM-DD HH24.MI.SSXFF TZR’) from tztab;
After upgrade, you need to update the data in the table tztab using the value in tztab_back.

update tztab t set t.y = (select to_timestamp_tz(t1.y,
‘YYYY-MM-DD HH24.MI.SSXFF TZR’) from tztab_back t1 where t.x=t1.x);

6 NLS_NCHAR_CHARACTERSET: skipped (this check is needed for 8i to 10g upgrades)

7 Gather statistics on the system schemas (9i has no dictionary statistics gathering)
sqlplus ‘/as sysdba’

SQL>spool gdict

SQL>grant analyze any to sys;

exec dbms_stats.gather_schema_stats(‘WMSYS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘MDSYS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘CTXSYS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘XDB’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘WKSYS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘LBACSYS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘OLAPSYS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘DMSYS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘ODM’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘ORDSYS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘ORDPLUGINS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘SI_INFORMTN_SCHEMA’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘OUTLN’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘DBSNMP’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘SYSTEM’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);
SQL>exec dbms_stats.gather_schema_stats(‘SYS’,options=>’GATHER’,
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => ‘FOR ALL COLUMNS SIZE AUTO’, cascade => TRUE);

SQL>spool off

8 Check the status of database objects

spool invalid_pre.lst
select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status from
dba_objects where status <> ‘VALID’;
spool off

Recompile the invalid objects:
$ sqlplus ‘/as sysdba’
SQL> @?/rdbms/admin/utlrp.sql

If you are upgrading from Oracle9iR2 (9.2), verify that the view dba_registry contains data. If the view is empty, run the following scripts from the 9.2 home:

$ sqlplus ‘/as sysdba’
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql

9 Check for corruption in the dictionary using the following commands in SQL*Plus connected as sys:

Set verify off
Set space 0
Set line 120
Set heading off
Set feedback off
Set pages 1000
Spool analyze.sql

Select ‘Analyze cluster “‘||cluster_name||'” validate structure cascade;’
from dba_clusters
where owner=’SYS’
union
Select ‘Analyze table “‘||table_name||'” validate structure cascade;’
from dba_tables
where owner=’SYS’ and partitioned=’NO’ and (iot_type=’IOT’ or iot_type is NULL)
union
Select ‘Analyze table “‘||table_name||'” validate structure cascade into invalid_rows;’
from dba_tables
where owner=’SYS’ and partitioned=’YES’;

spool off

This creates a script called analyze.sql.
Now execute the following steps.

$ sqlplus ‘/as sysdba’
SQL> @$ORACLE_HOME/rdbms/admin/utlvalid.sql
SQL> @analyze.sql

analyze.sql should not return any errors.

10 Make sure all materialized view refreshes are complete and that all refreshes are stopped

$ sqlplus ‘/ as sysdba’
SQL> select distinct(trunc(last_refresh)) from dba_snapshot_refresh_times;

11 Stop the listener

$ lsnrctl
LSNRCTL> stop

Make sure no datafile needs recovery:

$ sqlplus ‘/ as sysdba’
SQL> select * from v$recover_file;

12 Make sure no datafile is in backup mode

SQL> select * from v$backup where status!=’NOT ACTIVE’;

13 Check for pending distributed transactions

SQL> select * from dba_2pc_pending;

14 Disable all jobs

15 Make sure the default tablespace of SYS and SYSTEM is SYSTEM

SQL> select username, default_tablespace from dba_users
where username in (‘SYS’,’SYSTEM’);
To modify use:

SQL> alter user sys default tablespace SYSTEM;
SQL> alter user system default tablespace SYSTEM;

16 Check whether AUD$ is in use

SQL> select tablespace_name from dba_tables where table_name=’AUD$’;
If the AUD$ table exists and is in use, upgrade performance can be affected, depending on the number of records in the table.

Note.979942.1 Ext/Pub Database upgrade appears to have halted at SYS.AUD$ Table

In this upgrade, select count(*) from aud$ returned 0, so we moved straight on to the next step.

17 Note down where all control files are located.

SQL> select * from v$controlfile;

18 If table XDB.MIGR9202STATUS exists in the database, drop it before upgrading the database
(to avoid the issue described in Note:356082.1)

19 Shutdown the database.

$ sqlplus ‘/as sysdba’
SQL> shutdown immediate;

20 Back up the database

21 copy pfile from 9i $ORACLE_HOME to 10g $ORACLE_HOME

Make sure the PGA_AGGREGATE_TARGET initialization parameter is set to
at least 24 MB.

Ensure that the SHARED_POOL_SIZE and the LARGE_POOL_SIZE are at least 150Mb

Make sure the JAVA_POOL_SIZE initialization parameter is set to at least 150 MB.

AQ_TM_PROCESSES=0 and JOB_QUEUE_PROCESSES=0 for the duration of the upgrade

If you have defined an UNDO tablespace, set the parameter UNDO_MANAGEMENT=AUTO

Although startup upgrade on 10g disables these parameters automatically, we set them in advance anyway;
they can also be adjusted with reference to the Database_Info output gathered earlier.

22 Check that there is enough free space, including space for archived logs

23 Check that NLS_LANG matches the database NLS_CHARACTERSET

24 cp listener* tnsnames* to 10g $ORACLE_HOME

25 Create the password file
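A sketch of the orapwd call, run from the 10g home; the password value is a placeholder:

$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=change_me entries=5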

26 Update the oratab entry, to set the new ORACLE_HOME and disable automatic startup:

SID:ORACLE_HOME:N

27 Set the environment variables. Note: unset TNS_ADMIN, and ORA_NLS33 is replaced by ORA_NLS10.

export ORA_NLS10=/u01/finprod/finproddb/10.2/db_1/nls/data/9idata

/u01/finprod/finproddb/10.2/db_1/perl/bin/perl

oraprod(DATABASE)@nbrbwdev5:perl $ORACLE_HOME/nls/data/old/cr9idata.pl
Creating directory /u01/finprod/finproddb/10.2/db_1/nls/data/9idata …
Copying files to /u01/finprod/finproddb/10.2/db_1/nls/data/9idata…
Copy finished.

28 Create the SYSAUX tablespace (note: it must be locally managed with automatic segment space management, otherwise the upgrade scripts will fail later)
CREATE TABLESPACE sysaux DATAFILE 'xxxx sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

29 Start the upgrade
SQL> spool upgrade.log
SQL> @catupgrd.sql

30 Check the component status

SQL> @utlu102s.sql TEXT

SQL> select comp_name, status, version from dba_registry;

31 Restart the database:

SQL> shutdown immediate (DO NOT use "shutdown abort" !!!)
SQL> startup restrict

32 If Oracle Label Security policies are enabled

SQL> @olstrig.sql

This database does not use OLS, so this step was skipped.

33 Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.

SQL> @utlrp.sql

If there are still objects which are not valid after running the script run the following:

spool invalid_post.lst
Select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status
from
dba_objects where status <> 'VALID';
spool off
Now compare the invalid objects in the file 'invalid_post.lst' with the invalid objects in the file 'invalid_pre.lst' you created in step 8.
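One simple way to compare the two spool files, assuming both are in the current directory:

$ sort invalid_pre.lst > pre.sorted
$ sort invalid_post.lst > post.sorted
$ diff pre.sorted post.sorted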

NOTE: If you have upgraded from version 9.2 to version 10.2 and find that the following views are invalid, the views can be safely ignored (or dropped):

SYS.V_$KQRPD
SYS.V_$KQRSD
SYS.GV_$KQRPD
SYS.GV_$KQRSD

Invalid x_$ Objects After Upgrade [ID 361757.1]

ERROR IN CATUPGRD: ORA-00904 IN DBMS_SQLPA [ID 565600.1]

34 Open the database, start the listener, keep the RBO enabled, gather statistics with dbms_stats.gather_database_stats, adjust the COMPATIBLE parameter, and monitor database performance.

35 Change the database character set. Because the csscan analysis showed that converting to ZHS16GBK could leave some LOB data garbled, the csalter conversion approach was abandoned; since there is no superset/subset relationship either, a forced conversion was used:
SQL>STARTUP MOUNT;
SQL>ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL>ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL>ALTER SYSTEM SET AQ_TM_PROCESSES=0;
SQL>ALTER DATABASE OPEN;
SQL>ALTER DATABASE CHARACTER SET INTERNAL_USE zhs16gbk;
SQL>shutdown immediate;
SQL>startup


ORACLE – 9i to 10g database Upgrade using DBUA.

October 17, 2011 migration, oracle No comments

The purpose of this document is to show a complete 9i to 10g upgrade of an Oracle database on AIX 5L using DBUA (Database Upgrade Assistant).

3.2 TECHNICAL SOLUTION
10g upgrade from 9.2.0.6 to 10.2.0.3 for any database:
Check the init.ora parameter file in the 9i ORACLE_HOME/dbs.
Make sure there is no event="30084 trace name context forever, level 1" setting
defined in the pfile or spfile.
Log in to the new pSeries machine as the oraprod user.

Important : Issue unix command unset TNS_ADMIN

export ORACLE_HOME=/u01/finprod/finproddb/10.2/db_1
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:/usr/sbin:/usr/bin/
export PERL5LIB=$ORACLE_HOME/perl/lib/5.8.3:$ORACLE_HOME/perl/lib/site_perl/5.8.3
export LIBPATH=$ORACLE_HOME/lib:/usr/dt/lib:/usr/openwin/lib:$ORACLE_HOME/lib32:$ORACLE_HOME/ctx/lib
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/usr/dt/lib:/usr/openwin/lib:$ORACLE_HOME/ctx/lib
export ORA_NLS10=/u01/finprod/finproddb/10.2/db_1/nls/data/9idata
unset TNS_ADMIN

To make sure run the following to ascertain correct paths below.
echo $ORACLE_HOME
echo $PATH
echo $PERL5LIB
echo $LIBPATH
echo $LD_LIBRARY_PATH
echo $ORA_NLS10

First, create the $ORACLE_HOME/nls/data/9idata directory.

# DESCRIPTION
The cr9idata.pl script copies the .nlb files under $ORACLE_HOME/nls/data (including the old subdirectory)
into $ORACLE_HOME/nls/data/9idata for customers who need to revert back to 9i locale behavior.

Make sure the perl utility being used points to the 10g Oracle environment.

Make sure there is no reference to the old 9i environment in the $PATH or this command will
fail as the old version of perl is picked up.

export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:/usr/sbin:/usr/bin/

$ which perl
/u01/finprod/finproddb/10.2/db_1/perl/bin/perl
$
oraprod(DATABASE)@nbrbwdev5:perl $ORACLE_HOME/nls/data/old/cr9idata.pl
Creating directory /u01/finprod/finproddb/10.2/db_1/nls/data/9idata …
Copying files to /u01/finprod/finproddb/10.2/db_1/nls/data/9idata…
Copy finished.

Please reset environment variable ORA_NLS10 to /u01/finprod/finproddb/10.2/db_1/nls/data/9idata

export ORA_NLS10=/u01/finprod/finproddb/10.2/db_1/nls/data/9idata

oraprod(DATABASE)@nbrbwdev5:echo $ORA_NLS10
/u01/finprod/finproddb/10.2/db_1/nls/data/9idata

Now configure listener for 10g home

First unset TNS_ADMIN so that the old 9i environment is unset.

unset TNS_ADMIN

oraprod(DATABASE)@nbrbwdev5:cd $ORACLE_HOME/bin (10g ORACLE_HOME)

oraprod(DATABASE)@nbrbwdev5:export DISPLAY=10.254.101.25:0.0

oraprod(DATABASE)@nbrbwdev5:ls netca
netca
oraprod(DATABASE)@nbrbwdev5:./netca

Use the same listener name as on the old machine. FINPROD was used here rather than the default LISTENER.



Message on screen :

Oracle Net Services Configuration:
Configuring Listener:FINPROD
Default local naming configuration complete.
Listener configuration complete.
Oracle Net Listener Startup:
Running Listener Control:
/u01/finprod/finproddb/10.2/db_1/bin/lsnrctl start FINPROD
Listener Control complete.
Listener started successfully.
Oracle Net Services configuration successful. The exit code is 0

SHUTDOWN LISTENER

lsnrctl stop FINPROD ( Database name )

Now start the PRE-UPGRADE STEPS:

Important : START THE DATABASE in the 9i environment and not from the new 10g environment.

If you haven’t changed your .profile and are still using the 9i .profile , just log into a new session so that the old 9i environment is picked up.

oraprod(DATABASE)@nbrbwdev5:sqlplus "sys as sysdba"

SQL*Plus: Release 9.2.0.6.0 – Production on Thu Feb 21 11:57:55 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.

Enter password:
Connected to an idle instance.

SQL> startup;
ORACLE instance started.

Total System Global Area 2149552280 bytes
Fixed Size 745624 bytes
Variable Size 1879048192 bytes
Database Buffers 268435456 bytes
Redo Buffers 1323008 bytes
Database mounted.
Database opened.
SQL>

STEPS FOR UPGRADING THE DATABASE TO 10G RELEASE 2
=================================================

Preparing to Upgrade
——————–

In this section all the steps need to be performed to the previous version of Oracle.

Please note that the database must be running in normal mode in the old release- i.e 9i environment.

PRE UPGRADE DATABASE HEALTH CHECK:

Log in to the system as the owner of the new 10gR2 ORACLE_HOME and copy the following files from the 10gR2 ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the /tmp directory on your system:

cp $ORACLE_HOME/rdbms/admin/utlu102i.sql /tmp    # you have to be in the 10g environment to do this

Make a note of the new location of these files.

Change to the temporary directory that you copied files to in Step 1.

cd /tmp

Start SQL*Plus and connect to the database instance as a user with SYSDBA privileges.

Then run and spool the utlu102i.sql file.

sqlplus '/as sysdba'    # in the 9i environment

SQL> spool Database_Info.log
SQL> @utlu102i.sql
SQL> spool off

Then check the spool file and examine the output of the upgrade information tool. The sections which follow describe the output of the Upgrade Information Tool (utlu102i.sql).

Database:

This section displays global database information about the current database such
as the database name, release number, and compatibility level. A warning is displayed
if the COMPATIBLE initialization parameter needs to be adjusted before the database is
upgraded.

Logfiles:

This section displays a list of redo log files in the current database whose size is less than 4 MB. For each log file, the file name, group number, and recommended size is displayed.
New files of at least 4 MB (preferably 10 MB) need to be created in the current database. Any redo log files less than 4 MB must be dropped before the database is upgraded.
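A sketch for checking the current redo log sizes and replacing any group smaller than 4 MB; the file name, group numbers and 50M size are placeholders:

SQL> select group#, bytes/1024/1024 mb, status from v$log;
SQL> alter database add logfile group 4 ('/u02/oradata/finprod/redo04.log') size 50M;
SQL> -- switch logs until the small group is INACTIVE, then:
SQL> alter database drop logfile group 1;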

Tablespaces:

This section displays a list of tablespaces in the current database. For each tablespace,the tablespace name and minimum required size is displayed. In addition, a message is displayed if the tablespace is adequate for the upgrade.
If the tablespace does not have enough free space, then space must be added to the tablespace in the current database.
Tablespace adjustments need to be made before the database is upgraded.

Update Parameters:

This section displays a list of initialization parameters in the parameter file of the current database that must be adjusted before the database is upgraded. The adjustments need to be made to the parameter file after it is copied to the new Oracle Database 10g
release.

Deprecated Parameters:

This section displays a list of initialization parameters in the parameter file of the current database that are deprecated in the new Oracle Database 10g release.

Obsolete Parameters:

This section displays a list of initialization parameters in the parameter file of the
current database that are obsolete in the new Oracle Database 10g release. Obsolete
initialization parameters need to be removed from the parameter file before the database is upgraded.

Components:

This section displays a list of database components in the new Oracle Database 10g release that will be upgraded or installed when the current database is upgraded.

Miscellaneous Warnings:

This section provides warnings about specific situations that may require attention before
and/or after the upgrade.

SYSAUX Tablespace:

This section displays the minimum required size for the SYSAUX tablespace, which is required in Oracle Database 10g.
The SYSAUX tablespace must be created after the new Oracle Database 10g release is started and BEFORE the upgrade scripts are invoked.

However since we are using DBUA to upgrade, the assistant will prompt you for the creation of the SYSAUX tablespace so there is no need to create it manually.

2. CHECK FOR CONNECT ROLE PRIVILEGE CHANGES

The CONNECT role in 10g Release 2 has only create session privilege.

Therefore create a new role with the usual privileges which CONNECT role had in 9i and grant it to the users who are not generic database users.

CREATE ROLE CONNECT1;

Grant all privs usually associated with the CONNECT role in previous versions.

grant create view to connect1;
grant create table to connect1;
grant create cluster to connect1;
grant create session to connect1;
grant create synonym to connect1;
grant create sequence to connect1;
grant create database link to connect1;
grant alter session to connect1;

spool grant_connect1.sql

Select 'grant connect1 to ' || grantee || ';' from dba_role_privs
WHERE granted_role = 'CONNECT' and
grantee NOT IN (
'SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
'LOGSTDBY_ADMINISTRATOR', 'ORDSYS',
'ORDPLUGINS', 'OEM_MONITOR', 'WKSYS', 'WKPROXY',
'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS', 'DMSYS',
'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER',
'OLAPSYS', 'EXFSYS', 'SYSMAN', 'MDDATA',
'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');

spool off

Run @grant_connect1.sql and make sure any bb staff IDs are enclosed in double quotes.

Example: grant connect1 to "bb16872.1";

Scripts for the above already in place :

Just run create_connect1.sql and grant_connect1.sql from /u02/oradata/software

3. GATHER SCRIPT TO RECREATE DATABASE LINKS IN CASE OF DOWNGRADE:

During the upgrade to 10gR2, any passwords in database links will be encrypted. To downgrade back to the original release, all of the database links with encrypted passwords must be dropped prior to the downgrade.
Consequently, the database links will not exist in the downgraded database.
If you anticipate a requirement to be able to downgrade back to your original release, then save the information about affected database links from the SYS.LINK$ table, so that you can recreate the database links after the downgrade.

Following script can be used to construct the dblink.

Spool db_links.txt

Set head off

SELECT
'create '||DECODE(U.NAME,'PUBLIC','public ')||'database link '||CHR(10)||DECODE(U.NAME,'PUBLIC',Null, U.NAME||'.')|| L.NAME||chr(10) ||'connect to ' || L.USERID || ' identified by '''
||L.PASSWORD||''' using ''' || L.host || ''''
||chr(10)||';' TEXT FROM sys.link$ L, sys.user$ U
WHERE L.OWNER# = U.USER# ;

Spool off

Spool file above already in place in /u02/oradata/software

4. GATHER STATISTICS BEFORE UPGRADE:

To run this script, connect to the database AS SYSDBA using SQL*Plus.
spool gdict

grant analyze any to sys;

exec dbms_stats.gather_schema_stats('WMSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('MDSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('CTXSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('XDB',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('WKSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('LBACSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('OLAPSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('DMSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('ODM',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('ORDSYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('ORDPLUGINS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('SI_INFORMTN_SCHEMA',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('OUTLN',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('DBSNMP',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('SYSTEM',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

exec dbms_stats.gather_schema_stats('SYS',options=>'GATHER', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);

spool off

The above steps take 1.5 hours max.

Run /u02/oradata/software/analyze.sql

Get a count of all INVALID OBJECTS PRIOR TO UPGRADE:

spool invalid_pre.lst

select substr(owner,1,12) owner,
substr(object_name,1,30) object,
substr(object_type,1,30) type, status from
dba_objects where status <> 'VALID';

select count(*),object_type,owner from dba_objects where status <> 'VALID'
group by object_type, owner;
spool off;

LOGOFF FROM MACHINE.

LOGIN AGAIN AND INVOKE 10g ENVIRONMENT

UPGRADE STEPS

Make sure environment in 10g and all env variables defined –

export ORACLE_HOME=/u01/finprod/finproddb/10.2/db_1

$export PERL5LIB=/u01/finprod/finproddb/10.2/db_1/perl/lib/5.8.3:/u01/finprod/finproddb/10.2/db_1/perl/lib/site_perl/5.8.3
$export LD_LIBRARY_PATH=/u01/finprod/finproddb/10.2/db_1/lib:/usr/dt/lib:/usr/openwin/lib:/u01/finprod/finproddb/10.2/db_1/ctx/lib
$export LIBPATH=/u01/finprod/finproddb/10.2/db_1/lib:/usr/dt/lib:/usr/openwin/lib:/u01/finprod/finproddb/10.2/db_1/lib32:/u01/finprod/finproddb/10.2/db_1/ctx/lib
$export PATH=/u01/finprod/finproddb/10.2/db_1/Apache/perl/bin:/u01/finprod/finproddb/10.2/db_1/Apache/perl/bin:/usr/bin:/u01/finprod/finprodora/iAS/Apache/perl/bin:/u01/finprod/finproddb/10.2/db_1/bin:/usr/ccs/bin:/usr/sbin:/bin:/u01/finprod/finproddb/10.2/db_1/Apache/perl/bin:/usr/bin:/usr/bin:/u01/finprod/finproddb/10.2/db_1/bin:/usr/ccs/bin:/usr/sbin:/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/x11:/sbin:/u01/oraprod:/u01/oraprod/bin:/bin:/u01/oraprod/scripts:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/x11:/sbin:/u01/oraprod:/u01/oraprod/bin:/bin:/usr/local/bin:/u01/oracle/prod/scripts:/u01/local/bwa/prod/scripts:/u01/local/bwa/prod/c/bin:/u01/local/pvcs:/usr/ccs/bin:/usr/openwin/bin:.:/u01/local/bwa/prod/scripts:/u01/local/sas8:/u01/finprod/finproddb/10.2/db_1/OPatch
$export ORA_NLS10=/u01/finprod/finproddb/10.2/db_1/nls/data/9idata
export DISPLAY=10.254.101.25:0.0

unset TNS_ADMIN

unset ORACLE_BASE

echo all the env variables above and then proceed.

To avoid a local_listener error after launching DBUA since port 1521 is in use, copy the following files from the 9i ORACLE_HOME/network/admin/finprod_frmbwds03 to ORACLE_HOME/network/admin

Refer to note 431437.1 in Metalink; a synopsis follows.
Cause
Apparently if there are no sqlnet config files in the default location of the 9.2 ORACLE_HOME/network/admin directory then DBUA fails. The sqlnet config files are:
listener.ora
tnsnames.ora
sqlnet.ora
The TNS_ADMIN environment variable is not set in 10g. The "Installation Guide for Solaris Operating System (SPARC 64-Bit)" wants it to be unset during software installation/database upgrade. During DBUA's "Pre Upgrade" check, it runs the database from the 9.2 home. But in an Oracle E-Business Suite environment the sqlnet config files are not in the default location of $ORACLE_HOME/network/admin, therefore DBUA cannot resolve the local_listener.
Solution
1. Make sure that the sqlnet config files listener.ora, tnsnames.ora, sqlnet.ora are in the default location of $ORACLE_HOME/network/admin. This should be the 9.2 ORACLE_HOME.
2. If they are not there, find out where the TNS_ADMIN variable points to.
3. Copy the sqlnet config files from the TNS_ADMIN location to $ORACLE_HOME/network/admin. This should be the 9.2 ORACLE_HOME.
4. Rerun DBUA
Steps below :

cd /u01/finprod/finproddb/92/network/admin/finprod_frmbwds03

cp listener.ora /u01/finprod/finproddb/92/network/admin
cp tnsnames.ora /u01/finprod/finproddb/92/network/admin
cp sqlnet_ifile.ora /u01/finprod/finproddb/92/network/admin

cd /u01/finprod/finproddb/92/network/admin
$ ls -lrt
-rw-r--r-- 1 oraprod dba 1315 Apr 24 11:26 listener.ora
-rw-r--r-- 1 oraprod dba 2292 Apr 24 11:26 tnsnames.ora
-rw-r--r-- 1 oraprod dba 627 Apr 24 11:26 sqlnet_ifile.ora
$

Before upgrading to 10.2.0.3 apply the following patches using opatch to avoid the upgrade failing half way.

The upgrade will fail if the following PATCHES are not applied.

The patch numbers for AIX5L are

5755471
5892355
5871314

Download the patches and transfer to any directory.

Unzip the files and go to directory created on unzip

For example patch number 5755471

Go to /5755471 directory

opatch lsinventory -invPtrLoc $ORACLE_HOME/oraInst.loc

Make sure right inventory is picked up – the new 10g patched environment.

opatch apply -invPtrLoc $ORACLE_HOME/oraInst.loc

For two of the patches you will see a few relinking errors which can be ignored…

Error similar to this…

ld: 0711-783 WARNING: TOC overflow. TOC size: 67040 Maximum size: 65536
Extra instructions are being generated for each reference to a TOC symbol if the symbol is in the TOC overflow area.

Now proceed with UPGRADE of database.

Now launch DBUA from the 10g ORACLE_HOME/bin

cd $ORACLE_HOME/bin

export DISPLAY=10.254.101.25:0.0    # this is the IP address of your machine/laptop

Should be in the 10g HOME.

./dbua

Takes a long time to go from 11 percent to 13 percent.

From 32 percent the JServer JAVA Virtual Machine upgrade starts.
Post upgrade starts at 58 percent.
Takes a long time at 33,39,73 percent.

Takes about 7 hours at 82 percent, running the compilation job of all objects as part of the post-upgrade phase.
To find out how much has completed run this command:

SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;

The count should increase .
To find out how many objects remain to be compiled run this command:

SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);

The count should reduce.