I. Install VirtualBox on Windows 7 (omitted)
Open VirtualBox and, from the main menu, go to [File] >> [Preferences].
II. Create the server01 virtual machine in VirtualBox and install Oracle Enterprise Linux 5.8
1. Download the Oracle Enterprise Linux 5.8 installation media
2. Create the virtual machine server01: open VirtualBox, click New, and follow the steps shown in the screenshots below
3. Configure the virtual machine
4. Install Oracle Enterprise Linux 5 on the virtual machine
Screenshots are provided only for the settings that must be changed or that deserve attention.
A reboot is required after installation finishes. At this stage, disable the firewall and SELinux; for time synchronization you can use the NTP servers suggested by Red Hat, as shown below.
Finally, click OK.
After the reboot, the virtual host srv01.rac.ora is ready.
III. Configure server01
1. Install the Oracle Validated RPM package
[root@srv01 ~]# mount /dev/cdrom /media/
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@srv01 ~]# touch /etc/yum.repos.d/public-yum-el5.repo
[root@srv01 ~]# cat /etc/yum.repos.d/public-yum-el5.repo
[oel5]
name = Enterprise Linux 5.8 DVD
baseurl=file:///media/Server/
gpgcheck=0
enabled=1
[root@srv01 ~]# yum install oracle-validated
The oracle-validated package takes care of work that previously had to be done by hand:
creating the oracle user and the required groups
setting the kernel parameters
configuring the user limits in /etc/security/limits.conf
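A quick way to see what it actually configured (a sanity check, not required; the exact values depend on the oracle-validated release):

[root@srv01 ~]# id oracle
[root@srv01 ~]# sysctl kernel.shmmax kernel.sem fs.file-max
[root@srv01 ~]# grep oracle /etc/security/limits.conf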
2. Create the installation directory
[root@srv01 ~]# mkdir /u01
[root@srv01 ~]# chown oracle:oinstall /u01/
3. Edit the hosts file
[root@srv01 ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1    localhost.localdomain localhost
::1          localhost6.localdomain6 localhost6
192.168.1.91 srv01 srv01.rac.ora
192.168.1.93 srv01-vip
192.168.1.92 srv02 srv02.rac.ora
192.168.1.94 srv02-vip
172.16.1.91  srv01-priv
172.16.1.92  srv02-priv
Once configuration is complete, shut down server01 in preparation for cloning it into server02.
IV. Clone server02 from server01
After the copy finishes, start the server02 VM and adjust its network interfaces, hosts file, and default runlevel; a sketch of the typical per-node changes follows.
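The exact files depend on how the clone was made, but on EL5 the changes usually look like this. The IP addresses come from the hosts file above; the HWADDR values must be replaced with the MACs VirtualBox generated for the new VM:

[root@srv02 ~]# vi /etc/sysconfig/network                     # set HOSTNAME=srv02.rac.ora
[root@srv02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0  # IPADDR=192.168.1.92, new HWADDR
[root@srv02 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1  # IPADDR=172.16.1.92, new HWADDR
[root@srv02 ~]# vi /etc/inittab                               # id:3:initdefault: for a text-mode runlevel
[root@srv02 ~]# service network restart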
V. Add shared storage to srv01 and srv02
1. Shut down srv01 and srv02
2. On srv01, create a disk ocr.vdi to hold the OCR and voting disk files
3. Using the same method, create three more disks on srv01, dbshare{1,2,3}.vdi, to hold the database files
4. [File] >> [Virtual Media Manager]
[Right-click dbshare1.vdi] >> [Modify] and mark the disk as shareable
Then enable sharing on the remaining three disks in the same way.
5. Attach the four shared disks to srv02 (a command-line alternative is sketched below)
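If you prefer the command line to the GUI, steps 2 through 5 can also be done with VBoxManage. This is only a sketch: the controller name "SATA" and the port numbers are assumptions that must match your VMs' storage configuration, and shareable disks have to be fixed-size:

VBoxManage createhd --filename ocr.vdi --size 3072 --variant Fixed
VBoxManage modifyhd ocr.vdi --type shareable
VBoxManage storageattach server01 --storagectl "SATA" --port 1 --device 0 --type hdd --medium ocr.vdi
VBoxManage storageattach server02 --storagectl "SATA" --port 1 --device 0 --type hdd --medium ocr.vdi

Repeat for dbshare1.vdi, dbshare2.vdi, and dbshare3.vdi on ports 2 through 4.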
6. Configure raw devices on srv01 (in 10g, the disks used for the voting disk and the OCR must be raw devices)
1). Create two partitions on /dev/sdb on srv01
[root@srv01 rules.d]# fdisk -l

Disk /dev/sda: 107.3 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       13054   104751832+  8e  Linux LVM

Disk /dev/sdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/dm-0: 103.0 GB, 103045660672 bytes
255 heads, 63 sectors/track, 12527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 4194 MB, 4194304000 bytes
255 heads, 63 sectors/track, 509 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-1 doesn't contain a valid partition table

[root@srv01 rules.d]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-391, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-391, default 391): +1500M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (184-391, default 184):
Using default value 184
Last cylinder or +size or +sizeM or +sizeK (184-391, default 391): +1500M

Command (m for help): p

Disk /dev/sdb: 3221 MB, 3221225472 bytes
255 heads, 63 sectors/track, 391 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         183     1469916   83  Linux
/dev/sdb2             184         366     1469947+  83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

[root@srv01 rules.d]# partprobe
[root@srv01 rules.d]# ls -l /dev/sdb*
brw-r----- 1 root disk 8, 16 Jul 15 11:07 /dev/sdb
brw-r----- 1 root disk 8, 17 Jul 15 11:08 /dev/sdb1
brw-r----- 1 root disk 8, 18 Jul 15 11:08 /dev/sdb2
2). Edit /etc/udev/rules.d/60-raw.rules to configure the raw devices
Add the following to 60-raw.rules. The first two rules bind the partitions to raw devices; the last sets ownership and permissions on the raw nodes (OWNER, GROUP, and MODE are assignment keys, so they take =, not the == comparison operator):
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"ACTION=="add", KERNEL=="raw*", OWNER=="oracle", GROUP=="oinstall", MODE=="0660"
Restart udev:
[root@srv01 rules.d]# start_udev
Starting udev:                                             [  OK  ]
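To confirm the bindings before moving on (raw -qa queries all current raw device bindings):

[root@srv01 rules.d]# raw -qa
[root@srv01 rules.d]# ls -l /dev/raw/

Both raw1 and raw2 should show up, bound to major 8, minors 17 and 18 (/dev/sdb1 and /dev/sdb2), and owned by oracle:oinstall.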
7. On srv01, use udev to bind the shared data disks /dev/sdc, /dev/sdd, and /dev/sde to fixed device names. Each rule matches a disk by the unique identifier returned by scsi_id and renames it asm-diskc/d/e with oracle:oinstall ownership:
[root@srv01 rules.d]# for i in c d e ; do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id -g -u -s %p\", RESULT==\"`scsi_id -g -u -s /block/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"oracle\", GROUP=\"oinstall\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oralce-asmdevices.rules
done
[root@srv01 rules.d]# start_udev
Starting udev:                                             [  OK  ]
[root@srv01 rules.d]# ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 32 Jul 15 03:35 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 Jul 15 03:35 /dev/asm-diskd
brw-rw---- 1 oracle oinstall 8, 64 Jul 15 03:35 /dev/asm-diske
8. Bind the disk devices on srv02
Copy the two rules files from srv01 to srv02, then restart udev on srv02:
[root@srv01 rules.d]# scp 60-raw.rules 99-oralce-asmdevices.rules srv02:/etc/udev/rules.d/
The authenticity of host 'srv02 (192.168.1.92)' can't be established.
RSA key fingerprint is 6b:b5:de:2e:08:b0:e7:92:da:89:26:b2:ce:da:1e:c7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'srv02,192.168.1.92' (RSA) to the list of known hosts.
root@srv02's password:
60-raw.rules                                  100%  526     0.5KB/s   00:00
99-oralce-asmdevices.rules                    100%  543     0.5KB/s   00:00
[root@srv02 ~]# partprobe
[root@srv02 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@srv02 ~]# ls -l /dev/raw/raw*
crw-rw---- 1 oracle oinstall 162, 1 Jul 15 03:51 /dev/raw/raw1
crw-rw---- 1 oracle oinstall 162, 2 Jul 15 03:51 /dev/raw/raw2
[root@srv02 ~]# ls -l /dev/asm-disk*
brw-rw---- 1 oracle oinstall 8, 32 Jul 15 03:49 /dev/asm-diskc
brw-rw---- 1 oracle oinstall 8, 48 Jul 15 03:49 /dev/asm-diskd
brw-rw---- 1 oracle oinstall 8, 64 Jul 15 03:49 /dev/asm-diske
VI. Configure SSH user equivalence for the oracle user
1. Generate a key pair on node srv01
[root@srv01 ~]# su - oracle
[oracle@srv01 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
c9:bb:c9:a5:33:1f:a0:42:63:e5:a1:ea:dc:d7:81:50 oracle@srv01.rac.ora
2. Generate a key pair on node srv02
[oracle@srv02 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
50:2b:ae:72:80:1f:50:83:72:5a:7a:87:80:9d:3e:44 oracle@srv02.rac.ora
3. Append the public keys of both srv01 and srv02 to the authorized_keys file on srv01, fix its permissions, then copy authorized_keys to the oracle user's ~/.ssh/ directory on srv02
[oracle@srv01 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@srv01 ~]$ ssh srv02 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@srv01 ~]$ chmod 644 ~/.ssh/authorized_keys
[oracle@srv01 ~]$ ls -l ~/.ssh/authorized_keys
-rw-r--r-- 1 oracle oinstall 1220 Jul 15 05:07 /home/oracle/.ssh/authorized_keys
[oracle@srv01 ~]$ scp ~/.ssh/authorized_keys srv02:~/.ssh/
4. Verify user equivalence
[oracle@srv01 ~]$ ssh srv02 date
[oracle@srv01 ~]$ ssh srv01 date
[oracle@srv02 ~]$ ssh srv01 date
[oracle@srv02 ~]$ ssh srv02 date

All four commands must be executed once, so that every host key is accepted and cached; otherwise the clusterware installation will later fail its user-equivalence check.
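Optionally, a loop run as oracle on each node also caches the host keys for the other name variants the installer may try (the extra names are taken from the hosts file above):

[oracle@srv01 ~]$ for h in srv01 srv02 srv01-priv srv02-priv srv01.rac.ora srv02.rac.ora; do ssh $h date; done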
VII. Configure a graphical service (VNC server)
1. Install the package
[root@srv01 ~]# yum install vnc-server
2. Start the VNC service
[oracle@srv01 ~]$ vncserver :1

You will require a password to access your desktops.

Password:
Verify:

xauth:  creating new authority file /oracle/.Xauthority

New 'srv01.rac.ora:1 (oracle)' desktop is srv01.rac.ora:1

Creating default startup script /oracle/.vnc/xstartup
Starting applications specified in /oracle/.vnc/xstartup
Log file is /oracle/.vnc/srv01.rac.ora:1.log
3. Modify the VNC desktop configuration
After the first start, edit ~/.vnc/xstartup.
Comment out the last line:
#twm&
and add a new line:
gnome-session &
Save and exit.
This replaces the default twm desktop with gnome-session. For the GNOME desktop to display properly, make sure the following packages are installed (see the yum command after this list):
gnome-session
gnome-themes
gnome-terminal
dbus-x11
xclock
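All of them can be pulled in with a single yum call (assuming the DVD repository configured earlier provides them; on EL5, xclock actually ships in the xorg-x11-apps package, so substitute that name if yum cannot find xclock):

[root@srv01 ~]# yum install gnome-session gnome-themes gnome-terminal dbus-x11 xclock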
4. Restart the VNC service
[oracle@srv01 ~]$ vncserver -kill :1
Killing Xvnc process ID 14535
[oracle@srv01 ~]$ vncserver :1
New 'srv01.rac.ora:1 (oracle)' desktop is srv01.rac.ora:1
Starting applications specified in /home/oracle/.vnc/xstartup
Log file is /home/oracle/.vnc/srv01.rac.ora:1.log
5. Test with a VNC client
After entering the password set earlier, you should land on the Linux desktop.
VIII. Install Oracle Clusterware
1. Unpack the installation package
[oracle@srv01 ~]$ cd package/
[oracle@srv01 package]$ zcat 10201_clusterware_linux_x86_64.cpio.gz | cpio -idvm
2. Run runInstaller from a terminal inside the graphical session to start the installation
[oracle@srv01 clusterware]$ ./runInstaller
********************************************************************************
Please run the script rootpre.sh as root on all machines/nodes. The script can
be found at the toplevel of the CD or stage-area. Once you have run the script,
please type Y to proceed

Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle
Clusterware installation.
Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.
********************************************************************************
Has 'rootpre.sh' been run by root? [y/n] (n)
y
# Note: the script is located at /your/path/package/clusterware/rootpre/rootpre.sh
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
Failed <<<
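The version check fails because the 10.2.0.1 installer predates EL5 and does not recognize redhat-5. A common workaround, assumed here since the original screenshots are omitted, is to skip the OS prerequisite check (editing install/oraparam.ini to add redhat-5 achieves the same):

[oracle@srv01 clusterware]$ ./runInstaller -ignoreSysPrereqs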
When the installer prompts for them, run the orainstRoot.sh and root.sh scripts below as root, first on srv01 and only then on srv02; never run them on both nodes at the same time.
[root@srv01 ~]# /u01/oraInventory/orainstRoot.sh
Changing permissions of /u01/oraInventory to 770.
Changing groupname of /u01/oraInventory to oinstall.
The execution of the script is complete
[root@srv01 ~]# /u01/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/u01/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: srv01 srv01-priv srv01
node 2: srv02 srv02-priv srv02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        srv01
CSS is inactive on these nodes.
        srv02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@srv02 ~]# /u01/oraInventory/orainstRoot.sh
Changing permissions of /u01/oraInventory to 770.
Changing groupname of /u01/oraInventory to oinstall.
The execution of the script is complete
[root@srv02 ~]# /u01/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/u01/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: srv01 srv01-priv srv01
node 2: srv02 srv02-priv srv02
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        srv01
        srv02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/oracle/product/10.2.0/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
# This error can be ignored for now; it is dealt with later.
When the VIPCA window appears, click Cancel for now; the VIPs are configured later.
At this point the Clusterware installation is complete.
[root@srv01 ~]# /u01/oracle/product/10.2.0/crs/bin/crsctl check css
CSS appears healthy
[root@srv01 ~]# /u01/oracle/product/10.2.0/crs/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[root@srv01 ~]# /u01/oracle/product/10.2.0/crs/bin/crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.1.0]
IX. Upgrade Clusterware
1. Stop the CRS stack
[root@srv01 ~]# /u01/oracle/product/10.2.0/crs/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@srv02 ~]# /u01/oracle/product/10.2.0/crs/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
2. Unpack the patch set and run runInstaller
[oracle@srv01 package]$ unzip p8202632_10205_Linux-x86-64.zip
[oracle@srv01 package]$ cd Disk1/
[oracle@srv01 Disk1]$ ./runInstaller
When the installation finishes, follow the installer's prompt and run the root102.sh script (on both nodes):
[root@srv01 ~]# /u01/oracle/product/10.2.0/crs/bin/crsctl stop crs
[root@srv01 ~]# /u01/oracle/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/oracle/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: srv01 srv01-priv srv01
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/oracle/product/10.2.0/crs/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/oracle/product/10.2.0/crs/install/paramfile.crs
[root@srv02 ~]# /u01/oracle/product/10.2.0/crs/bin/crsctl stop crs
Stopping resources.
Error while stopping resources. Possible cause: CRSD is down.
Stopping CSSD.
Unable to communicate with the CSS daemon.
[root@srv02 ~]# /u01/oracle/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/oracle/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/oracle/product' is not owned by root
WARNING: directory '/u01/oracle' is not owned by root
WARNING: directory '/u01' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10205 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully deleted 1 values from OCR.
Successfully deleted 1 keys from OCR.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: srv02 srv02-priv srv02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
Creating '/u01/oracle/product/10.2.0/crs/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u01/oracle/product/10.2.0/crs/install/paramfile.crs
Add the following to /etc/profile on both nodes so the CRS commands are on the PATH:
export ORA_CRS_HOME=/u01/oracle/product/10.2.0/crs
export PATH=$PATH:$ORA_CRS_HOME/bin
[root@srv02 ~]# crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.5.0]
X. Configure the VIPs
1. Log into a graphical session as root (vipca must be run as root)
[root@srv01 ~]# vncserver :2

New 'srv01.rac.ora:2 (root)' desktop is srv01.rac.ora:2

Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/srv01.rac.ora:2.log
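From this root desktop, vipca is run to create the nodeapps (VIP, GSD, ONS); the crs_stat output below shows the result. A sketch of the step, including the widely documented workaround in case the libpthread error seen during root.sh reappears:

[root@srv01 ~]# /u01/oracle/product/10.2.0/crs/bin/vipca

If java again fails to load libpthread.so.0, edit $ORA_CRS_HOME/bin/vipca and $ORA_CRS_HOME/bin/srvctl and comment out the lines that set LD_ASSUME_KERNEL (a known issue with 10.2.0.1 on 2.6 kernels), then rerun vipca and enter the VIP addresses from /etc/hosts.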
[root@srv01 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.srv01.gsd  application    ONLINE    ONLINE    srv01
ora.srv01.ons  application    ONLINE    ONLINE    srv01
ora.srv01.vip  application    ONLINE    ONLINE    srv01
ora.srv02.gsd  application    ONLINE    ONLINE    srv02
ora.srv02.ons  application    ONLINE    ONLINE    srv02
ora.srv02.vip  application    ONLINE    ONLINE    srv02
[root@srv01 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:5A:EA:51
          inet addr:192.168.1.91  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2538614 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1791886 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2477088198 (2.3 GiB)  TX bytes:1834687012 (1.7 GiB)

eth0:1    Link encap:Ethernet  HWaddr 08:00:27:5A:EA:51
          inet addr:192.168.1.93  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          # This is the VIP on srv01

eth1      Link encap:Ethernet  HWaddr 08:00:27:FE:D3:12
          inet addr:172.16.1.91  Bcast:172.16.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:38554 errors:0 dropped:0 overruns:0 frame:0
          TX packets:41485 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8917328 (8.5 MiB)  TX bytes:13812311 (13.1 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:37191 errors:0 dropped:0 overruns:0 frame:0
          TX packets:37191 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4822010 (4.5 MiB)  TX bytes:4822010 (4.5 MiB)
The CRS upgrade is now complete.
XI. Install the Oracle Database software
1. Log into the graphical session as oracle, unpack the installation package, and run runInstaller
Run the root.sh script on both nodes.
2. Upgrade the Oracle Database software to 10.2.0.5; the patch set is the same one used for the CRS upgrade.
Add these environment variables to oracle's .bash_profile on srv01:
export ORACLE_BASE=/u01
export ORACLE_HOME=$ORACLE_BASE/oracle/product/10.2.0/db_1
export ORACLE_SID=seaox1
export PATH=$PATH:$ORACLE_HOME/bin

Add these environment variables to oracle's .bash_profile on srv02:
export ORACLE_BASE=/u01
export ORACLE_HOME=$ORACLE_BASE/oracle/product/10.2.0/db_1
export ORACLE_SID=seaox2
export PATH=$PATH:$ORACLE_HOME/bin

Then log into the graphical session as oracle and create the database with dbca.
XII. Configure the hangcheck-timer module
1. Locate the module
[root@srv01 ~]# find /lib -name "hangcheck-timer.ko"
/lib/modules/2.6.32-300.10.1.el5uek/kernel/drivers/char/hangcheck-timer.ko
/lib/modules/2.6.18-308.el5/kernel/drivers/char/hangcheck-timer.ko
2. Load the module automatically at boot
[root@srv01 ~]# echo "modprobe hangcheck-timer" >> /etc/rc.local
3. Configure the hangcheck-timer parameters
echo "options hangcheck-timer hangcheck_tick=10 hangcheck_margin=30 hangcheck_reboot=1" >> /etc/modprobe.conf