
10gR2 Clusterware: Concise Installation Steps

1. Install the operating system (omitted)
Note: I am using OEL 5u5 throughout. It ships with Oracle's environment-setup package (oracle-validated), which makes installing the Oracle software much more convenient, so this release is recommended.

2. Configure a local yum repository

-- Mount the installation DVD on /media
mount -t iso9660 /dev/cdrom /media

-- Create a repo file with the following content (note: yum only reads files with a .repo extension)
vi /etc/yum.repos.d/oel5.repo
[oel5]
name = oel 5 DVD
baseurl = file:///media/Server
gpgcheck = 0
enabled = 1    -- 1 means this repo is enabled
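
With the repo in place, Oracle's prerequisite package can be pulled in directly. A minimal sketch, assuming the oracle-validated package on the OEL 5 media (it installs the required RPMs and configures kernel parameters):

# refresh metadata, then install the prerequisite package
yum clean all
yum -y install oracle-validated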

3. Set up SSH user equivalence (as the oracle user)

-- Run on node 1 and node 2 separately:
ssh-keygen -t rsa
ssh-keygen -t dsa

-- Run on node 1:
cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
ssh rac2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

In short, the public keys of both nodes are collected into one authorized_keys file so that the nodes can ssh to each other without a password; the RAC installer needs this to copy files between the nodes.
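
If ssh still prompts for a password afterwards, file permissions are a common culprit; a hedged check, run as oracle on both nodes:

# sshd refuses keys whose files are group/world accessible
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys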

-- Verify
Run on node 1:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

Run on node 2:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

Verify a few times until no password prompt appears anywhere. Be careful not to drop the trailing s in authorized_keys: I once struggled with this for a long time, and it turned out the public-key file name was missing the s.

4. Configure the shared disks

-- Add several disks and partition them
fdisk -l
fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde

If the new partitions would otherwise only be recognized after a reboot, run the partprobe command that ships with Linux; a single run makes the kernel pick up the new partition table without rebooting.
You can also run partprobe -s to inspect the result.

[root@rac1 ~]# fdisk /dev/sdc

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-12, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-12, default 12): 
Using default value 12

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@rac1 ~]# partprobe -s
/dev/sda: msdos partitions 1 2
/dev/sdb: msdos partitions 1
/dev/sdc: msdos partitions 1
/dev/sdd: msdos partitions 1
/dev/sde: msdos partitions 1
[root@rac1 ~]# 

-- Bind the raw devices
The binding method differs between 4.x and 5.x systems.

4.x:
# vi /etc/sysconfig/rawdevices 

/dev/raw/raw1 /dev/sdc1
/dev/raw/raw2 /dev/sdd1
/dev/raw/raw3 /dev/sde1

#  vi /etc/udev/permissions.d/50-udev.permissions 
# raw devices
ram*:root:disk:0660
#raw/*:root:disk:0660
raw/*:oracle:dba:0660
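
On 4.x, the bindings are applied through the rawdevices service (standard on RHEL 4 and derivatives):

# apply the bindings now and keep them across reboots
service rawdevices restart
chkconfig rawdevices on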

5.x:
# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add",KERNEL=="/dev/sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add",KERNEL=="/dev/sdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add",KERNEL=="/dev/sdd1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add",KERNEL=="/dev/sde1",RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="65",RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add",KERNEL=="raw*",OWNER="oracle",GROUP="oinstall",MODE="660"

Only the lines above need to be added; then restart udev as shown below. Once everything is configured correctly you will see the following:
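
On OEL 5, udev can be restarted with the start_udev utility that ships with the distribution; raw -qa then lists the current bindings:

# re-read the rules and re-create device nodes
start_udev
# query all current raw device bindings
raw -qa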

[root@rac1 ~]# ll /dev/raw
total 0
crw-rw---- 1 oracle oinstall 162,  1 Jan 15 22:54 raw1
crw-rw---- 1 oracle oinstall 162,  2 Jan 15 22:24 raw2
crw-rw---- 1 oracle oinstall 162,  3 Jan 15 22:56 raw3
crw-rw---- 1 oracle oinstall 162,  4 Jan 15 22:54 raw4

5. Add the environment variables

Node 1:
[oracle@rac1 ~]$vi .bash_profile
export ORACLE_SID=RAC1
export ORACLE_HOME=/u01/oracle/10.2.0/db_1
export ORA_CRS_HOME=/u01/oracle/10.2.0/crs_1
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$ORACLE_HOME/jdk/bin:$PATH

Node 2:
[oracle@rac2 ~]$ vi .bash_profile
export ORACLE_SID=RAC2
export ORACLE_HOME=/u01/oracle/10.2.0/db_1
export ORA_CRS_HOME=/u01/oracle/10.2.0/crs_1
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$ORACLE_HOME/jdk/bin:$PATH
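
To make the variables take effect in the current session and sanity-check them, something like:

# reload the profile and confirm the values
source ~/.bash_profile
echo $ORACLE_SID $ORA_CRS_HOME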

6. Install Clusterware

-- Run the pre-installation check first
cd ./clusterware/cluvfy
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

This prints a long report checking whether the nodes meet the Clusterware installation requirements. One package, compat-db, has to be installed separately, since oracle-validated does not pull it in (see the sketch below).
A few other compat package checks will also fail; those can be ignored, as only the versions differ.
Anything else that does not pass must be fixed, until everything except the items above shows as passed.
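
The missing package can be installed from the local repo configured in step 2 (compat-db is the OEL/RHEL 5 package name):

yum -y install compat-db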

-- Start the installation
cd ./clusterware
./runInstaller -ignoreSysPrereqs    (the flag's case can vary; the command name's cannot)

At the end of the OUI run you are asked to execute two scripts on both nodes, in this order: script 1 on RAC1 then RAC2, followed by script 2 on RAC1 then RAC2.
The first three executions go fine; at the fourth step, running root.sh on node 2, an error is raised:

[root@rac2 bin]# /u01/oracle/10.2.0/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/oracle/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

You need to wait patiently for several minutes (90 s + 600 s), after which the error shown above appears. It is caused by an Oracle bug in 10.2.0.1. The fix is to edit the vipca and srvctl files under $ORA_CRS_HOME/bin, adding unset LD_ASSUME_KERNEL at line 124 of vipca and line 168 of srvctl respectively (note: not at the end of the file, or the change may have no effect), save, and re-run root.sh on node 2.
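
For reference, the edited stanza in vipca should look roughly like this (exact line numbers vary by version; the point is to unset the variable right after the block that exports it, and the same idea applies to srvctl):

#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
    LD_ASSUME_KERNEL=2.4.19
    export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL    # <-- the added line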

[root@rac2 bin]# /u01/oracle/10.2.0/crs_1/root.sh
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
[root@rac2 bin]# ./crs_stat -t
CRS-0202: No resources are registered.

At this point no resources are registered because the VIPs have not been configured yet. Go to either node and run vipca (provided that node's vipca has already been patched). If it reports the following error:

[oracle@rac1 bin]$ vipca
Error 0(Native: listNetInterfaces:[3])
  [Error 0(Native: listNetInterfaces:[3])]

then the network interfaces need to be configured first:
[oracle@rac1 bin]$ ./oifcfg iflist
eth0  192.168.1.0
eth1  10.0.0.0
[oracle@rac1 bin]$ ./oifcfg getif
[oracle@rac1 bin]$ ./oifcfg setif -global eth0/192.168.1.0:public
[oracle@rac1 bin]$ ./oifcfg setif -global eth1/10.0.0.0:cluster_interconnect
[oracle@rac1 bin]$ ./oifcfg getif
eth0  192.168.1.0  global  public
eth1  10.0.0.0  global  cluster_interconnect

Note that you need permission to open a graphical display, and vipca must be run as root rather than as the oracle user, otherwise it fails with a privileges error (a sketch of launching it follows the output below):
[oracle@rac1 bin]$ vipca
Insufficient privileges.
Insufficient privileges.
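
A hedged sketch of launching vipca as root from an X session (the DISPLAY value depends on where your X server runs):

xhost +                      # run in the X session owner's shell so root may connect
su - root
export DISPLAY=:0.0          # adjust to your display
cd /u01/oracle/10.2.0/crs_1/bin
./vipca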

The vipca OUI window then appears and the VIP configuration starts; after you enter each node's VIP alias, the VIP addresses are filled in automatically (process omitted).
After vipca finishes, exit and run crs_stat again; you will find that all the resources are now registered with CRS:
[root@rac1 bin]# ./crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1        
ora.rac1.ons   application    ONLINE    ONLINE    rac1        
ora.rac1.vip   application    ONLINE    ONLINE    rac1        
ora.rac2.gsd   application    ONLINE    ONLINE    rac2        
ora.rac2.ons   application    ONLINE    ONLINE    rac2        
ora.rac2.vip   application    ONLINE    ONLINE    rac2

One last detail to note: there is also an srvctl under $ORACLE_HOME/bin, and it needs the same unset LD_ASSUME_KERNEL treatment, otherwise it reports the same shared-library error as the srvctl in $ORA_CRS_HOME/bin. If you would rather not edit it, there is another option: reorder ORA_CRS_HOME and ORACLE_HOME in the PATH variable. Commands are resolved from the first path entry to the last, so putting $ORA_CRS_HOME/bin first means the srvctl in $ORACLE_HOME/bin is never used:
export PATH=$ORA_CRS_HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/jdk/bin:$PATH
Either approach (the second is recommended) lets the oracle user run srvctl, crsctl and the other cluster commands without errors.
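
If you do choose to patch the file, a hedged GNU sed sketch that inserts the unset right after the export line (make a backup and inspect the result before relying on it):

# back up, then add "unset LD_ASSUME_KERNEL" after the line exporting it
cp $ORACLE_HOME/bin/srvctl $ORACLE_HOME/bin/srvctl.bak
sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' $ORACLE_HOME/bin/srvctl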
