Sunday, September 12, 2010

Oracle Real Application Cluster 10g

Introduction

Oracle Real Application Clusters (RAC) is a major evolution of the database management system. It is an extension of the Oracle single-instance database: RAC is essentially a cluster of instances working on the same database. As you know, an instance is nothing but memory structures and a set of background processes, so in RAC we have multiple such instances installed and configured on different nodes, and a single database (set of datafiles) accessed by all of them. This post explains the technical details of the RAC architecture and then walks through a RAC installation.

What is Oracle Real Application Cluster 10g?
Software Architecture
A RAC is a clustered database. A cluster is a group of independent servers that cooperate as a single system. In the event of a system failure, clustering ensures high availability for users, so access to mission-critical data is not lost. Redundant hardware components, such as additional nodes, interconnects and disks, allow the cluster to provide this high availability; such a redundant hardware architecture avoids a single point of failure.


The figure above shows the RAC architecture. In RAC, each instance runs on a separate server, and all of them access a database made up of multiple disks. For RAC to act as a single database, each instance must be part of a cluster; to external users, all the instances (nodes) that belong to the cluster appear as a single database.
For each instance to be part of the cluster, cluster software must be installed and every instance must register with it. From Oracle Database 10g onwards, Oracle provides its own clusterware, a software layer installed on the nodes that form the cluster. The advantage of Oracle Clusterware is that customers don't have to purchase any third-party clusterware; it is also integrated with the Oracle Universal Installer (OUI) for easy installation. When a node in an Oracle cluster is started, all instances, listeners and services are started automatically. If an instance fails, the clusterware automatically restarts it, so the service is often restored before the administrator even notices it was down.
Network Architecture
Each RAC node needs at least one static IP address for the public network (used by applications) and one static IP address for the private cluster interconnect. In addition, each node needs one virtual IP address (VIP).
The private network is a critical component of a RAC cluster. It should be used only by Oracle, to carry Cluster Manager and Cache Fusion (explained later) inter-node traffic. A RAC database does not strictly require a separate private network, but using the public network for the interconnect can degrade database performance (high latency, low bandwidth). The private network should therefore use high-speed NICs (preferably one gigabit or more) and be dedicated to Oracle.
Virtual IPs are required for failover, known as Transparent Application Failover (TAF). Processes external to the Oracle 10g RAC cluster control TAF, which means the failover types and methods can be unique for each Oracle Net client. For failover to work, client connections must be made through the VIPs.
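As an illustration of how a client uses the VIPs for TAF, a client-side tnsnames.ora entry might look like the sketch below. The alias RACDB, the hosts node1-vip and node2-vip, and the service name racdb are hypothetical placeholders, not values from this cluster; the FAILOVER_MODE settings are just one common choice.

RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
    )
  )

With TYPE = SELECT, in-flight queries can resume on the surviving instance; METHOD = BASIC means the failover connection is established only when it is actually needed.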
Hardware Architecture
Both Oracle Clusterware and Oracle RAC require access to disks that are shared by every node in the cluster. The shared disks must be configured using OCFS (release 1 or 2), raw devices, or a third-party cluster file system such as GPFS or Veritas.
OCFS2 is a general-purpose cluster file system that can be used to store Oracle Clusterware files, Oracle RAC database files, Oracle software, or any other type of file normally stored on a standard file system such as ext3. This is a significant change from OCFS Release 1, which only supported Oracle Clusterware files and Oracle RAC database files. Note that ASM cannot be used to store the Oracle Clusterware files, since the clusterware is installed before ASM and has to be started before ASM can start.
OCFS2 is available free of charge from Oracle as a set of three RPMs: a kernel module, support tools, and a console. There are different kernel module RPMs for each supported Linux kernel.
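As a rough sketch of the installation (the exact RPM file names and versions depend on your kernel, so treat the names below as placeholders), the three packages are installed as root with rpm:

# rpm -Uvh ocfs2-<kernel-version>-<ocfs2-version>.rpm \
           ocfs2-tools-<version>.rpm \
           ocfs2console-<version>.rpm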
Installing RAC 10g
Installing RAC is a five-step process, as given below.
1) Complete pre-installation tasks
Hardware requirements
Software requirements
Environment configuration, kernel parameters and so on
2) Perform CRS installation
3) Perform Oracle Database 10g installation
4) Perform cluster database creation
5) Complete post-installation tasks
Pre-Installation Task
Check System Requirement
- At least 512MB of RAM
Run the command below to check:
# grep MemTotal /proc/meminfo
- At least 1GB of swap space
Run the command below to check:
# grep SwapTotal /proc/meminfo
- At least 400MB of free space in /tmp
Run the command below to check:
# df -h /tmp
Check Software Requirement
- package Requirements
For installing RAC, the packages required on Red Hat Enterprise Linux 3 are:
gcc-3.2.3-2
compat-db-4.0.14.5
compat-gcc-7.3-2.96.122
compat-gcc-c++-7.3-2.96.122
compat-libstdc++-7.3-2.96.122
compat-libstdc++-devel-7.3-2.96.122
openmotif21-2.1.30-8
setarch-1.3-1
You can verify whether these packages (or higher versions) are present using the following command:
# rpm -q <package_name>
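To check all of them in one pass, a small shell loop like the one below can be used; adjust the package names to match your platform.

for pkg in gcc compat-db compat-gcc compat-gcc-c++ \
           compat-libstdc++ compat-libstdc++-devel openmotif21 setarch
do
    rpm -q "$pkg" || echo "$pkg is NOT installed"
done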
- Create Groups and Users
You can create the Unix groups and user using the groupadd and useradd commands. We need one oracle user and two groups: "oinstall" as the primary group and "dba" as the secondary group.
# groupadd -g 500 oinstall
# groupadd -g 501 dba
# useradd -u 500 -d /home/oracle -g oinstall -G dba -m -s /bin/bash oracle
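Set a password for the new account and verify the group membership; the id output should show oinstall as the primary group and dba as a secondary group, similar to the line shown (which assumes the IDs used above):

# passwd oracle
# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba)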

Configure Kernel Parameters
- Make sure that following parameters are set in /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 658576
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 1048536
net.core.wmem_default = 262144
net.core.wmem_max = 1048536

To load the new settings, run /sbin/sysctl -p
These are the minimum required values; you can use higher values if your server configuration allows.
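To double-check that the new values are active in the running kernel, you can query them directly, for example:

# /sbin/sysctl kernel.shmmax kernel.sem fs.file-max
kernel.shmmax = 536870912
kernel.sem = 250 32000 100 128
fs.file-max = 658576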
Setting the system environment
- Set the user Shell limits
cat >> /etc/security/limits.conf << EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

cat >> /etc/pam.d/login << EOF
session required /lib/security/pam_limits.so
EOF

cat >> /etc/profile << EOF
if [ \$USER = "oracle" ]; then
if [ \$SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF

cat >> /etc/csh.login << EOF
if ( \$USER == “oracle” ) then
limit maxproc 16384
limit descriptors 65536
umask 022
endif
EOF

- Configure the Hangcheck Timer
The hangcheck-timer module monitors the Linux kernel for extended operating system hangs that could affect the reliability of a RAC node and cause database corruption. If a hang occurs, the module reboots the node.
You can check whether the hangcheck-timer module is loaded by running the lsmod command as the root user:
# /sbin/lsmod | grep -i hang
If the module is not loaded, you can load it manually using the command below, and append the same line to /etc/rc.d/rc.local (the cat command that follows) so the module is loaded at every boot.
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
cat >> /etc/rc.d/rc.local << EOF
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
EOF
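To confirm the module is loaded and picked up the intended tick and margin values, you can check lsmod and the system log (the exact wording of the log message may vary with the module version):

# /sbin/lsmod | grep -i hangcheck
# grep -i hangcheck /var/log/messages | tail -2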

- Configuring /etc/hosts
/etc/hosts contains the hostnames and IP addresses of the servers.
You will need three hostnames for each node in the cluster: a public hostname for the primary interface, a private hostname for the cluster interconnect, and a virtual hostname (VIP) for high availability.
For Node 1
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
140.87.222.169 ocvmrh2045.us.oracle.com ocvmrh2045 #node1 public
140.87.241.194 ocvmrh2045-nfs.us.oracle.com ocvmrh2045-nfs ocvmrh2045-a #node1 nfs
152.68.143.111 ocvmrh2045-priv.us.oracle.com ocvmrh2045-priv #node1 private
152.68.143.112 ocvmrh2053-priv.us.oracle.com ocvmrh2053-priv #node2 private
140.87.222.220 ocvmrh2051.us.oracle.com ocvmrh2051 # Node1 vip
140.87.222.225 ocvmrh2056.us.oracle.com ocvmrh2056 # Node2 vip

For Node 2
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
140.87.222.222 ocvmrh2053.us.oracle.com ocvmrh2053 # Node2 Public
140.87.241.234 ocvmrh2053-nfs.us.oracle.com ocvmrh2053-nfs ocvmrh2053-a # Node2 nfs
152.68.143.111 ocvmrh2045-priv.us.oracle.com ocvmrh2045-priv # Node1 Private
152.68.143.112 ocvmrh2053-priv.us.oracle.com ocvmrh2053-priv # Node2 Private
140.87.222.220 ocvmrh2051.us.oracle.com ocvmrh2051 # Node1 vip
140.87.222.225 ocvmrh2056.us.oracle.com ocvmrh2056 # Node2 vip

- Creating database directories
The following directories must be created, with write permission for the oracle user:
Oracle Base Directories
Oracle Inventory Directory
CRS Home Directory
Oracle Home Directory
In our case the directories are:
Oracle Base Directories – /u01/app/
Oracle Inventory Directory – /u01/app/oraInventory
CRS Home Directory – /u01/app/oracle/product/10.2.0/crs
Oracle Home Directory – /u01/app/oracle/product/10.2.0/db
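If these directories do not already exist, they can be created as root with something like the commands below, using the paths from this example:

# mkdir -p /u01/app/oraInventory
# mkdir -p /u01/app/oracle/product/10.2.0/crs
# mkdir -p /u01/app/oracle/product/10.2.0/db
# chown -R oracle:oinstall /u01/app
# chmod -R 775 /u01/app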
Configure SSH for User Equivalence
The OUI detects whether the machine on which you are installing RAC is part of a cluster. If it is, you must select the other nodes that are part of the cluster and on which you want to install the software. When the OUI pushes the installation from the first node to the other nodes, it needs login credentials, and we want to avoid being prompted for a password in the middle of the installation. For this purpose we need user equivalence, which can be achieved using SSH. First you have to configure SSH.
Log on as the "oracle" UNIX user account:
# su - oracle
If necessary, create the .ssh directory in the “oracle” user’s home directory and set the correct permissions on it:
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh

Enter the following command to generate an RSA key pair (public and private key) for version 2 of the SSH protocol:
$ /usr/local/git/bin/ssh-keygen -t rsa
Enter the following command to generate a DSA key pair (public and private key) for version 2 of the SSH protocol:
$ /usr/local/git/bin/ssh-keygen -t dsa
Repeat the above steps on all Oracle RAC nodes in the cluster.
Create the authorized key file:
$ touch ~/.ssh/authorized_keys
$ cd ~/.ssh
bash-3.00$ ls -lrt *.pub
-rw-r--r-- 1 oracle oinstall 399 Nov 20 11:51 id_rsa.pub
-rw-r--r-- 1 oracle oinstall 607 Nov 20 11:51 id_dsa.pub

The listing above should show the id_rsa.pub and id_dsa.pub public keys created in the previous step.
Now use SSH to append the contents of ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub from all Oracle RAC nodes in the cluster to the authorized key file just created (~/.ssh/authorized_keys).
Here node 1 is ocvmrh2045 and node 2 is ocvmrh2053
ssh ocvmrh2045 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh ocvmrh2045 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh ocvmrh2053 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh ocvmrh2053 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Now that this file contains the public keys of both nodes, we should copy it to all the RAC nodes. We don't have to repeat the key collection on every node; just copying the file to the other nodes will do.
scp ~/.ssh/authorized_keys ocvmrh2053:.ssh/authorized_keys
Set permissions on the authorized key file:
chmod 600 ~/.ssh/authorized_keys
Establish User Equivalence
Once SSH is configured we can go ahead with configuring user equivalence.
su – oracle
exec /usr/local/git/bin/ssh-agent $SHELL
$ /usr/local/git/bin/ssh-add

Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)

- Test Connectivity
Run the commands below; they should not prompt for a password. They may ask for confirmation the first time (to accept the host key), but after that they should run without prompting for a password.
ssh ocvmrh2045 “date;hostname”
ssh ocvmrh2053 “date;hostname”
ssh ocvmrh2045-priv “date;hostname”
ssh ocvmrh2053-priv “date;hostname”

Partitioning the disk
In order to use OCFS2, you first need to partition the unused disk. You can use "/sbin/sfdisk -s" as the root user to check the existing partitions. We will create a single partition to be used by OCFS2. As the root user, run the command below.
# fdisk /dev/sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won’t be recoverable.
The number of cylinders for this disk is set to 1305.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1305, default 1305):
Using default value 1305
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
You can verify the new partition now as follows:
# fdisk -l /dev/sdc
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdc1 1 1305 10482381 83 Linux

When finished partitioning, run the 'partprobe' command as root on each of the remaining cluster nodes to make sure the new partition is visible there.
Configure OCFS2
We will be using OCFS2 in this installation. OCFS is a cluster file system solution provided by Oracle, specifically meant for RAC installations. Once the disk is configured for OCFS2, we can use it for the clusterware files (the OCR, Oracle Cluster Registry, and the voting disk) as well as for the database files.
# ocfs2console

Select Cluster -> Configure Nodes. Click Add in the next window, and enter the Name and IP Address of each node in the cluster.
Note: use the same node name as returned by the 'hostname' command.

Apply, and Close the window.
After exiting the ocfs2console, you will have a /etc/ocfs2/cluster.conf similar to the following on all nodes. This OCFS2 configuration file should be exactly the same on all of the nodes:
node:
ip_port = 7777
ip_address = 140.87.222.169
number = 0
name = ocvmrh2045
cluster = ocfs2

node:
ip_port = 7777
ip_address = 140.87.222.222
number = 1
name = ocvmrh2053
cluster = ocfs2

cluster:
node_count = 2
name = ocfs2

Configure O2CB to Start on Boot and Adjust the O2CB Heartbeat Threshold
You now need to configure the on-boot properties of the O2CB driver so that the cluster stack services start on each boot. You will also adjust the OCFS2 heartbeat threshold from its default value of 7 to 601. All the tasks in this section must be performed on both nodes in the cluster as the root user.
Set the on-boot properties as follows:
# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure

Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on boot. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster to start on boot (Enter “none” to clear) [ocfs2]: ocfs2
Specify heartbeat dead threshold (>=7) [7]: 601
Writing O2CB configuration: OK
Loading module “configfs”: OK
Mounting configfs filesystem at /config: OK
Loading module “ocfs2_nodemanager”: OK
Loading module “ocfs2_dlm”: OK
Loading module “ocfs2_dlmfs”: OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

We can now check that the settings took effect for the o2cb cluster stack.
Verify that ocfs2 and o2cb are enabled to start on boot. Check this on both nodes, as the root user:
# chkconfig --list | egrep "ocfs2|o2cb"
ocfs2 0:off 1:off 2:on 3:on 4:on 5:on 6:off
o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off

If the output doesn't look like the above on both nodes, turn them on with the following commands as root:
# chkconfig ocfs2 on
# chkconfig o2cb on

Create and format the OCFS2 filesystem on the unused disk partition
As root on each of the cluster nodes, create the mount point directory for the OCFS2 file system.
# mkdir /u03
Run the command below as the root user on one node only to create an OCFS2 file system on the unused partition /dev/sdc1 that you created above.
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L /u03 /dev/sdc1
mkfs.ocfs2 1.2.2
Filesystem label=/u03
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=10733944832 (327574 clusters) (2620592 blocks)
11 cluster groups (tail covers 5014 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Formatting Journals: done
Writing lost+found: done
mkfs.ocfs2 successful

The above command formats the partition with a volume label of "/u03" (-L /u03), a block size of 4K (-b 4K), a cluster size of 32K (-C 32K) and 4 node slots (-N 4).
Once the OCFS2 file system has been created on the disk, you can mount it.
Mount the OCFS2 file system on both nodes
Run the command below on all nodes to mount the OCFS2 file system.
# mount -t ocfs2 -L /u03 -o datavolume /u03
You can verify that the disk is mounted correctly using the command below:
# df /u03

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc1 10482368 268736 10213632 3% /u03
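To have the volume mounted automatically at boot on both nodes, you can also add an entry to /etc/fstab; the line below is a typical example for an OCFS2 volume holding Oracle files (adjust the options to your environment):

LABEL=/u03    /u03    ocfs2    _netdev,datavolume,nointr    0 0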

Create the directories for shared files
As the root user, run the following commands on node 1 only. Since /u03 is on a shared disk, files created from one node will be visible on the other nodes.
CRS files:
# mkdir /u03/oracrs
# chown oracle:oinstall /u03/oracrs
# chmod 775 /u03/oracrs

Database files:
# mkdir /u03/oradata
# chown oracle:oinstall /u03/oradata
# chmod 775 /u03/oradata

Installing Oracle Clusterware
Before installing the Oracle RAC database, we need to install Oracle Clusterware. The clusterware creates two important files: the OCR (Oracle Cluster Registry) file and the voting disk. The OCR is used to register the nodes involved in the RAC installation and to store the details about those nodes. The voting disk is used to track the status of each node: every node registers its presence in the voting disk at a regular interval, which is called the heartbeat of RAC. If a node goes down, it stops registering its presence in the voting disk, so the other instances learn of the failure and the clusterware then tries to bring the crashed instance back up.
Follow the steps below to install the clusterware.
From the setup directory, run the ./runInstaller command.
The main screens and the inputs to be provided are listed below.
Welcome page - Click Next

Specify Inventory Directory and Credentials – Enter the location where the inventory should be created.
Specify Home Details - Provide the correct location for the CRS home. Note that this location does not need to be a shared location; it is where the CRS software is installed, not where the OCR and voting files are kept.
Product-Specific Prerequisite Checks - The OUI performs the required prerequisite checks. Once they complete, press Next.
Specify Cluster Configuration – On this screen you add all the servers that will be part of the RAC installation. This is essentially a push install: the installation is pushed to all the nodes added here, so we don't have to install CRS again from node 2.
Specify Network Interface Usage – We need at least one network to be private and not used by applications, so mark one network as private so that it can be used for the interconnect.
Specify OCR Location – This is where you provide the location for the OCR file. Remember that this file must be on shared storage accessible to all the nodes. We have a shared disk mounted at /u03, and under "Create the directories for shared files" above we created the /u03/oracrs directory, which can be used here.
Specify Voting Disk Location – On this screen you provide the location for the voting disk. This must also be a shared location; you can use the same shared directory created in the step above.
Summary – Click on Install
You can verify the cluster installation by running olsnodes.
bash-3.00$ /u01/app/oracle/product/10.2.0/crs/bin/olsnodes
ocvmrh2045
ocvmrh2053
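As an additional sanity check, you can also query the clusterware stack and the registered resources from the CRS home (run as the oracle user; output omitted here):

$ /u01/app/oracle/product/10.2.0/crs/bin/crsctl check crs
$ /u01/app/oracle/product/10.2.0/crs/bin/crs_stat -t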

Create the RAC Database
You can follow the same steps as for installing a single-instance database; only a couple of screens in this installation are new compared to the single-instance installation.
Specify Hardware Cluster Installation Mode – Select cluster installation and click Select All to select all the nodes in the cluster. This propagates the installation to all the nodes.
Specify Database Storage Options – If you are not using ASM or raw devices but a file system, specify the shared location we created above. This is important because all the instances must have access to the datafiles: we are creating multiple instances but a single database (one set of database files).
At the end, the installer shows a summary; click Install.
Congratulations! Your new Oracle 10g RAC database is up and ready for use.
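To confirm that an instance is running on each node, you can check the cluster database with srvctl; the database name racdb below is a placeholder for whatever name you chose during database creation, and the output lists one line per instance:

$ srvctl status database -d racdb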
References
Oracle RAC Documentation – http://download.oracle.com/docs/html/B10766_08/toc.htm
Oracle Technical White Paper May 2005 by Barb Lundhild
Converting a single instance database to RAC – http://www.oracle.com/technology/pub/articles/chan_sing2rac_install.html#prelim
