Overview
This guide provides a walkthrough of installing an Oracle Database 10g Release 2 RAC database on commodity hardware for the purpose of evaluation. If you are new to Linux and/or Oracle, this guide is for you. It starts with the basics and walks you through an installation of Oracle Database 10g Release 2 RAC from the bare metal up.
This guide will take the approach of offering the easiest paths, with the fewest number of steps, for accomplishing a task. This approach often means making configuration choices that would be inappropriate for anything other than an evaluation. For that reason, this guide is not appropriate for building production-quality environments, nor does it reflect best practices.
The three Linux distributions certified for Oracle 10g Release 2 RAC are:
- Red Hat Enterprise Linux 4 (RHEL4)
- Red Hat Enterprise Linux 3 (RHEL3)
- Novell SUSE Linux Enterprise Server 9
This guide is divided into four parts: Part I covers the installation of the Linux operating system, Part II covers configuring Linux for Oracle, Part III discusses the essentials of partitioning shared disk, and Part IV covers installation of the Oracle software.
A Release 1 version of this guide is also available.
Background

The illustration below shows the major components of an Oracle RAC 10g Release 2 configuration. Nodes in the cluster are typically separate servers (hosts).
Hardware
At the hardware level, each node in a RAC cluster shares three things:
- Access to shared disk storage
- Connection to a private network
- Access to a public network.
Oracle RAC relies on a shared disk architecture. The database files, online redo logs, and control files for the database must be accessible to each node in the cluster. The shared disks also store the Oracle Cluster Registry and Voting Disk (discussed later). There are a variety of ways to configure shared storage including direct attached disks (typically SCSI over copper or fiber), Storage Area Networks (SAN), and Network Attached Storage (NAS).
Private Network
Each cluster node is connected to all other nodes via a private high-speed network, also known as the cluster interconnect or high-speed interconnect (HSI). This network is used by Oracle’s Cache Fusion technology to effectively combine the physical memory (RAM) in each host into a single cache. Oracle Cache Fusion allows data stored in the cache of one Oracle instance to be accessed by any other instance by transferring it across the private network. It also preserves data integrity and cache coherency by transmitting locking and other synchronization information across cluster nodes.
The private network is typically built with Gigabit Ethernet, but for high-volume environments, many vendors offer proprietary low-latency, high-bandwidth solutions specifically designed for Oracle RAC. Linux also offers a means of bonding multiple physical NICs into a single virtual NIC (not covered here) to provide increased bandwidth and availability.
Public Network
To maintain high availability, each cluster node is assigned a virtual IP address (VIP). In the event of node failure, the failed node’s IP address can be reassigned to a surviving node to allow applications to continue accessing the database through the same IP address.
Configuring the Cluster Hardware
There are many different ways to configure the hardware for an Oracle RAC cluster. Our configuration here uses two servers with two CPUs, 1GB RAM, two Gigabit Ethernet NICs, a dual channel SCSI host bus adapter (HBA), and eight SCSI disks connected via copper to each host (four disks per channel). The disks were configured as Just a Bunch Of Disks (JBOD)—that is, with no hardware RAID controller.
Software
At the software level, each node in a RAC cluster needs:
- An operating system
- Oracle Clusterware
- Oracle RAC software
- An Oracle Automatic Storage Management (ASM) instance (optional).
Oracle RAC is supported on many different operating systems. This guide focuses on Linux. The operating system must be properly configured for Oracle, including installing the necessary software packages, setting kernel parameters, configuring the network, establishing an account with the proper security, configuring disk devices, and creating directory structures. All of these tasks are described in this guide.
Oracle Cluster Ready Services becomes Oracle Clusterware
Oracle RAC 10g Release 1 introduced Oracle Cluster Ready Services (CRS), a platform-independent set of system services for cluster environments. In Release 2, Oracle has renamed this product to Oracle Clusterware.
Clusterware maintains two files: the Oracle Cluster Registry (OCR) and the Voting Disk. The OCR and the Voting Disk must reside on shared disks as either raw partitions or files in a cluster filesystem. This guide describes creating the OCR and Voting Disks using a cluster filesystem (OCFS2) and walks through the CRS installation.
Oracle RAC Software
Oracle RAC 10g Release 2 software is the heart of the RAC database and must be installed on each cluster node. Fortunately, the Oracle Universal Installer (OUI) does most of the work of installing the RAC software on each node. You only have to install RAC on one node—OUI does the rest.
Oracle Automatic Storage Management (ASM)
ASM is a new feature in Oracle Database 10g that provides the services of a filesystem, logical volume manager, and software RAID in a platform-independent manner. Oracle ASM can stripe and mirror your disks, allow disks to be added or removed while the database is under load, and automatically balance I/O to remove “hot spots.” It also supports direct and asynchronous I/O and implements the Oracle Data Manager API (simplified I/O system call interface) introduced in Oracle9i.
Oracle ASM is not a general-purpose filesystem and can be used only for Oracle data files, redo logs, control files, and the RMAN Flash Recovery Area. Files in ASM can be created and named automatically by the database (by use of the Oracle Managed Files feature) or manually by the DBA. Because the files stored in ASM are not accessible to the operating system, the only way to perform backup and recovery operations on databases that use ASM files is through Recovery Manager (RMAN).
ASM is implemented as a separate Oracle instance that must be up if other databases are to be able to access it. Memory requirements for ASM are light: only 64MB for most systems. In Oracle RAC environments, an ASM instance must be running on each cluster node.
Part I: Installing Linux

Install and Configure Linux as described in the first guide in this series. You will need three IP addresses for each server: one for the private network, one for the public network, and one for the virtual IP address. Use the operating system's network configuration tools to assign the private and public network addresses. Do not assign the virtual IP address using the operating system's network configuration tools; this will be done by the Oracle Virtual IP Configuration Assistant (VIPCA) during Oracle RAC software installation.
Red Hat Enterprise Linux 4 (RHEL4)
Required Kernel:
2.6.9-11.EL or higher
Verify kernel version:
# uname -r
2.6.9-22.ELsmp

Other required package versions (or higher):

binutils-2.15.92.0.2-10.EL4
compat-db-4.1.25-9
control-center-2.8.0-12
gcc-3.4.3-9.EL4
gcc-c++-3.4.3-9.EL4
glibc-2.3.4-2
glibc-common-2.3.4-2
gnome-libs-1.4.1.2.90-44.1
libstdc++-3.4.3-9.EL4
libstdc++-devel-3.4.3-9.EL4
make-3.80-5
pdksh-5.2.14-30
sysstat-5.0.5-1
xscreensaver-4.18-5.rhel4.2

Verify installed packages:

# rpm -q binutils compat-db control-center gcc gcc-c++ glibc glibc-common \
gnome-libs libstdc++ libstdc++-devel make pdksh sysstat xscreensaver
binutils-2.15.92.0.2-15
compat-db-4.1.25-9
control-center-2.8.0-12.rhel4.2
gcc-3.4.4-2
gcc-c++-3.4.4-2
glibc-2.3.4-2.13
glibc-common-2.3.4-2.13
gnome-libs-1.4.1.2.90-44.1
libstdc++-3.4.4-2
libstdc++-devel-3.4.4-2
make-3.80-5
pdksh-5.2.14-30.3
sysstat-5.0.5-1
xscreensaver-4.18-5.rhel4.9
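If rpm -q reports that any package is missing, install it from your distribution media before continuing. A minimal sketch, assuming the RHEL4 media is mounted at /media/cdrom and pdksh is the missing package (the mount point and exact RPM filename are assumptions -- adjust to your media and update level):

# rpm -q pdksh
package pdksh is not installed
# mount /dev/cdrom /media/cdrom
# rpm -ivh /media/cdrom/RedHat/RPMS/pdksh-5.2.14-30.i386.rpm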
SUSE Linux Enterprise Server 9 (SLES9)

Required Package Sets:
Basis Runtime System
YaST
Graphical Base System
Linux Tools
KDE Desktop Environment
C/C++ Compiler and Tools (not selected by default)
Do not install:
Authentication Server (NIS, LDAP, Kerberos)
Required Kernel:
2.6.5-7.97 or higher
Verify kernel version:
# uname -r
2.6.5-7.97-smp

Other required package versions (or higher):

gcc-3.3
gcc-c++-3.3.3-43
glibc-2.3.3-98.28
libaio-0.3.98-18
libaio-devel-0.3.98-18
make-3.80
openmotif-libs-2.2.2-519.1

Verify installed packages:

# rpm -q gcc gcc-c++ glibc libaio libaio-devel make openmotif-libs
gcc-3.3.3-43.24
gcc-c++-3.3.3-43.24
libaio-0.3.98-18.3
libaio-devel-0.3.98-18.3
make-3.80-184.1
openmotif-libs-2.2.2-519.1
Part II: Configure Linux for Oracle

Create the Oracle Groups and User Account
Next we’ll create the Linux groups and user account that will be used to install and maintain the Oracle 10g Release 2 software. The user account will be called ‘oracle’ and the groups will be ‘oinstall’ and ‘dba.’ Execute the following commands as root on one cluster node only:
/usr/sbin/groupadd oinstall
/usr/sbin/groupadd dba
/usr/sbin/useradd -m -g oinstall -G dba oracle
id oracle

Ex:
# /usr/sbin/groupadd oinstall
# /usr/sbin/groupadd dba
# /usr/sbin/useradd -m -g oinstall -G dba oracle
# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)

The User ID and Group IDs must be the same on all cluster nodes. Using the information from the id oracle command, create the Oracle Groups and User Account on the remaining cluster nodes:

/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/useradd -m -u 501 -g oinstall -G dba oracle

Ex:
# /usr/sbin/groupadd -g 501 oinstall
# /usr/sbin/groupadd -g 502 dba
# /usr/sbin/useradd -m -u 501 -g oinstall -G dba oracle
# id oracle
uid=501(oracle) gid=501(oinstall) groups=501(oinstall),502(dba)

Set the password on the oracle account:

# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Create Mount Points
Now create mount points to store the Oracle 10g Release 2 software. This guide will adhere to the Optimal Flexible Architecture (OFA) for the naming conventions used in creating the directory structure. For more information on OFA standards, see Appendix D of the Oracle Database 10g Release 2 Installation Guide.
Issue the following commands as root:
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle

Ex:
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle

Configure Kernel Parameters
Log in as root and configure the Linux kernel parameters on each node.
cat >> /etc/sysctl.conf << EOF
kernel.shmall = 2097152
kernel.shmmax = 536870912
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 658576
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 1048536
net.core.wmem_max = 1048536
EOF
/sbin/sysctl -p

On SUSE Linux Enterprise Server 9.0 only:

Set the kernel parameter disable_cap_mlock as follows:

disable_cap_mlock = 1

Run the following command after completing the steps above:

/sbin/chkconfig boot.sysctl on

Setting Shell Limits for the oracle User
Oracle recommends setting limits on the number of processes and the number of open files each Linux account may use. To make these changes, cut and paste the following commands as root.
cat >> /etc/security/limits.conf << EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

cat >> /etc/pam.d/login << EOF
session required /lib/security/pam_limits.so
EOF

For Red Hat Enterprise Linux releases, use the following:

cat >> /etc/profile << EOF
if [ \$USER = "oracle" ]; then
  if [ \$SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
EOF

cat >> /etc/csh.login << EOF
if ( \$USER == "oracle" ) then
  limit maxproc 16384
  limit descriptors 65536
  umask 022
endif
EOF

For Novell SUSE releases, use the following:

cat >> /etc/profile.local << EOF
if [ \$USER = "oracle" ]; then
  if [ \$SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
EOF

cat >> /etc/csh.login.local << EOF
if ( \$USER == "oracle" ) then
  limit maxproc 16384
  limit descriptors 65536
  umask 022
endif
EOF
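To confirm the limits take effect, open a new session as oracle and check the shell limits. A quick sanity check; the values reported depend on your shell and the settings above:

# su - oracle
$ ulimit -u
16384
$ ulimit -n
65536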
Configure the Hangcheck Timer

The hangcheck-timer kernel module monitors the node for hangs; if the system pauses longer than the configured margin, the module restarts the node so that a hung node cannot corrupt shared data. Load the module now and configure it to load at each boot.

All RHEL releases:
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
cat >> /etc/rc.d/rc.local << EOF
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
EOF

All SLES releases:

modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
cat >> /etc/init.d/boot.local << EOF
modprobe hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
EOF
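To verify that the module loaded, check the kernel's module list and the system log on each node. A brief sketch; the exact log wording varies by kernel version:

# lsmod | grep hangcheck_timer
# grep -i hangcheck /var/log/messages | tail -1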
Configure /etc/hosts

Some Linux distributions associate the host name with the loopback address (127.0.0.1). If this occurs, remove the host name from the loopback address.
/etc/hosts file used for this walkthrough:
127.0.0.1       localhost.localdomain   localhost
192.168.100.51  ds1-priv.orademo.org    ds1-priv    # ds1 private
192.168.100.52  ds2-priv.orademo.org    ds2-priv    # ds2 private
192.168.200.51  ds1.orademo.org         ds1         # ds1 public
192.168.200.52  ds2.orademo.org         ds2         # ds2 public
192.168.200.61  ds1-vip.orademo.org     ds1-vip     # ds1 virtual
192.168.200.62  ds2-vip.orademo.org     ds2-vip     # ds2 virtual
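Before moving on, it is worth confirming that each node can reach the other over both the public and private networks by name. A quick sketch, run from ds1 (the host names follow the example /etc/hosts above):

# ping -c 2 ds2
# ping -c 2 ds2-priv

Repeat from ds2 using ds1 and ds1-priv. The virtual names (ds1-vip, ds2-vip) will not respond until the VIPs are configured during the Oracle installation.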
Configure SSH for User Equivalence

During the installation of Oracle RAC 10g Release 2, OUI needs to copy files to and execute programs on the other nodes in the cluster. In order to allow OUI to do that, you must configure SSH to allow user equivalence. Establishing user equivalence with SSH provides a secure means of copying files and executing programs on other nodes in the cluster without requiring password prompts.
The first step is to generate public and private keys for SSH. There are two versions of the SSH protocol; version 1 uses RSA keys, while version 2 can use either RSA or DSA keys. We will create both key types so that SSH can use either version. The ssh-keygen program generates public and private keys of either type depending upon the parameters passed to it.
When you run ssh-keygen, you will be prompted for a location to save the keys. Just press Enter when prompted to accept the default. You will then be prompted for a passphrase. Enter a password that you will remember and then enter it again to confirm. When you have completed the steps below, you will have four files in the ~/.ssh directory: id_rsa, id_rsa.pub, id_dsa, and id_dsa.pub. The id_rsa and id_dsa files are your private keys and must not be shared with anyone. The id_rsa.pub and id_dsa.pub files are your public keys and must be copied to each of the other nodes in the cluster.
From each node, logged in as oracle:
mkdir ~/.ssh
chmod 755 ~/.ssh
/usr/bin/ssh-keygen -t rsa

Cut and paste the following line separately:

/usr/bin/ssh-keygen -t dsa

Ex:
$ mkdir ~/.ssh
$ chmod 755 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
4b:df:76:77:72:ba:31:cd:c4:e2:0c:e6:ef:30:fc:37 oracle@ds1.orademo.org
$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
af:37:ca:69:3c:a0:08:97:cb:9c:0b:b0:20:70:e3:4a oracle@ds1.orademo.org

Now the contents of the public key files id_rsa.pub and id_dsa.pub on each node must be copied to the ~/.ssh/authorized_keys file on every other node. Use ssh to copy the contents of each file to the ~/.ssh/authorized_keys file. Note that the first time you access a remote node with ssh its RSA key will be unknown and you will be prompted to confirm that you wish to connect to the node. SSH will record the RSA key for the remote nodes and will not prompt for this on subsequent connections to that node.
From the first node ONLY, logged in as oracle (copy the local account’s keys so that ssh to the local node will work):
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Now copy the keys to the other node so that we can ssh to the remote node without being prompted for a password.

ssh oracle@ds2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

(If you are cutting and pasting these commands, run each of them separately. SSH will prompt for the oracle password each time, and if the commands are pasted at the same time, the other commands will be lost when the first one flushes the input buffer prior to prompting for the password.)

ssh oracle@ds2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys

Ex:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh oracle@ds2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'ds2 (192.168.200.52)' can't be established.
RSA key fingerprint is d1:23:a7:df:c5:fc:4e:10:d2:83:60:49:25:e8:eb:11.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ds2,192.168.200.52' (RSA) to the list of known hosts.
oracle@ds2's password:
$ ssh oracle@ds2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@ds2's password:
$ chmod 644 ~/.ssh/authorized_keys

Now do the same for the second node. Notice that this time SSH will prompt for the passphrase you used when creating the keys rather than the oracle password. This is because the first node (ds1) now knows the public keys for the second node and SSH is now using a different authentication protocol. Note, if you didn't enter a passphrase when creating the keys with ssh-keygen, you will not be prompted for one here.

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
ssh oracle@ds1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oracle@ds1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys

Ex:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh oracle@ds1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'ds1 (192.168.200.51)' can't be established.
RSA key fingerprint is bd:0e:39:2a:23:2d:ca:f9:ea:71:f5:3d:d3:dd:3b:65.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ds1,192.168.200.51' (RSA) to the list of known hosts.
Enter passphrase for key '/home/oracle/.ssh/id_rsa':
$ ssh oracle@ds1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Enter passphrase for key '/home/oracle/.ssh/id_rsa':
$ chmod 644 ~/.ssh/authorized_keys

Establish User Equivalence
Finally, after all of the generating of keys, copying of files, and repeatedly entering passwords and passphrases (isn’t security fun?), you’re ready to establish user equivalence. When user equivalence is established, you won’t be prompted for a password again.
As oracle on the node where the Oracle 10g Release 2 software will be installed (ds1):
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add

Ex:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa:
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)

(Note that user equivalence is established for the current session only. If you switch to a different session or log out and back in, you will have to run ssh-agent and ssh-add again to re-establish user equivalence.)
Test Connectivity
If everything is set up correctly, you can now use ssh to log in, execute programs, and copy files on the other cluster nodes without having to enter a password. Verify user equivalence by running a simple command like date on a remote cluster node:
$ ssh ds2 date
Sat Jan 21 13:31:07 PST 2006

It is crucial that you test connectivity in each direction from all servers. That will ensure that messages like the one below do not occur when the OUI attempts to copy files during CRS and database software installation. This message will only occur the first time an operation on a remote node is performed, so by testing the connectivity, you not only ensure that remote operations work properly, you also complete the initial security key exchange.

The authenticity of host 'ds2 (192.168.200.52)' can't be established.
RSA key fingerprint is 8f:a3:19:76:ca:4f:71:85:42:c2:7a:da:eb:53:76:85.
Are you sure you want to continue connecting (yes/no)? yes
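One way to cover every direction at once is to loop over all of the host names from each node while user equivalence is established. A minimal sketch, assuming the two-node ds1/ds2 example used throughout this guide:

$ for host in ds1 ds1-priv ds2 ds2-priv; do ssh $host date; done

Run the same loop as oracle on each node, answering 'yes' to any host-key prompts, until every combination returns the date without prompting for a password.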
Part III: Prepare the Shared Disks

Both Oracle Clusterware and Oracle RAC require access to disks that are shared by each node in the cluster. The shared disks must be configured using one of the following methods. Note that you cannot use a "standard" filesystem such as ext3 for shared disk volumes since such file systems are not cluster aware.
For Clusterware:
- OCFS (Release 1 or 2)
- raw devices
- third party cluster filesystem such as GPFS or Veritas
For the RAC database:
- OCFS (Release 1 or 2)
- ASM
- raw devices
- third party cluster filesystem such as GPFS or Veritas
Partition the Disks
In order to use either OCFS2 or ASM, you must have unused disk partitions available. This section describes how to create the partitions that will be used for OCFS2 and for ASM.
WARNING: Improperly partitioning a disk is one of the surest and fastest ways to wipe out everything on your hard disk. If you are unsure how to proceed, stop and get help, or you will risk losing data.
This example uses /dev/sdb (an empty SCSI disk with no existing partitions) to create a single partition for the entire disk (36 GB).
Ex:
# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 4427.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sdb: 255 heads, 63 sectors, 4427 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-4427, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-4427, default 4427):
Using default value 4427

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: If you have created or modified any DOS 6.x partitions, please see
the fdisk manual page for additional information.
Syncing disks.

Now verify the new partition:
Ex:
# fdisk -l /dev/sdb

Disk /dev/sdb: 36.4 GB, 36420075008 bytes
255 heads, 63 sectors/track, 4427 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1        4427    35559846   83  Linux

Repeat the above steps for each disk to be partitioned. Disk partitioning should be done from one node only. When finished partitioning, run the 'partprobe' command as root on each of the remaining cluster nodes in order to ensure that the new partitions are configured.
Ex:
# partprobe
Oracle Cluster File System (OCFS) Release 2
OCFS2 is a general-purpose cluster file system that can be used to store Oracle Clusterware files, Oracle RAC database files, Oracle software, or any other types of files normally stored on a standard filesystem such as ext3. This is a significant change from OCFS Release 1, which only supported Oracle Clusterware files and Oracle RAC database files.
Obtain OCFS2
OCFS2 is available free of charge from Oracle as a set of three RPMs: a kernel module, support tools, and a console. There are different kernel module RPMs for each supported Linux kernel so be sure to get the OCFS2 kernel module for your Linux kernel. OCFS2 kernel modules may be downloaded from http://oss.oracle.com/projects/ocfs2/files/ and the tools and console may be downloaded from http://oss.oracle.com/projects/ocfs2-tools/files/.
To determine the kernel-specific module that you need, use uname -r.
# uname -r
2.6.9-22.ELsmp

For this example I downloaded:

ocfs2console-1.0.3-1.i386.rpm
ocfs2-tools-1.0.3-1.i386.rpm
ocfs2-2.6.9-22.ELsmp-1.0.7-1.i686.rpm

Install OCFS2 as root on each cluster node:

# rpm -ivh ocfs2console-1.0.3-1.i386.rpm \
ocfs2-tools-1.0.3-1.i386.rpm \
ocfs2-2.6.9-22.ELsmp-1.0.7-1.i686.rpm
Preparing...                ########################################### [100%]
   1:ocfs2-tools            ########################################### [ 33%]
   2:ocfs2console           ########################################### [ 67%]
   3:ocfs2-2.6.9-22.ELsmp   ########################################### [100%]

Configure OCFS2
Run ocfs2console as root:
# ocfs2console

Select Cluster → Configure Nodes
Click on Add and enter the Name and IP Address of each node in the cluster
Once all of the nodes have been added, click on Cluster → Propagate Configuration. This will copy the OCFS2 configuration file to each node in the cluster. You may be prompted for root passwords as ocfs2console uses ssh to propagate the configuration file. Leave the OCFS2 console by clicking on File → Quit. It is possible to format and mount the OCFS2 partitions using the ocfs2console GUI; however, this guide will use the command line utilities.
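After propagation, each node should hold the same OCFS2 cluster configuration. The following is a hedged sketch of what /etc/ocfs2/cluster.conf typically looks like for the two-node ds1/ds2 example, assuming the private interconnect addresses were entered in the console; the exact layout and port may differ on your system:

node:
        ip_port = 7777
        ip_address = 192.168.100.51
        number = 0
        name = ds1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.100.52
        number = 1
        name = ds2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2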
Enable OCFS2 to start at system boot:
As root, execute the following command on each cluster node to allow the OCFS2 cluster stack to load at boot time:
/etc/init.d/o2cb enable
Ex:
# /etc/init.d/o2cb enable
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting cluster ocfs2: OK
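You can confirm the cluster stack state on each node before creating a filesystem. A brief sketch; the exact wording of the output varies between ocfs2-tools releases:

# /etc/init.d/o2cb status

The status output should show the O2CB modules loaded and the ocfs2 cluster online.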
Create a mount point for the OCFS filesystem

As root on each of the cluster nodes, create the mount point directory for the OCFS2 filesystem.
Ex:
# mkdir /u03

Create the OCFS2 filesystem on the unused disk partition
The example below creates an OCFS2 filesystem on the unused /dev/sdc1 partition with a volume label of "/u03" (-L /u03), a block size of 4K (-b 4K), and a cluster size of 32K (-C 32K) with 4 node slots (-N 4). See the OCFS2 Users Guide for more information on mkfs.ocfs2 command line options.
Ex:
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L /u03 /dev/sdc1
mkfs.ocfs2 1.0.3
Filesystem label=/u03
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=36413280256 (1111245 clusters) (8889960 blocks)
35 cluster groups (tail covers 14541 clusters, rest cover 32256 clusters)
Journal size=33554432
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing lost+found: done
mkfs.ocfs2 successful

Mount the OCFS2 filesystem
Since this filesystem will contain the Oracle Clusterware files and Oracle RAC database files, we must ensure that all I/O to these files uses direct I/O (O_DIRECT). Use the “datavolume” option whenever mounting the OCFS2 filesystem to enable direct I/O. Failure to do this can lead to data loss in the event of system failure.
Ex:
# mount -t ocfs2 -L /u03 -o datavolume /u03

Notice that the mount command uses the filesystem label (-L /u03) specified during the creation of the filesystem. This is a handy way to refer to the filesystem without having to remember the device name.
To verify that the OCFS2 filesystem is mounted, issue the mount command or run df:
# mount -t ocfs2
/dev/sdc1 on /u03 type ocfs2 (rw,_netdev,datavolume)

# df /u03
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sdc1             35559840    138432  35421408   1% /u03

The OCFS2 filesystem can now be mounted on the other cluster nodes.
To automatically mount the OCFS2 filesystem at system boot, add a line similar to the one below to /etc/fstab on each cluster node:
LABEL=/u03    /u03    ocfs2    _netdev,datavolume,nointr    0 0

Create the directories for shared files

CRS files:
mkdir /u03/oracrs
chown oracle:oinstall /u03/oracrs
chmod 775 /u03/oracrs

Database files:
mkdir /u03/oradata
chown oracle:oinstall /u03/oradata
chmod 775 /u03/oradata

Automatic Storage Management (ASM)
ASM was a new storage option introduced with Oracle Database 10gR1 that provides the services of a filesystem, logical volume manager, and software RAID in a platform-independent manner. ASM can stripe and mirror your disks, allow disks to be added or removed while the database is under load, and automatically balance I/O to remove “hot spots.” It also supports direct and asynchronous I/O and implements the Oracle Data Manager API (simplified I/O system call interface) introduced in Oracle9i.
ASM is not a general-purpose filesystem and can be used only for Oracle data files, redo logs, control files, and flash recovery area. Files in ASM can be created and named automatically by the database (by use of the Oracle Managed Files feature) or manually by the DBA. Because the files stored in ASM are not accessible to the operating system, the only way to perform backup and recovery operations on databases that use ASM files is through Recovery Manager (RMAN).
ASM is implemented as a separate Oracle instance that must be up if other databases are to be able to access it. Memory requirements for ASM are light: only 64 MB for most systems.
Installing ASM
On Linux platforms, ASM can use raw devices or devices managed via the ASMLib interface. Oracle recommends ASMLib over raw devices for ease-of-use and performance reasons. ASMLib 2.0 is available for free download from OTN. This section walks through the process of configuring a simple ASM instance by using ASMLib 2.0 and building a database that uses ASM for disk storage.
Determine Which Version of ASMLib You Need
ASMLib 2.0 is delivered as a set of three Linux packages:
- oracleasmlib-2.0 – the ASM libraries
- oracleasm-support-2.0 – utilities needed to administer ASMLib
- oracleasm – a kernel module for the ASM library
First, determine which kernel you are using by logging in as root and running the following command:
uname -rm

Ex:
# uname -rm
2.6.9-22.ELsmp i686

The example shows that this is a 2.6.9-22 kernel for an SMP (multiprocessor) box using Intel i686 CPUs.
Use this information to find the correct ASMLib packages on OTN:
- Point your Web browser to http://www.oracle.com/technology/tech/linux/asmlib/index.html
- Select the link for your version of Linux.
- Download the oracleasmlib and oracleasm-support packages for your version of Linux
- Download the oracleasm package corresponding to your kernel. In the example above, the oracleasm-2.6.9-22.ELsmp-2.0.0-1.i686.rpm package was used.

Next, install the packages by executing the following command as root:

rpm -Uvh oracleasm-kernel_version-asmlib_version.cpu_type.rpm \
    oracleasmlib-asmlib_version.cpu_type.rpm \
    oracleasm-support-asmlib_version.cpu_type.rpm

Ex:
# rpm -Uvh \
> oracleasm-2.6.9-22.ELsmp-2.0.0-1.i686.rpm \
> oracleasmlib-2.0.1-1.i386.rpm \
> oracleasm-support-2.0.1-1.i386.rpm
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [ 33%]
   2:oracleasm-2.6.9-22.ELsm########################################### [ 67%]
   3:oracleasmlib           ########################################### [100%]
Before using ASMLib, you must run a configuration script to prepare the driver. Run the following command as root, and answer the prompts as shown in the example below. Run this on each node in the cluster.
# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]

Next you tell the ASM driver which disks you want it to use. Oracle recommends that each disk contain a single partition for the entire disk. See Partition the Disks at the beginning of this section for an example of creating disk partitions.
You mark disks for use by ASMLib by running the following command as root from one of the cluster nodes:
/etc/init.d/oracleasm createdisk DISK_NAME device_name

Tip: Enter the DISK_NAME in UPPERCASE letters.

Ex:
# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
Marking disk "/dev/sdb1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk VOL2 /dev/sdc1
Marking disk "/dev/sdc1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk VOL3 /dev/sdd1
Marking disk "/dev/sdd1" as an ASM disk: [ OK ]

Verify that ASMLib has marked the disks:

# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3

On all other cluster nodes, run the following command as root to scan for configured ASMLib disks:
/etc/init.d/oracleasm scandisks
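After the scan, the same disk names should be visible on every node. A short sketch of the check to run on each remaining node; the OK-style status line is typical of the oracleasm init script, but the exact wording may differ:

# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3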
Part IV: Install Oracle Software

Oracle Database 10g Release 2 can be downloaded from OTN. Oracle offers a development and testing license free of charge. However, no support is provided and the license does not permit production use. A full description of the license agreement is available on OTN.
The easiest way to make the Oracle Database 10g Release 2 distribution media available on your server is to download them directly to the server.
Use the graphical login to log in as oracle.
Create a directory to contain the Oracle Database 10g Release 2 distribution:
mkdir 10gR2

To download Oracle Database 10g Release 2 from OTN, point your browser (Firefox works well) to http://www.oracle.com/technology/software/products/database/oracle10g/htdocs/10201linuxsoft.html. Fill out the Eligibility Export Restrictions page, and read the OTN License agreement. If you agree with the restrictions and the license agreement, click on I Accept.
Click on the 10201_database_linux32.zip link, and save the file in the directory you created for this purpose. If you have not already logged in to OTN, you may be prompted to do so at this point.
Since you will be creating a RAC database, you will also need to download and install Oracle Clusterware Release 2. Click on the 10201_clusterware_linux32.zip link and save the file.
Unzip and extract the files:
cd 10gR2
unzip 10201_database_linux32.zip
unzip 10201_clusterware_linux32.zip

Establish User Equivalency and Set Environment Variables
If you have not already done so, log in as oracle and establish user equivalence between the nodes:
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa:
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
Set the ORACLE_BASE environment variable:
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

Install Oracle Clusterware
Before installing the Oracle RAC 10g Release 2 database software, you must first install Oracle Clusterware. Oracle Clusterware requires two files to be shared among all of the nodes in the cluster: the Oracle Cluster Registry (100MB) and the Voting Disk (20MB). These files may be stored on raw devices or on a cluster filesystem. (NFS is also supported for certified NAS systems, but that is beyond the scope of this guide.) Oracle ASM may not be used for these files because ASM is dependent upon services provided by Clusterware. This guide will use OCFS2 as a cluster filesystem to store the Oracle Cluster Registry and Voting Disk files.
Start the installation using “runInstaller” from the “clusterware” directory:
- Welcome
- Click on Next
- Specify Inventory Directory and Credentials
- The defaults should be correct
- Click on Next
- Specify Home Details
- Name: OraCRS_Home
- Path: /u01/app/oracle/product/crs
- Product-Specific Prerequisite Checks
- Correct any problems found before proceeding.
- Click on Next
- Specify Cluster Configuration
- Enter the cluster name (or accept the default of "crs"), then add the public, private, and virtual host names for each node in the cluster
- Specify Network Interface Usage – Specify the Interface Type (public, private, or "do not use") for each interface
- Specify Oracle Cluster Registry (OCR) Location
- Choose External Redundancy and enter the full pathname of the OCR file (ex: /u03/oracrs/ocr.crs).
- Specify Voting Disk Location
- Choose External Redundancy and enter the full pathname of the voting disk file (ex: /u03/oracrs/vote.crs)
- Summary
- Click on Install
- Execute Configuration Scripts
- Execute the scripts as root on each node, one at a time, starting with the installation node.
- Do not run the scripts simultaneously. Wait for one to finish before starting another.
- Click on OK to dismiss the window when done.
Verify that the installation succeeded by running olsnodes from the $ORACLE_BASE/product/crs/bin directory; for example:
$ /u01/app/oracle/product/crs/bin/olsnodes
ds1
ds2

Once Oracle Clusterware is installed and operating, it's time to install the rest of the Oracle RAC software.
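You can also ask Clusterware itself whether its daemons are healthy on each node. A hedged sketch using the crsctl utility from the CRS home; the exact wording of the output varies by patch level:

$ /u01/app/oracle/product/crs/bin/crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy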
Create the ASM Instance
If you are planning to use OCFS2 for database storage, skip this section and continue with Create the RAC Database. If you plan to use Automatic Storage Management (ASM) for database storage, follow the instructions below to create an ASM instance on each cluster node. Be sure you have installed the ASMLib software as described earlier in this guide before proceeding.
Start the installation using “runInstaller” from the “database” directory:
- Welcome
- Click on Next
- Select Installation Type
- Select Enterprise Edition
- Click on Next
- Specify Home Details
- Name: Ora10gASM
- Path: /u01/app/oracle/product/10.2.0/asm
- Note: Oracle recommends using a different ORACLE_HOME for ASM than the ORACLE_HOME used for the database for ease of administration.
- Click on Next
- Specify Hardware Cluster Installation Mode
- Select Cluster Installation
- Click on Select All
- Click on Next
- Product-specific Prerequisite Checks
- If you’ve been following the steps in this guide, all the checks should pass without difficulty. If one or more checks fail, correct the problem before proceeding.
- Click on Next
- Select Configuration Option
- Select Configure Automatic Storage Management (ASM)
- Enter the ASM SYS password and confirm
- Click on Next
- Configure Automatic Storage Management
- Disk Group Name: DATA
- Redundancy
- High mirrors data twice.
- Normal mirrors data once. This is the default.
- External does not mirror data within ASM. This is typically used if an external RAID array is providing redundancy.
- Add Disks
- The disks you configured for use with ASMLib are listed as Candidate Disks. Select each disk you wish to include in the disk group.
- Click on Next
- Summary
- A summary of the products being installed is presented.
- Click on Install.
- Execute Configuration Scripts
- At the end of the installation, a pop up window will appear indicating scripts that need to be run as root. Login as root and run the indicated scripts.
- Click on OK when finished.
- End of Installation
- Make note of the URLs presented in the summary, and click on Exit when ready.
- Congratulations! Your new Oracle ASM Instance is up and ready for use.
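Before moving on, it can be reassuring to confirm that an ASM instance is running on each node. A minimal sketch, assuming the default +ASM1/+ASM2 instance names; the exact srvctl output wording may differ:

$ /u01/app/oracle/product/10.2.0/asm/bin/srvctl status asm -n ds1
ASM instance +ASM1 is running on node ds1.
$ /u01/app/oracle/product/10.2.0/asm/bin/srvctl status asm -n ds2
ASM instance +ASM2 is running on node ds2.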
Create the RAC Database

Start the installation using "runInstaller" from the "database" directory:
- Welcome
- Click on Next
- Select Installation Type
- Select Enterprise Edition
- Click on Next
- Specify Home Details
- Name: OraDb10g_home1
- Path: /u01/app/oracle/product/10.2.0/db_1
- Note: Oracle recommends using a different ORACLE_HOME for the database than the ORACLE_HOME used for ASM.
- Click on Next
- Specify Hardware Cluster Installation Mode
- Select Cluster Installation
- Click on Select All
- Click on Next
- Product-specific Prerequisite Checks
- If you’ve been following the steps in this guide, all the checks should pass without difficulty. If one or more checks fail, correct the problem before proceeding.
- Click on Next
- Select Configuration Option
- Select Create a Database
- Click on Next
- Select Database Configuration
- Select General Purpose
- Click on Next
- Specify Database Configuration Options
- Database Naming: Enter the Global Database Name and SID
- Database Character Set: Accept the default
- Database Examples: Select Create database with sample schemas
- Click on Next
- Select Database Management Option
- Select Use Database Control for Database Management
- Click on Next
- Specify Database Storage Option
- If you are using OCFS2 for database storage
- Select File System
- Specify Database file location: Enter the path name to the OCFS2 filesystem directory you wish to use.
ex: /u03/oradata/racdemo
- If you are using ASM for database storage
- Select Automatic Storage Management (ASM)
- Click on Next
- If you are using OCFS2 for database storage
- Specify Backup and Recovery Options
- Select Do not enable Automated backups
- Click on Next
- For ASM Installations Only:
- Select ASM Disk Group
- Select the DATA disk group created in the previous section
- Click on Next
- Specify Database Schema Passwords
- Select Use the same password for all the accounts
- Enter the password and confirm
- Click on Next
- Summary
- A summary of the products being installed is presented.
- Click on Install.
- Configuration Assistants
- The Oracle Net, Oracle Database, and iSQL*Plus configuration assistants will run automatically
- Execute Configuration Scripts
- At the end of the installation, a pop up window will appear indicating scripts that need to be run as root. Login as root and run the indicated scripts.
- Click on OK when finished.
- End of Installation
- Make note of the URLs presented in the summary, and click on Exit when ready.
- Congratulations! Your new Oracle Database is up and ready for use.
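As a final check, you can confirm that an instance of the new database is running on each node. A hedged sketch, assuming a global database name of racdemo (substitute the name and SID you chose); the output wording may vary slightly:

$ /u01/app/oracle/product/10.2.0/db_1/bin/srvctl status database -d racdemo
Instance racdemo1 is running on node ds1
Instance racdemo2 is running on node ds2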