Oracle Labs by Yuri Khazin, Oracle DBA

Virtual Oracle RAC. Task 6: Configuring iSCSI targets in Linux and partitioning the volumes.

Link back: This guide is a part of the Virtual Oracle RAC project, the index to the whole project is here.

This part of the project provides instructions on configuring iSCSI initiators in Linux.

Configuring iSCSI services.

Start the “iscsid” service:

# service iscsid start

Run the following commands to make sure the services start automatically after a reboot:

# chkconfig iscsid on
# chkconfig iscsi on

Now we check whether our iSCSI service can communicate with Openfiler. Run this command to list the available iSCSI targets:

# iscsiadm -m discovery -t sendtargets -p openfiler-priv

Is your output different from what is shown above? Fewer targets, or none at all, even though your Openfiler is up and running and you followed the instructions to the letter? Let’s go back and recall how we configured our iSCSI targets. There was a “Network ACL” setting, which acts like a firewall in Openfiler. Remember now? It is set to “Deny” by default for each target. Go back and set it to “Allow” for all targets. The change takes effect immediately, so you can run the discovery again.

As it turns out, when the iSCSI initiator discovers the targets, it configures the services to start automatically on reboot and log in to the targets. We can test this now by rebooting our Linux machine. This is what you should see during restart if the iSCSI setup was done properly:

If login to the targets did not happen on reboot, you will need to execute the commands below (substitute your own target name and portal):

# iscsiadm -m node -T <target-name> -p <portal> -l

# iscsiadm -m node -T <target-name> -p <portal> --op update -n node.startup -v automatic

The “-l” option means log in to the target (a “node”; do not confuse this “node” with RAC nodes).

The “--op” option means update the configuration property named by the “-n” option, “node.startup” in this case.

Run these two commands for each of the targets; I am showing only the first of them.
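The per-target repetition can be scripted. Here is a minimal sketch; the IQNs (“iqn.example:asm1” and so on) and the portal are made-up placeholders, so substitute the names your own discovery command printed. The loop only prints the commands, letting you review them before running anything:

```shell
#!/bin/sh
# Dry run: print the login and node.startup commands for each target.
# The IQNs and the portal below are placeholders, not real discovery output.
PORTAL="openfiler-priv:3260"
for tgt in iqn.example:asm1 iqn.example:asm2 iqn.example:asm3 \
           iqn.example:asm4 iqn.example:crs; do
  printf '%s\n' "iscsiadm -m node -T ${tgt} -p ${PORTAL} -l"
  printf '%s\n' "iscsiadm -m node -T ${tgt} -p ${PORTAL} --op update -n node.startup -v automatic"
done
```

Once the names match your environment, drop the printf wrappers and call iscsiadm directly.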

The command below (no operation specified) can be used to query configuration of a target:

# iscsiadm -m node -T <target-name> -p <portal>

Making device names persistent.

Linux “talks” to iSCSI targets using local device names. The mapping of our iSCSI targets to local SCSI device names is random and may change after reboot. It is a problem that needs fixing. The mapping of targets to the local devices is illustrated here:

Since we want to have a permanent and consistent mapping across all RAC nodes, we are going to create persistent local SCSI device names. This is done using “udev”, which is a Dynamic Device Management tool.

# cd /etc/udev/rules.d/

Create a file called “55-openiscsi.rules” with the following content:

# /etc/udev/rules.d/55-openiscsi.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM="/etc/udev/scripts/ %b", SYMLINK+="iscsi/%c/part%n"

Navigate to another directory:

# cd /etc/udev/scripts

Here we create a new shell script with the following content (the script derives the short target name from the SCSI host number that udev passes in via “%b”):

#!/bin/sh

# FILE: /etc/udev/scripts/

BUS=${1}
HOST=${BUS%%:*}

[ -e /sys/class/iscsi_host ] || exit 1

file="/sys/class/iscsi_host/host${HOST}/device/session*/iscsi_session*/targetname"
target_name=$(cat ${file})

# This is not an open-scsi drive
if [ -z "${target_name}" ]; then
  exit 1
fi

echo "${target_name##*.}"
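The script’s last line relies on shell parameter expansion: “${target_name##*.}” strips everything up to and including the last dot, leaving just the short target name. A quick demonstration with an invented IQN:

```shell
#!/bin/sh
# The IQN below is a made-up example, not one of our actual targets.
target_name="iqn.2006-01.com.openfiler:racdb.asm1"
echo "${target_name##*.}"   # prints: asm1
```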

Make the new script executable:

# chmod 755 /etc/udev/scripts/

Let’s restart the iSCSI initiator service:

# service iscsi stop

# service iscsi start

Here is the outcome of all that (the image below may be too wide but I did not want the lines to wrap):

Well, how do we know if what we’ve done actually worked? We look at the names in /dev/iscsi (this directory was just created by these commands; it did not exist before) and compare them to the mapping in /dev/disk/by-path:


Take, for instance, “/dev/iscsi/asm1/part”: it corresponds to the “asm1” target (through /dev/sda).

Now we have persistent local names for our targets. We can reboot the odbn1 machine and see that the iSCSI devices are still there and properly mapped.
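A quick sanity check after the reboot might look like this: it simply reports whether each expected symlink exists (the five short names come from our target list):

```shell
#!/bin/sh
# Report OK/missing for each persistent iSCSI device symlink.
for name in asm1 asm2 asm3 asm4 crs; do
  if [ -L "/dev/iscsi/${name}/part" ]; then
    echo "${name}: OK"
  else
    echo "${name}: missing"
  fi
done
```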

Mapping of iSCSI Target Name to Local Device Name

iSCSI Target (short) Name    Local Device Name
asm1                         /dev/iscsi/asm1/part
asm2                         /dev/iscsi/asm2/part
asm3                         /dev/iscsi/asm3/part
asm4                         /dev/iscsi/asm4/part
crs                          /dev/iscsi/crs/part

Next, we will have to create partitions in our SCSI volumes.

Creating partitions on iSCSI Volumes.

Before we start creating partitions it makes sense to shut down our virtual machines and take a snapshot of them. This gives us the option of reverting to a known state if something goes wrong.

Assuming the snapshot is taken, bring the machines back online.

Notice: some of the material for this article was taken from an article by an Oracle author; I recommend reading it if you need more detailed information.

The following table lists the five iSCSI volumes and what file systems they will support:

Oracle Shared Drive Configuration

File System Type | iSCSI Target (short) Name | Size | Mount Point | ASM Diskgroup Name | File Types
OCFS2 | crs  | 2GB | /u02      | n/a                  | Oracle Cluster Registry (OCR) file (~250 MB), Voting Disk (~20 MB)
ASM   | asm1 | 8GB | ORCL:VOL1 | +RACDB_DATA1         | Oracle Database Files
ASM   | asm2 | 8GB | ORCL:VOL2 | +RACDB_DATA1         | Oracle Database Files
ASM   | asm3 | 8GB | ORCL:VOL3 | +FLASH_RECOVERY_AREA | Oracle Flash Recovery Area
ASM   | asm4 | 8GB | ORCL:VOL4 | +FLASH_RECOVERY_AREA | Oracle Flash Recovery Area
Total: 36GB

The picture below shows the fdisk dialog that creates a primary partition of the maximum available size. Red arrows mark your input, where you either type values or accept defaults.

# fdisk /dev/iscsi/asm1/part

Repeat the command sequence for volumes asm2 through asm4 and then for crs, which is shown below. Remember to always create a primary partition, number 1, of maximum size:
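For the record, the fdisk dialog can also be driven non-interactively by piping in the same keystrokes: n (new), p (primary), 1, two defaults, w (write). Because writing a partition table to the wrong device is destructive, the sketch below only prints the command for each volume rather than running it; remove the printf wrapper to actually execute:

```shell
#!/bin/sh
# Dry run: show the non-interactive fdisk command for each volume.
# Keystrokes fed to fdisk: n, p, 1, default start, default end, w.
for vol in asm1 asm2 asm3 asm4 crs; do
  printf '%s\n' "printf 'n\\np\\n1\\n\\n\\nw\\n' | fdisk /dev/iscsi/${vol}/part"
done
```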

Verify new partitions

Keep in mind that the mapping of iSCSI target names to local SCSI device names will be different on each of our RAC nodes (it may even change on a given node after a reboot). This does not present a problem, as we are using the local device names provided by “udev”.

So, if you have not restarted your node after partitioning, run the following command as root:

# partprobe

And now we will query the partitions with fdisk command:

# fdisk -l

Here are the results:

This is all for the volumes and the partitions at this stage of our project. When we clone our Linux guest (a node) at a later time, the clone will have all these settings and configurations already done. Notice that partitioning is only done once, since the storage is shared between all nodes.


Next chapter.


  1. Hi,

    I think there’s a bug in the file 55-openiscsi.rules. It didn’t create the symlinks correctly for me (actually, it didn’t create anything in /dev/iscsi). After reading the man page of udev, I corrected the file thus, and it worked correctly afterwards:

    # /etc/udev/rules.d/55-openiscsi.rules
    KERNEL=="sd*",BUS=="scsi",PROGRAM=="/etc/udev/scripts/ %b",SYMLINK+="iscsi/%c/part%n"

    The trick is to use == instead of =

    Comment by Juan Pablo Zaldivar — August 4, 2010 @ 17:44

    • In my OEL5 U3 Linux I used that script as is with just one “=” in PROGRAM parameter and it worked fine. Maybe other versions or flavors of Linux require other syntax. Thanks for noticing, though.

      Comment by oraclelabs — August 30, 2010 @ 12:28

    • thank you very much, i have red hat 5, and the trick works just fine 🙂

      Comment by amine — July 17, 2013 @ 07:02
