
Build your own Storage Farm

CIOL Bureau


In our last issue, we discussed how to set up iSCSI-based storage using ordinary Windows and Linux machines. This time we'll go further and build a similar cluster using a different software package called Openfiler. It's a very powerful storage product that can be used to build both a NAS and a SAN. Not only that, we also ran our standard set of tests on it to see how well it performs, and the results, as you'll soon find out, were astonishing. Before we get into the setup, let's take a quick recap of what you need to build this storage cluster.

The setup required for building the cluster remains more or less the same as last time: ordinary machines with some storage space. We used ten P4 machines, each with 256 MB RAM, a 40 GB hard drive, and a 1 Gbps network card, all hooked to a Gigabit Ethernet switch. You will also need an eleventh machine running Windows 2003 Server.
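
As a rough back-of-the-envelope figure, here's the usable capacity this hardware works out to (a sketch; the 2 GB per-node overhead is our assumption for Openfiler's own boot, root, and swap partitions, which we configure later):

# Rough usable-capacity estimate for the ten-node farm
# (the 2 GB/node OS overhead is an assumption, not a measurement)
nodes=10; disk_gb=40; os_overhead_gb=2
echo "$(( nodes * (disk_gb - os_overhead_gb) )) GB usable"   # prints: 380 GB usable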

Direct Hit!

Applies To: Storage managers

USP: Build a low-cost IP SAN, with ordinary H/w and easily available S/w

Primary Link: tinyurl.com/27fdx3

Google Keywords: iSCSI, Linux

On CD: PCQXtreme System/labs/iscsitarget-0.4.14.tar.gz

This will act as the controller that aggregates the storage from the remaining ten machines. We have given ISO images of Openfiler on this month's DVD, for both 32-bit and 64-bit machines; we used the 32-bit version. Just be sure that when you burn the ISO to a CD, you keep the writing speed at 4x. What you'll get is a bootable CD.
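
If you're burning the image from a Linux machine, the command looks something like this (a sketch; the device name and ISO filename are placeholders for your own):

# Burn the Openfiler ISO at 4x; adjust dev= and the filename for your system
cdrecord -v speed=4 dev=/dev/cdrw openfiler-x86.iso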

Prepare storage targets

After creating the CD, boot one of the cluster machines from it. The bootup will show a wizard-driven installation screen. Just click Next to start off, and on the next screen set the language to “English” and move on to keyboard selection. Here select “U.S. English”, and on the subsequent screen you will be asked to partition the disk using Disk Druid. By default, Openfiler takes up the first 6 to 8 GB of the first disk it finds on the system. But we suggest you choose the manual partitioning option and set /boot to 100 MB, root (/) to 1 GB, and the swap partition to 500 MB if you are using 256 MB RAM on the machine (which we were).
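
To summarize, here's the per-node layout we ended up with (the ~38 GB figure assumes the 40 GB disks from our setup):

# Manual Disk Druid layout used on each node
#   /boot    100 MB
#   /          1 GB
#   swap     500 MB   (sized for 256 MB RAM)
#   (rest)  ~38 GB    left unallocated, to be carved into iSCSI volumes later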


Once you are done, click OK and move on. Keep the GRUB configuration at its default, and you'll then be taken to the network configuration screen. If you are using a DHCP server on this isolated network, tick the DHCP option; otherwise give a manual IP and click Next. You will then be asked to set the time, and finally the installer will ask you to set the root password. Once you are through with the installation, reboot the machine. After booting, you will see a URL on the console, from where you can configure the entire box as an iSCSI storage target. Run this installation on all the machines in your storage cluster.

You have to manually create the partitions on each node of your storage cluster, choosing “Physical Volume” as the partition type.
Once you've created the physical partitions, you need to create a volume group from them. This comes in handy if you have multiple hard drives in a machine.
Once you've created a volume group, you need to convert it into an iSCSI file system and define how much of the partition space will be allocated to it (see the sketch below for what these steps do under the hood).
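
Openfiler's web interface is driving standard Linux LVM underneath. For reference, a rough command-line equivalent of the three steps above would look like this (a sketch only, not part of the procedure; /dev/hda3 and the names are placeholders, so check your own partition with fdisk -l):

# Rough LVM equivalent of the three web-UI steps
pvcreate /dev/hda3                        # 1. mark the partition as a physical volume
vgcreate vol_iscsi1 /dev/hda3             # 2. build a volume group from it
lvcreate -l 100%FREE -n lun0 vol_iscsi1   # 3. carve out the volume to export over iSCSI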

Configuring the cluster

Before configuring the Openfiler-based cluster boxes, hook up the controller machine to the same network subnet as these boxes. You will then be able to access the Openfiler administration web interface of each machine from the controller itself. Fire up a web browser on the controller machine and open the interface of any of the cluster boxes by entering the URL https://<IP address>:446. Note that it's an HTTPS connection (secure HTTP) and not an ordinary HTTP connection. You'll get a login screen; give the username as “openfiler” and the password as “password”.

Once logged in, you'll see all the administrative options: Accounts, Volumes, Quota, Shares, Services, and General. First go to the General option and you will get the “Local Network Connection” screen. Give any name to the network, let's say “local”. Then give the IP subnet where the machines are hosted (we used “192.168.6.0”), and finally the subnet mask (“255.255.255.0”). Once done, click the Update button.
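
Since you'll repeat this for every node, it helps to first confirm from the controller (or any Linux box on the subnet) that each node's admin interface is actually answering on port 446. A minimal sketch, assuming our 192.168.6.x example addressing:

# -k skips certificate checks (Openfiler uses a self-signed certificate);
# the IPs are placeholders for your own node addresses
for ip in 192.168.6.1 192.168.6.2 192.168.6.3; do
    curl -k -s -o /dev/null -w "node $ip -> HTTP %{http_code}\n" "https://$ip:446/"
done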

By default, the iSCSI target service is disabled. For the storage cluster to work, this service has to be enabled on every box; we'll do that under 'Enable iSCSI services' below.

Create iSCSI volumes

From the same interface, select the “Volumes” option and then, from its submenu, select “Physical Storage Mgmt”. It will show you a list of the physical hard disks on the box and their partitions. On the same screen you will find an “Edit Disk” header, through which you can edit the physical disk's parameters (the disk is denoted as /dev/hda). Click on the parameter and scroll down to the bottom of the web page. Here you will see the remaining unallocated disk space. By default the partition type is set to extended; change it to “Physical Volume” and click the “Create” button. Even though it will show the partition as created instantly, wait for a few minutes, because partition creation is still going on in the background. Our advice is to wait at least 15 to 20 minutes after this step. Then come to “Volume Group Mgmt” and scroll down the page. Here you will be asked to create a volume group from the partitions.


Give any name to the volume group; for example, we called it “vol_iscsi1”. Then, from the given list, tick the physical volume partition you created above and click the “Update” button. Now come to the “Create New Volume” option and click on it. You will be asked to fill in the name and description of the volume you want to create. Give any name that suits you, and on the same web page define the volume size. A slider is given to adjust the volume size; slide it all the way to the end to maximize the size. Then select the file system type for the volume. By default it is set to ext3, but you have to select “iSCSI” from the drop-down menu and click the Create button. Once the volume is created, the page will show the health of the volume as a pie chart.
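
If you have console or SSH access to the node, you can cross-check from its shell that the volume group and the new volume exist (a sketch; “vol_iscsi1” is the example name used above):

# Confirm the volume group and the logical volumes inside it
vgdisplay vol_iscsi1    # volume group details, including free space
lvs vol_iscsi1          # logical volumes carved out of the group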

 

Enable iSCSI services

Now you have to enable access to this volume for the controller. On the same page where the health of the just-created volume is shown, you will see a “Properties” parameter, and under that an “Edit” hyperlink. Click on it and it will bring up the Edit Volume Properties page.


Scroll down to the bottom of the page and you will find the “Volume Host Access” property, which is set to “Deny” by default. Set this to “Allow” and click the “Update” button to save the settings. Finally, you have to start the iSCSI target service on the Openfiler box, so that the volume you created above is exposed to the outside world as a block device. For this, select the “Services” option given at the top of the administrative web page. It lists all the services Openfiler currently offers, each with enable/disable options.

Identify the iSCSI target service in the list and enable it. With this, one of your Openfiler boxes is ready and acting as an iSCSI target.
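
At this point you can verify from any Linux machine on the subnet that the target is answering on the standard iSCSI port, 3260. A quick sketch using open-iscsi's iscsiadm (not part of this setup, just a convenient check; the IP is our example node address):

# Ask the node to list the iSCSI targets it exposes
iscsiadm -m discovery -t sendtargets -p 192.168.6.1:3260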

You have to follow the same process on all the Openfiler boxes you will be using in your storage farm.

All iSCSI target disks can be viewed together from the Windows based controller

Configuring the controller

We now have to accumulate all the storage hosted on the different Openfiler boxes and aggregate it into a single store. For this we have used a controller machine running Windows, where all the storage from the Openfiler boxes will be aggregated into one huge Windows volume that can later be shared with users on the network.

To access all the Openfiler boxes being exposed on the IP network, you need a piece of software called an iSCSI initiator on the controller machine. Initiators are available for both Windows and Linux. Here we have used the Microsoft iSCSI Initiator, which is available as a free download from http://tinyurl.com/ywtw3.

Install this software on the controller and you will get a Microsoft Initiator icon on your desktop. Double-click on it and you will get its interface with five tabs: General, Discovery, Targets, Persistent Targets, and Bound Volumes/Devices. Select the Discovery tab and add the IP addresses of all the Openfiler boxes you are using in your cluster. Then select the Targets tab and you will find the names of all the targets along with their status.

On the same screen, come down and select the “Log On” button; you will get a pop-up screen with log-on options for the target. Here enable both options, “Automatically restore this connection when the system boots” and “Enable multi-path”, and then click OK to apply the settings. After this, come to the “Bound Volumes/Devices” tab and click the “Bind All” button.
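
The same steps can also be scripted through iscsicli.exe, the command-line front end that ships with the Microsoft iSCSI Initiator. A rough sketch (the IP address and the target's IQN below are placeholders; copy the real IQN from the Targets tab):

rem Add an Openfiler box as a target portal, list its targets, log in;
rem repeat for every node (the IP and IQN are placeholders)
iscsicli QAddTargetPortal 192.168.6.1
iscsicli ListTargets
iscsicli QLoginTarget iqn.2006-01.com.openfiler:vol_iscsi1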

Now go to your Management Console and open Disk Management. If you've set up the target machines in the initiator correctly, you will see the new drives, shown as disconnected, in the Disk Management console. Right-click on a disconnected drive and select the “Initialize” option from its context menu. This runs a wizard to initialize the drives and convert them into dynamic disks. Select all the disks except Disk 0, which is Windows' physical C: drive, then press Next and Finish in the wizard to convert the disks to dynamic volumes. At this point all the disks will be shown as unallocated storage.

Now select Disk 1 and right-click on it; from the context menu select “New Volume”, which opens another wizard for creating a fresh volume. Click Next, and on this screen you will be asked to select the type of volume you want to create. Select “Spanned” and click Next. Add all the disks to the spanned volume and click Next. On the next screen, give the volume a name and a drive letter, and don't select quick format. Then click the Finish button to close the wizard. This process may take a few minutes, because it formats the huge aggregated volume with the NTFS file system. Once the process is complete, you will see one huge volume (close to 380 GB with our ten 40 GB drives) appearing as a single Windows disk. This drive is treated just like a local hard drive, even though everything happens over the network. You can further share this volume with users on the network.
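
For reference, the same aggregation can be driven from the command line with diskpart (a sketch; it assumes the iSCSI disks show up as Disk 1 through Disk 10, so run “list disk” first and adjust):

rem Inside a diskpart session on the controller
list disk
select disk 1
convert dynamic
rem ...repeat select/convert for disks 2 through 10, then:
create volume simple disk=1
extend disk=2
rem ...repeat "extend disk=N" for disks 3 through 10
assign letter=S
rem finally, format from a command prompt: format S: /fs:ntfs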

How it performs

We also measured the performance of this storage cluster using Iometer. For this we connected two Windows XP P4 machines running Iometer to the same Gigabit switch as the storage cluster. We created a share on the spanned volume of the storage cluster and mapped it as a local drive on each initiator. In Iometer we created four workers on each machine and targeted each at the mapped drive. To stress the storage cluster we used 64 KB and 128 KB transfer request sizes (the number of bytes read or written in each I/O request) and did 100% random and sequential reads and writes. We ran these tests three times, for 2, 5, and 10 minutes. The tests measured two performance parameters: total input/output operations per second (IOPS) and throughput in MB/sec. Just to see where it stands, we compared the results of this cluster against those from our NAS shootout done earlier this year (PCQuest, February 2007 issue). Our cluster gave the best results in 100% random reads, which is understandable since the storage is actually split across multiple machines. On the other hand, it gave the lowest performance in the sequential read test; this essentially reflects the performance of each individual drive in the cluster, since sequential I/O works through one disk at a time instead of spreading across the machines. The results were average in the remaining two tests. This kind of cluster is therefore good for applications requiring heavy simultaneous read operations.
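
The two metrics are tied together by the transfer request size, which makes it easy to sanity-check results. A worked example with made-up numbers (the 500 IOPS figure is purely illustrative, not one of our measurements):

# MB/s = IOPS x request size; e.g. a hypothetical 500 IOPS at 64 KB
iops=500; req_kb=64
echo "scale=1; $iops * $req_kb / 1024" | bc   # prints 31.2 (MB/s)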
