Areca 1220 + LVM2 + ext3 Setup

fibersnet2

I have been doing some research into expanding my fileserver. Currently it has 7 x 300GB Seagates (ST3300822AS) and has been performing flawlessly for the past 3.5 years.

I just took the plunge and bought 5 x 7K2000 (Hitachi 2TB consumer drives). Since WD disabled the option to change TLER settings, and given the many reports of the 7200.11 Seagates' click of death, I decided to try these Hitachis due to the low price and the lack of a bipolar distribution between good and bad experiences. (This may or may not have been wise, but I will let you know in due time.)

Well, the purchase has been made and what remains is setting up the system.

Due to reported difficulties with installing grub2 on a setup with GPT, I propose the following scheme for formatting the disks. I do not have a great deal of experience, so this may make sense or there may be a serious flaw which I cannot see.

1) Create a raidset with drives 1-5
2) Create a RAID6 volumeset of 64GB that will be used to install the OS. Since the size is relatively small, I do not have to worry about booting off a GPT partition table and can use a standard DOS partition table with fdisk
3) Create a RAID6 volumeset with a 128kB stripe that takes up the rest of the raidset, with the default option of 64BitLBA
4) Install Linux (Debian) on /dev/sda (which will probably correspond to the 64GB volumeset)
5) Set up LVM on /dev/sdb (which probably corresponds to the 6TB second volumeset)
6) Format the LVM with ext3, making sure that the boundaries are aligned (a rough sketch of steps 5 and 6 is below)
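Assuming the data volumeset shows up as /dev/sdb, I am thinking of something like this (the volume group and logical volume names are just placeholders; stride and stripe-width assume the 128kB stripe with 3 data disks in a 5-drive RAID6):

Code:
pvcreate /dev/sdb                       # use the whole volumeset as an LVM physical volume
vgcreate datavg /dev/sdb                # placeholder volume group name
lvcreate -l 100%FREE -n datalv datavg   # one big logical volume
# stride = 128kB stripe / 4kB block = 32; stripe-width = stride * 3 data disks = 96
mkfs.ext3 -b 4096 -E stride=32,stripe-width=96 /dev/datavg/datalv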

This file server just stores large files for my house, it will not be heavily loaded or need to support concurrent users.



I know in the past people have recommended having the OS on another drive or even another array, and ideally I would like to do this, but to keep costs to a minimum, why not install it on the array that already contains all of the data? It is okay if I take the server offline to do work on it, as I am the only user.

The last requirement is that the large LVM partition can be resized. I have expanded my fileserver in the past by putting in more drives, expanding the raidset, expanding the volumeset, expanding the LVM, and finally expanding the filesystem (ext3), as sketched below.
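Roughly, that grow sequence looks like this (device, volume group, and logical volume names are just placeholders; the raidset/volumeset expansion itself happens on the Areca controller first):

Code:
# after the controller-side expansion completes, have Linux pick up the new device size
echo 1 > /sys/block/sdb/device/rescan
pvresize /dev/sdb                          # grow the LVM physical volume
lvextend -l +100%FREE /dev/datavg/datalv   # grow the logical volume into the new space
resize2fs /dev/datavg/datalv               # grow ext3 (can be done while mounted)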

Please let me know if you see any serious drawbacks or flaws in my logic (I would not be surprised). I am open to any and all feedback.

After this is all done, I will have a thorough writeup in case anyone else ever encounters a similar situation.
 
I'd consider running the OS on a separate flash disk (be it CF, a DOM, or an SSD) if its only task will be serving files, but I don't see any major downsides to letting the OS run on the RAID. As long as the system isn't RAM-starved, anything from the OS partition that is needed should make it into the file cache quickly, so you shouldn't see much of a performance hit in real life, as long as you don't have other things going on that churn data through RAM.

Otherwise I think you've got things well sorted out. I would personally choose a different filesystem, but there's nothing really wrong with ext3.
 
Do not install your OS on the Areca RAID volume; use a separate drive, USB stick, flash, etc. Trust me, this can create extra hassle and aggravation when things go wrong. The last thing you want to worry about is rebuilding your OS when stuff goes wrong and you have TBs of data on the line.

I would also go with another filesystem. There is nothing wrong with ext3, it's just bleh. Personally I use XFS on my file server: real easy to grow, works with LVM perfectly, fast, stable, and has native support for ACLs.
 
Booting off GPT with grub really isn't that big of a deal, but what you specified will work fine. Here is my layout, where grub boots off a GPT partition table:

Code:
root@sabayonx86-64: 01:31 AM :~# parted /dev/sdc
Warning: GNU Parted has detected libreiserfs interface version mismatch.  Found 1-1, required 0. ReiserFS support will be disabled.
GNU Parted 1.7.1
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p

Disk /dev/sdc: 17.9TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      17.4kB  100MB   100MB   ext3         primary
 2      100MB   35.0GB  34.9GB  jfs          primary
 3      35.0GB  17.9TB  17.9TB  jfs          primary

(parted) quit
Information: Don't forget to update /etc/fstab, if necessary.

root@sabayonx86-64: 01:31 AM :~# df -H
Filesystem             Size   Used  Avail Use% Mounted on
/dev/sdc2               35G    20G    16G  56% /
udev                    26G   259k    26G   1% /dev
none                   2.1M   332k   1.8M  16% /lib/rcscripts/init.d
/dev/sdc1              100M    81M    20M  81% /boot
/dev/sda1               84G    58G    27G  68% /winxp
/dev/sdb1               21G   6.7G    15G  32% /mac
/dev/sdc3               18T    13T   5.4T  71% /data
tmpfs                   26G      0    26G   0% /dev/shm

And I would say go ahead and put the OS on the array. I think that whole 'use a separate disk' rule is more for Windows than for Linux, as not much I/O will be done on the OS volume (once the machine is up and running as a server), and with Linux it's very easy to boot off a LiveCD or USB stick to access the array/CLI/utility if problems arise (which is not the case with Windows).
 
Thank you for the replies.

I have been reading these forums and found an example with XFS directly on the block device. This way, when I want to expand my RAID array, all I have to do is configure it on the controller and then grow the XFS. I would not need to worry about fdisk, GPT, or LVM.

I just need a large block of free space, so I really do not need partitions. Do you see any pitfalls with this method?
 
XFS directly on the block device is awesome; that's what I run on my file server. Well, sort of: I do use LVM so I can easily carve up the space for whatever reason, but I do not have any physical partitions. I set up LVM directly on my Areca RAID6 volume, then formatted the logical volume with XFS. Been running this setup for ~3 years, no problems.
 
That is great news to hear. Are you using LVM directly on top of the block device? Not sure if this is possible. Or on top of GPT?
If you wouldn't mind, could you post how you did this in a little more detail?

Thank you very much
 
That is great news to hear. Are you using LVM directly on top of the block device? Not sure if this is possible.
It definitely is, and it works fine. Just do a pvcreate directly on your block device. I would recommend keeping some kind of volume management though, it does come in handy, but creating the filesystem directly on the block device without it works too.

Make sure you have a BBU and preferably a UPS (that can shut down the machine cleanly) as well if you're going to use XFS, as it is not very tolerant of unexpected power outages. I've nearly lost data a couple times.
 
Thanks again for the advice.
And lastly, do you have any tips on aligning the partition? I have read several tutorials on the mkfs options, but there does not seem to be any definitive rule.
I will start with RAID6 using 4 drives and expand to a fifth drive immediately, so I can document the exact steps before my array has real data on it.
 
It depends on the filesystem you decide on and whether you partition the block device or not, as each has a different offset from the start of the device.
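If you do end up partitioning it, one simple way to keep things aligned is to specify the start explicitly in sectors; for example (the device name and values are just an illustration, assuming a stripe size that divides evenly into 1MiB):

Code:
# hypothetical example: start the data partition at sector 2048 (1MiB),
# which is an even multiple of a 64kB or 128kB stripe
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 2048s 100%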
 
That is great news to hear. Are you using LVM directly on top of the block device? Not sure if this is possible. Or on top of GPT?
If you wouldn't mind, could you post how you did this in a little more detail?

Thank you very much

Yes, I am using LVM directly on the raw RAID device, with no GPT or any other partitions. The only reason I run LVM between the array and XFS is to be able to carve up pieces of the volume for different tasks. For example, I have dedicated 100GB to an iSCSI target.

So let's say your Areca RAID volume is /dev/sdb. You just do
Code:
pvcreate /dev/sdb
then create the volume group
Code:
vgcreate vg_name /dev/sdb
(mine is vgcreate arecavg1 /dev/sdb), then the logical volume, specifying a size if wanted; for example, my iSCSI target is:

Code:
lvcreate -L100G -n iscsitarget arecavg1

then you just format it with XFS:
Code:
mkfs.xfs <options> /dev/arecavg1/name_of_logical_volume


Thanks again for the advice.
And lastly, do you have any tips on aligning the partition? I have read several tutorials on the mkfs options, but there does not seem to be any definitive rule.
I will start with RAID6 using 4 drives and expand to a fifth drive immediately, so I can document the exact steps before my array has real data on it.

Here's how to set up XFS on a RAID array.

I've explained it in another thread (http://hardforum.com/showthread.php?t=1449908, btw a decent read about RAID volumes and filesystems).

Like I said, the specific options depend on the type of array, the stripe size, and the number of disks, but the general formula is
Code:
mkfs.xfs -d su=<stripe_size>,sw=<nr_data_disks> -l version=2,su=<stripe_size> /dev/sdX
where nr_data_disks is the number of data disks in the array, so for RAID6 it would be the number of disks minus 2, for RAID5 the number of disks minus 1, etc.
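For example, a 5-disk RAID6 with a 64kB stripe has 3 data disks, so it would come out to something like (device name is just an example):

Code:
# example: 5-disk RAID6, 64kB stripe -> 3 data disks
mkfs.xfs -d su=64k,sw=3 -l version=2,su=64k /dev/sdX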
 
Thank you again; I am doing one last backup now before I begin to initialize the new raidset.

I am copying all the files to one of the 2TB drives and then I will make the raid6 array with 4 drives. After the OS is installed, I will put the fifth drive in as a passthrough, copy over the data to the raid6 array, and then expand the array onto the fifth drive.

This way I can get some practice and an established set of steps to expand, so that I can refer to it in the future.

I will let everyone know how these Hitachi 2TB drives work.
 
Make sure your extent size in LVM is a multiple of your RAID block size too.
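If you want to set these explicitly, a rough sketch, assuming a 64kB stripe and the arecavg1 name from earlier (the --dataalignment option needs a reasonably recent lvm2):

Code:
pvcreate --dataalignment 64k /dev/sdb   # start the PV data area on a stripe boundary
vgcreate -s 4M arecavg1 /dev/sdb        # 4MB physical extents, an even multiple of 64kB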
 
Thanks.

My volumeset is being created as we speak. In the meantime, I was finalizing my XFS commands.

Here they are:

Code:
mkfs.xfs -d sunit=128,swidth=384 -i size=1k -l version=2,su=64k,lazy-count=1,size=128m
mount -o noatime,logbsize=256k,inode64,sunit=128,swidth=384


One thing that I found very reassuring was that when I expand the RAID array, I can mount it with the corresponding 'swidth' parameter, so that the impact of the extra drives will be accounted for.
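Roughly, I expect that future grow step to look like this, assuming the filesystem sits directly on /dev/sdb, is mounted at /data, and a sixth drive brings it to 4 data disks:

Code:
# after the controller-side expansion to 6 drives (4 data disks)
umount /data
mount -o noatime,logbsize=256k,inode64,sunit=128,swidth=512 /dev/sdb /data
xfs_growfs /data    # grow XFS to fill the enlarged volume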

I will run some benchmarks tomorrow and let you know how it goes.
 
So far the migration is going well; I have all the files copied over onto the XFS filesystem. I decided not to go with LVM since, if I ever need to expand past this RAID card, I will have to get another one and will probably migrate everything over again.

The initialization of the 4 x 2TB RAID6 volumeset took about 27 hours.

I have now put the fifth 2TB drive in and am expanding my raidset. The individual volumeset is migrating, and I suppose this could take upwards of 35 hours.



I am not sure what "FreeCap" means. I would have expected it to say 2000GB, but for some reason it says 3280.0GB. I have two volumesets now, one that is 64GB and one that is 3936GB, for a total of 4TB:

Does anyone know why this is? (Maybe it's taking RAID6 into account, although I don't see why it should.)


Code:
CLI> disk inf 
  # Ch# ModelName                       Capacity  Usage
===============================================================================
  1  1  Hitachi HDS722020ALA330         2000.4GB  Raid Set # 00   
  2  2  Hitachi HDS722020ALA330         2000.4GB  Raid Set # 00   
  3  3  Hitachi HDS722020ALA330         2000.4GB  Raid Set # 00   
  4  4  Hitachi HDS722020ALA330         2000.4GB  Raid Set # 00   
  5  5  Hitachi HDS722020ALA330         2000.4GB  Raid Set # 00   
  6  6  N.A.                               0.0GB  N.A.      
  7  7  N.A.                               0.0GB  N.A.      
  8  8  N.A.                               0.0GB  N.A.      
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> rsf info
 #  Name             Disks TotalCap  FreeCap DiskChannels       State          
===============================================================================
 1  Raid Set # 00        5 10000.0GB 3280.0GB 12345              Migrating
===============================================================================
GuiErrMsg<0x00>: Success.

CLI> vsf info
  # Name             Raid Name       Level   Capacity Ch/Id/Lun  State         
===============================================================================
  1 OS-VOL           Raid Set # 00   Raid6     64.0GB 00/00/00   Normal
  2 MONSTER-VOL      Raid Set # 00   Raid6   3936.0GB 00/00/01   Migrating(1.2%)
===============================================================================
GuiErrMsg<0x00>: Success.
 
Here is another piece of information showing the same thing:

Code:
./cli64 rsf info raid=1
Raid Set Information 
===========================================
Raid Set Name        : Raid Set # 00   
Member Disks         : 5
Total Raw Capacity   : 10000.0GB
Free Raw Capacity    : 3280.0GB
Min Member Disk Size : 2000.0GB
Raid Set State       : Migrating
===========================================
GuiErrMsg<0x00>: Success.


I have no idea where this 3280GB came from. I had 0GB free before adding the 2TB drive, and now I expect 2000GB free raw capacity but it says 3280GB?
 
You migrated the RAID level from RAID6 to RAID5, hence the extra free space.
 
That would not be good; I didn't select any options to migrate from RAID6 to RAID5, so this is unexpected.

I will wait for this to complete and then verify again what RAID levels are present.

Hmm, the raidset expansion caused the volumeset migration, and the OS-VOL migration has already completed. It is still at RAID6, and the same settings were applied automatically to the bigger MONSTER-VOL.

Even if it had migrated to RAID5, and the figure of 3280GB were accurate after the migration, I am still not sure how it's getting that number. RAID5 should free up the space of one drive.

Thanks for the help.
 
Slight bump. I found this thread after googling for experiences with the HDS722020ALA330 in an Areca array, as I'm planning to build a new 4-drive RAID5 array, and apparently you even have the exact same model (ARC-1220) that I have :)

Usually I mix and match various drives of the same size to spread the risk a bit, but these days apparently not just Samsung but also Seagate and Western Digital drives may be risky attached to hardware RAID cards (due to odd firmware problems, TLER, and whatnot).
I'm wondering if you've had any hiccups or other odd experiences (for example during high data throughput situations) with these Hitachi drives since installing them?
 