SAN Chat: iSCSI vs. Fiber

Manu
I am looking at moving into a SAN environment for my company. Who has one? Who manages one? Who knows a ton about them?

Our current setup is localized storage on various servers. These are mostly 10K SCSI drives in RAID 0/1 or RAID 5. We also have a NAS serving as the main file store, built on 7200RPM IDE drives.

We have about 1.5TB of data, about 150GB of Exchange data, and about 10GB of SQL DBs (spread across about 4 different DBs).

We've been looking at a lot of the SMB SAN options, such as those from IBM and HP. They are basically some Fibre Channel HBAs, a fiber switch, and an HP MSA1500. Does anyone know about or use these?

The other option that's been presented to us is an iSCSI solution from LeftHand Networks. Any experience or thoughts?

I am just in a major data gathering mode at this point, and any help would be appreciated.
 
I looked at the HP 1500cs SAN bundle for a remote site recently. I had jumped through all the hoops when my order got canceled by our VP, but I had specced out the 1500cs with 2 HBAs, a fiber switch, and 3TB worth of SATA drives for a file and print cluster.

I would recommend staying away from SCSI drives as they are on their way out; go with either SATA or SAS. For the money, that HP SAN was great. It included everything to get 2 servers started (which is all I needed).

Our enterprise cost on the bundle with 14x 250GB SATA drives was just over $10K.
 
We've been running an iSCSI network for 4 years now. Running on our 2nd-generation NetApp, it has been rock solid.

For iSCSI, there are a lot of other vendors besides NetApp and LeftHand. EqualLogic is also a well-known brand.

When comparing IP SANs to FC SANs, there are many things to consider, including restore times, RAID types, recovery time, and some of the other neat things each solution can do.

On the IBM side, their NAS devices are OEM'd NetApps.

Unless you have some specific requirements, I think iSCSI would be the way to go.
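
On the RAID-types point: the level you pick drives usable capacity (and rebuild behavior), so it's worth doing the quick math on any shelf you're quoted. A back-of-envelope sketch in Python, using the 14x 250GB SATA shelf mentioned earlier in the thread as the example (the helper function is purely illustrative):

```python
# Rough usable-capacity math for common RAID levels.
# Illustrative only -- real arrays reserve space for hot spares,
# metadata, and right-sizing, so vendor numbers will be lower.

def usable_tb(drives: int, size_gb: float, level: str) -> float:
    """Approximate usable capacity in TB for a single RAID group."""
    if level == "raid5":        # one drive's worth of parity
        data_drives = drives - 1
    elif level == "raid6":      # two drives' worth of parity
        data_drives = drives - 2
    elif level == "raid10":     # mirrored pairs
        data_drives = drives // 2
    else:
        raise ValueError(f"unknown RAID level: {level}")
    return data_drives * size_gb / 1000.0

# The 14x 250GB SATA shelf from the HP bundle discussed above:
for level in ("raid5", "raid6", "raid10"):
    print(f"{level}: {usable_tb(14, 250, level):.2f} TB usable")
# raid5: 3.25 TB, raid6: 3.00 TB, raid10: 1.75 TB
```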
 
Just received 2x SAN units last week for 2 clients of ours; I installed one already. HP has a good deal on their SAN 1000 units: they have a "starter kit" which includes the switch and 2x HBAs, for about $10K without the drives.

They also have a smaller small-business version for about $7,500.
 
I literally just ordered our SAN yesterday afternoon; iSCSI seemed the way to go for us. We got quotes from those three: NetApp, LeftHand, and EqualLogic. We ended up going with the EqualLogic as it was the cheapest by a few thousand, and I really liked their "all-in-one" licensing, whereas NetApp licenses each feature individually, which could nickel-and-dime a person to death.

 
I've been running iSCSI for a few years as well. It's been solid for us; we don't have crazy performance targets (SQL, moderately used Exchange, plus ESX 3). ESX has been the most performance-hungry of the bunch, but it still runs pretty well.

I'm running FalconStor IPStor. I like it because you can throw a variety of back-end disk cages/types on it. Additionally, I can just add Fibre Channel HBAs to my SAN front ends and hook them up to a fibre network if I need the performance in the future. I know a lot of people run EqualLogic very well, but I don't like the all-in-one nature; I'd prefer to just add dumb disk cages if I already have redundant front ends.

A buddy of mine just dropped BANK on a Compellent SAN; those things are daaaaamn nice. I really like the auto-tiering they've got (I believe 3PAR does it as well, but not at the block level).
 
We looked at various SAN options for our company and ended up deciding on an iSCSI solution. It saves money by not having to buy fibre switches, HBAs, etc. I personally like it because if you need or want to connect more machines to the iSCSI SAN, all you need is an iSCSI initiator (free for pretty much any OS), point it at the SAN, and bam, you're off and away. From my point of view, for pretty much any small/medium business, iSCSI is the way to go.
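
Part of why it's so easy: on Linux, attaching to a target is just a discovery and a login. Here's a rough sketch scripting it with open-iscsi's iscsiadm from Python. The discovery and login commands are the standard open-iscsi ones, but the portal address and target name are made-up placeholders, and you'd need root plus the open-iscsi package installed:

```python
# Sketch: attach a Linux host to an iSCSI target using open-iscsi.
# The portal IP below is a hypothetical placeholder.
import subprocess

PORTAL = "192.168.10.50"  # hypothetical SAN portal address

def run(*args: str) -> str:
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(args, check=True, capture_output=True,
                          text=True).stdout

# 1. Discover targets offered by the portal.
targets = run("iscsiadm", "-m", "discovery",
              "-t", "sendtargets", "-p", PORTAL)
print(targets)  # e.g. "192.168.10.50:3260,1 iqn.2007-01.com.example:vol0"

# 2. Log in to the discovered target (last token is the IQN when the
#    portal offers a single target).
target_iqn = targets.split()[-1]
run("iscsiadm", "-m", "node", "-T", target_iqn,
    "-p", PORTAL, "--login")

# The LUN now shows up as a local block device (e.g. /dev/sdb),
# ready to be partitioned and formatted like any other disk.
```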
 
OakFan-

The nice thing about the SCSI drives is that they are cheap and, from what I can see, will remain readily available for the foreseeable future, even if they're not the latest and greatest in server storage technology.

What you specced out is very similar to the setup we are looking to implement.


DarthKim-

Can you talk a bit about the prominent features I should be looking at? What are some of the "cool features" I may be missing in this SMB HP solution that a LeftHand or other iSCSI solution would have?


Party2Go-

What model did you end up going with? What were some of the deciding factors to go iSCSI vs. fiber, and specifically for the solution you chose?


StarTrek-

I've not seen much in the way of cost savings on the iSCSI side when comparing quotes for the various solutions...
 
(I am gonna sound like a NetApp whore... anyways.)

One of the major differences they point out is that "snapshot" capability is not the same across vendors. Even when a vendor claims their snapshots are like NetApp's, they are quite different (the ones I evaluated were LeftHand and EMC).

The thing with NetApp is that their power is in the OS of the product, not the actual hardware itself. It's very easy to use; most sysadmins/network admins will pick it up very quickly.

We use a lot of CIFS/NFS on our IP SAN, so those types of capabilities are very important to us. Snapshotting is not a big deal on our block-level volumes.

Just as a reference, ours is about 6TB, with one ATA shelf and four FC shelves. We run everything from Exchange to Virtual Server 2005 to SQL 2000/2005.
 
Can you talk about the snapshot feature a bit? What does it do exactly, and how does it work?
 
You may want to read up about it on Google...

http://www.google.com/search?q=san+snapshots

edit: not saying that to be an ass. You're asking the right questions (i.e., what are the differences in the offerings, etc.), but some things like "what are snapshots" can be found by Googling instead of imposing on someone to type up a summary for you.
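
For a quick mental model while you read up: most array snapshots are copy-on-write, meaning a snapshot is instant and initially shares all its blocks with the live volume, only preserving a block's old contents the first time that block is overwritten. A toy Python sketch of the idea (purely conceptual; this is not how NetApp or any other vendor actually implements it):

```python
# Toy copy-on-write snapshot of a "volume" (a dict of block -> data).
# Conceptual only: real arrays work at the block layer with reference
# counts and far more bookkeeping.

class Volume:
    def __init__(self):
        self.blocks = {}        # live block map: block number -> data
        self.snapshots = []     # each snapshot: {block -> old data}

    def snapshot(self):
        """Taking a snapshot is instant: just start a new (empty) map
        of preserved blocks. No data is copied yet."""
        self.snapshots.append({})

    def write(self, block: int, data: bytes):
        """Before overwriting a block, preserve its old contents in any
        snapshot that hasn't already saved that block."""
        for snap in self.snapshots:
            if block not in snap:
                snap[block] = self.blocks.get(block)
        self.blocks[block] = data

    def read_snapshot(self, i: int, block: int):
        """A snapshot read returns the preserved copy if the block has
        changed since the snapshot, else the current live block."""
        snap = self.snapshots[i]
        return snap[block] if block in snap else self.blocks.get(block)

vol = Volume()
vol.write(0, b"v1")
vol.snapshot()                   # instant, zero-copy
vol.write(0, b"v2")              # old contents preserved on first write
print(vol.read_snapshot(0, 0))   # b'v1' -- the point-in-time view
print(vol.blocks[0])             # b'v2' -- the live volume
```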
 
The initial cost savings may not be that great depending on how many systems you plan to hook up to the SAN; however, future connections are free for any additional systems you want to hook up. When we initially bought our NetApp SAN, we only planned to use it for storage and VMs for a particular system we were rolling out.

That was at the beginning of the year. Now we have 3 ESX servers connected to it running nearly 30 VMs, all our databases reside on it, and it holds our disk-based backup data as well. We plan to migrate our Exchange server to it next year too.

I'm not sure what the size/growth of your company is, but I can tell you that when we first put this in, we had no idea how useful it would become or how much we would actually utilize it. If we had a Fibre Channel solution, all the additional physical machines we've since connected would have cost us thousands of dollars in HBAs, fibre, etc. (whereas the iSCSI initiator from Microsoft is free, and there's one included in most Linux distros).

I will also echo the sentiment about drive types. We have two shelves on our SAN: one with 1.4TB of SAS drives ("fast disk," if you will) and one with 3.5TB of SATA drives. While SCSI is still available, our thinking was that production of those drives will decline over the next few years, while SAS and SATA seem to have become the new standard and should be around for many years to come. Plus, if I remember correctly, our SAS drives were actually a bit cheaper than their SCSI counterparts.

It is up to you to know what's best for your company, but I would strongly recommend that you take future uses into account. Especially with virtualization becoming the way to go nowadays, your SAN could very well end up being used for more than you think.
 
StarTrek-

Thanks for the writeup. It is definitely another push in the iSCSI direction. I am getting a demo from LeftHand today and will probably be making a decision later in the week; I'll let you guys know how it goes.

If anyone has any other words of wisdom or experience, I'd love to hear it.
 
Check out Siafu and iStor. There are also Infortrend and Promise.

I work in storage so if you need to buy something, let me know.
 
I'm a StorageTek FC guy, but I have to say I'm starting to like our NetApp cab.
I have never run iSCSI, but I noticed it can be added.
 
I really liked the LeftHand solution, but I couldn't make a real cost justification for it. The day-one price was very similar, but expansion was very costly. We're going to go the FC MSA route.
 
Why not do both? :p What was your budget again? Since NetApp's new 2000 series is just barely out, you may not have had a chance to see what they offer or the starting prices (as low as $12K). NetApp boxes are not only very easy to use, but can do CIFS, NFS, iSCSI, FCP, HTTP, FBI, CIA, IRS... you get the picture. :D They are extremely versatile; you can even just swap out the head for a higher-end model when you outgrow it, keeping the existing disk shelves. If you haven't talked to a NetApp sales guy, seriously give them a shout. I can probably even hook you up with one I know. :cool:

http://www.netapp.com/products/storage-systems/enterprise-storage/fas2000.html
http://www.pcworld.com/article/id,137016-c,storage/article.html
 
If you're still looking at HP, I would say stay away from the MSA series if you want to grow this into anything. We are experiencing some growing pains with the MSA, and performance is quite crappy when hooked up to our 3 ESX boxes. We will hopefully be looking at an EVA rig at the beginning of the year for our storage.
 
That's to be expected; the MSA is not a premium Tier 1 storage solution. It's a Tier 2, cost-effective solution for centralizing data.

The upside to choosing any FC SAN is that you can add an iSCSI gateway pretty cheaply and give servers that don't need 4Gb FC connectivity access to the SAN without spending money on FC cards.
 
I have not worked with any of the lower-end/cheap/SMB SAN or NAS solutions, but here is some food for thought on SANs in general:

- What is your workload like? If it's very high, go with FC; if it's relatively low, go with iSCSI. Are you booting from SAN? If so, make sure your IP network has the bandwidth to support all the servers booting at once (see the rough math sketched after this list). Also, some devices (e.g., NetApp) have a limit (artificial or not) on the number of servers booting from iSCSI that they will support.

- What are your RTO and RPO for the various applications? This will have more bearing on the actual storage device and its features ("snapshots" and "SnapMirror" in NetApp terminology) than on the technology you use to connect servers to the storage, however.

- Do you need to replicate off-site? Again, this is a feature mainly implemented at the device level, but it has a bearing on your connectivity choice too. If you have an all-FC solution and need to replicate off-site, you're going to be buying some expensive gear and expensive metro-area fiber. Alternatively, you could get FC switches with FCIP gateway blades in them. With an all-IP solution like iSCSI, your options for off-site replication increase (though you may have to live with async instead of sync replication). A lot of this depends on the SAN/NAS device, though, and the features available on it.

- With Exchange, keep in mind it is no longer written in stone that you need to use a SAN. With Exchange 2007, they've solved a lot of the HA issues in software. Microsoft IT has been running Exchange 2007 without any kind of SAN (data is stored on DAS now) for over a year and has nothing but glowing results. It was not a popular decision at first, primarily with the storage guys, but they have been impressed with the performance and won't go back (at least, that's what I hear out of the MS camp).
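
Here's the rough boot-storm math promised above, sketched in Python. The server count, per-server boot I/O rate, and link throughputs are all illustrative assumptions, not measurements:

```python
# Back-of-envelope: can the network absorb a "boot storm"?
# All inputs are illustrative assumptions, not measured values.

GBE_MBPS = 125   # 1Gb Ethernet, theoretical max, in MB/s
FC4_MBPS = 400   # 4Gb Fibre Channel, roughly, in MB/s

servers      = 20   # hypothetical number of boot-from-SAN hosts
boot_io_mbps = 30   # assumed per-server read rate while booting

demand = servers * boot_io_mbps
print(f"boot-storm demand: {demand} MB/s")               # 600 MB/s
print(f"1GbE links needed: {demand / GBE_MBPS:.1f}")     # ~4.8 links
print(f"4Gb FC links needed: {demand / FC4_MBPS:.1f}")   # ~1.5 links

# The same arithmetic answers the restore-time (RTO) question:
# pushing the thread-starter's ~1.5TB over one saturated GbE link
# takes 1_500_000 MB / 125 MB/s, i.e. a bit over 3 hours, best case.
print(f"1.5TB over GbE: {1_500_000 / GBE_MBPS / 3600:.1f} hours")
```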

I work in a large IT environment. We have around 2PB of data on our SAN and NAS infrastructure (roughly 99% NetApp, with the remainder split between EMC and 3PAR). So far, we've deployed only FC SANs to production. However, we have done some test implementations of iSCSI, and it is promising. We will be looking to deploy more soon.

If I were in a small IT shop, I would personally have a lot of trouble justifying an FC SAN to myself and the business. But I haven't worked in one of those environments in a long time, so ultimately it depends on the value of the data and the applications running on the SAN.

One of the bigger questions with storage technology today is who owns the infrastructure. Networking guys want to own the IP network (and sometimes the FC network), but the storage guys have a very large stake in that game when storage is attached. In larger organizations, this political fight is obviously more real. In your environment, if you're the only IT guy, you have a lot more flexibility, and I envy you =)
 
Has anyone ever used the HP ProLiant Storage Server lineup with the Microsoft iSCSI Target software? I'm also looking into implementing a pilot iSCSI SAN in my environment.
 
SAS is the way to go.

The SCSI protocol is still alive and kicking. However, the parallel SCSI interface and the low-level stuff are pretty much on the way out. Serial Attached SCSI (SAS) is now being used by a majority of server vendors for direct-attached storage (DAS).


As for iSCSI vs. FC... well, it should really be SCSI over Fibre Channel vs. SCSI over IP:

1.) Use FC if the storage is used for something that is critical or I/O-intensive (a database, your Exchange server DB, etc.).

2.) Use iSCSI for simple network-attached storage that isn't heavily loaded, e.g., a file server.
 