Virtualizing Exchange and SQL

Keiichi

[H]ard|Gawd
Joined
Jun 10, 2004
Messages
1,491
I'm planning on putting Exchange 2007 and SQL 2008 server on the same physical host using Hyper-V. The machine in question would most likely be a dual quad-core PowerEdge with 8GB of RAM and external storage for the DB store, logs, etc. Exchange would handle roughly 120 mailboxes, and SQL would be used for running reports by 10 people at most and really light DB work. I was wondering if this setup would be adequate, or is it just a bad idea to have SQL and Exchange share the same physical host?
 
Both of those apps are more memory & storage I/O dependent than processor dependent. You'll want a fast RAID 10 for your disks and lots of memory to avoid actually using the RAID 10.

As to 8GB of RAM, that being enough or not really depends on your usage. Do you have both of these apps already running on physical servers? If so, what are the specs? Are they well utilized?
 
Our email is currently hosted by a third party. We do not have a SQL server. As part of the host machine setup, it will most likely be RAID 10 using four drives for the system volume. As for the external storage, it will most likely be a Dell MD1120 attached via SAS using high-speed drives.
 
Exchange 07 can be virtualized pretty easily.

SQL I have had issues with running on virtual servers, though that may have just been the VM software. You will need a decent box depending on your SQL DB size.
 
Our email is currently hosted by a third party. We do not have a SQL server. As part of the host machine setup, it will most likely be RAID 10 using four drives for the system volume. As for the external storage, it will most likely be a Dell MD1120 attached via SAS using high-speed drives.

So you don't have any of the hardware yet? If so, consider a simple pair of drives in RAID 1 rather than four in RAID 10; for your system disk it doesn't make much difference.

As to the memory, since you aren't currently running SQL or Exchange, you aren't really going to know how much to get. I'd say that 8GB is a nice start. If you're getting a machine with 8 DIMM slots, just make sure you get it as four 2GB DIMMs so you'll have room to grow if it turns out your users are generating more load than you expected.
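To put a rough number on that 8GB, here's a back-of-envelope sketch using the commonly quoted Exchange 2007 guideline of a 2GB base plus a few MB per mailbox. The per-mailbox figure is an assumption from general sizing guidance, not something measured in this thread:

```python
# Rough Exchange 2007 mailbox-server memory estimate.
# Assumed figures: 2 GB base + ~3.5 MB per average-profile mailbox.
# Treat this as a starting point for sizing, not gospel.
def exchange_memory_gb(mailboxes, mb_per_mailbox=3.5, base_gb=2.0):
    """Return an estimated Exchange memory footprint in GB."""
    return base_gb + mailboxes * mb_per_mailbox / 1024

print(round(exchange_memory_gb(120), 1))  # ~2.4 GB for 120 mailboxes
```

Even doubling the per-mailbox figure leaves plenty of the 8GB for the SQL VM and the host, which is why starting at 8GB with empty DIMM slots is a reasonable plan.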

Exchange 07 can be virtualized pretty easily.

SQL I have had issues with running on virtual servers, though that may have just been the VM software. You will need a decent box depending on your SQL DB size.

What were you using? I'm about a week away from moving a SQL Server 2005 install to an ESXi 3.5U2 server.
 
Exchange and SQL

It depends on the size of the datastores/databases and how transaction-heavy their usage is.

I usually never recommend virtualizing Exchange, and SQL is a definite big no on the virtualization end.
 
So you don't have any of the hardware yet? If so, consider a simple pair of drives in RAID 1 rather than four in RAID 10; for your system disk it doesn't make much difference.

As to the memory, since you aren't currently running SQL or Exchange, you aren't really going to know how much to get. I'd say that 8GB is a nice start. If you're getting a machine with 8 DIMM slots, just make sure you get it as four 2GB DIMMs so you'll have room to grow if it turns out your users are generating more load than you expected.



What were you using? I'm about a week away from moving a SQL Server 2005 install to an ESXi 3.5U2 server.

It was something an old IT vendor set up; he used the free VMware product. ESXi should be fine.
 
Exchange and SQL

It depends on the size of the datastores/databases and how transaction-heavy their usage is.

I usually never recommend virtualizing Exchange, and SQL is a definite big no on the virtualization end.

Why is this? Exchange 2007 scales in a VM exceedingly well.

http://www.dell.com/downloads/global/power/ps4q07-20080147-Muirhead.pdf

SQL is dependent on load and the nature of DB activity, but a colleague of mine works with a BI/Analysis company and the data cubes they build are on the order of 100+GB. Their virtualized SQL servers handle it far better than they did natively on previous-gen hardware, and within 10% of bare-metal performance on the same ESX hosts.
 
100GB?

Only :)

I deal in the 10TB area for data warehousing/data mining, hence my apprehension about recommending virtualization for any of those applications. The last Exchange cluster I mucked with had no less than 5TB for its datastores.
 
There are very few cases where you shouldn't be able to virtualize either. I'd have to question the 8GB of RAM because it's so cheap. However, your actual memory needs depend on a lot of factors.

SQL and Exchange can be virtualized; you just need to take the same care as when building physical boxes. Separate RAID volumes on separate RAID arrays (or storage groups on a SAN).

You can do some proactive capacity planning for Exchange, see:
http://msexchangeteam.com/archive/2007/01/15/432207.aspx

If you're running 5TB of Exchange databases you must have a lot of hardware to run E2003. DB store size for Exchange isn't a determining factor for virtualization. It's all about disk I/O, and that needs to be addressed the same as on physical servers.
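The "it's all disk I/O" point can be sanity-checked with simple spindle math. The figures below (~0.5 IOPS per average mailbox, ~150 IOPS per 15K SAS spindle, RAID 10 write penalty of 2) are generic sizing-rule assumptions, not numbers from this thread:

```python
# Rough spindle count for an Exchange mailbox volume.
# Assumptions: ~0.5 IOPS per average mailbox, ~150 IOPS per 15K spindle,
# 50/50 read/write mix, and a RAID 10 write penalty of 2 (each write
# hits two mirrored disks).
import math

def spindles_needed(mailboxes, iops_per_mailbox=0.5,
                    read_ratio=0.5, spindle_iops=150, write_penalty=2):
    front_end = mailboxes * iops_per_mailbox
    # Back-end load: reads pass through, writes are multiplied by the penalty.
    back_end = front_end * read_ratio + front_end * (1 - read_ratio) * write_penalty
    return math.ceil(back_end / spindle_iops)

print(spindles_needed(120))   # 120 mailboxes -> ~60 front-end IOPS -> 1 spindle
```

Which is the point: at 120 mailboxes the I/O load is tiny, so whether the host is physical or virtual barely matters, exactly as with physical servers at that scale.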
 
With the amount of email we generate I don't think we'll have a need for a 5TB Exchange store. Using best practices, the storage group handling the mailboxes is probably going to be around 600GB maxed out. My SQL DB is about 24GB and will just be used to generate reports. (It's a copy of a DB that is used in our production environment.) I'm not a DBA myself, so I don't know what impact data analysis services would have on the system.
 
E2007 best practice is 1 DB per SG, and each DB shouldn't exceed 100GB.
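Taking the 100GB-per-database guideline at face value, the ~600GB estimate works out to a handful of storage groups; a quick sketch:

```python
# How many storage groups a mailbox estimate implies at 1 DB per SG,
# given the ~100 GB per-database ceiling mentioned above.
import math

def storage_groups(total_gb, max_db_gb=100):
    """Minimum SG/DB count to keep each database under max_db_gb."""
    return math.ceil(total_gb / max_db_gb)

print(storage_groups(600))   # 600 GB -> 6 storage groups / 6 databases
```

Splitting the store this way also keeps individual database restore times manageable, which is the usual rationale behind the 100GB ceiling.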
 
There are very few cases where you shouldn't be able to virtualize either. I'd have to question the 8GB of RAM because it's so cheap. However, your actual memory needs depend on a lot of factors.

SQL and Exchange can be virtualized; you just need to take the same care as when building physical boxes. Separate RAID volumes on separate RAID arrays (or storage groups on a SAN).

You can do some proactive capacity planning for Exchange, see:
http://msexchangeteam.com/archive/2007/01/15/432207.aspx

If you're running 5TB of Exchange databases you must have a lot of hardware to run E2003. DB store size for Exchange isn't a determining factor for virtualization. It's all about disk I/O, and that needs to be addressed the same as on physical servers.


I think having a SAN is definitely a plus when you are doing the whole virtualization thing.
 
Yes, if you're going to have a larger ESX or Hyper-V farm. However, smaller SMB shops that want to virtualize and take advantage of clusters can use iSCSI or NFS as a cheaper storage solution. You will not get the same throughput as an FC SAN, but it costs a lot less.
 
We've done the iSCSI SAN at work - namely because we could put it together piecemeal, staying under our capital budget cap each year. Once you go through the trouble of making it fully redundant (storage arrays, storage processors, dedicated switches, multiple HBAs), you really get up there in price - close to a decent FC starter SAN. If you're spending in one lump sum, I really like the all-in-one of a Compellent SAN vs. some software-based iSCSI SAN.
 
I think having a SAN is definitely a plus when you are doing the whole virtualization thing.

It's not a plus, it's pretty much a requirement. If you want to use the "good" features of VMware you have to do shared storage. If I'm virtualizing Exchange I'm going to use a SAN so that I can failover the VM to another box.
 
We've done the iSCSI SAN at work - namely because we could put it together piecemeal, staying under our capital budget cap each year. Once you go through the trouble of making it fully redundant (storage arrays, storage processors, dedicated switches, multiple HBAs), you really get up there in price - close to a decent FC starter SAN. If you're spending in one lump sum, I really like the all-in-one of a Compellent SAN, vs some software based iSCSI SAN.

That's why you don't do it in bits and pieces. Companies need to figure out how to build infrastructure. Stop being cheap. Yes, good gear is expensive. They lease other expensive gear; they need to learn to lease things like this too. There is no reason not to do it.

You don't need dedicated iSCSI switches. You just need dedicated VLANs. That's easy. Multiple HBAs? Good NICs. The problem with a "starter FC SAN" is that as you scale the expense scales. More FC switches, pricey. More FC HBAs on each new server, pricey. With iSCSI the price stays pretty flat as you scale. You don't need someone that knows how to manage a separate FC fabric. The downside to iSCSI comes when you need more than 1Gb of throughput from a single box to a single iSCSI target, which is rare. Even big Exchange boxes don't do that. Exchange and SQL are high-transaction but low throughput apps. They do 4K and 8K transactions and depend on low latency. That's easy. iSCSI, NFS, and FC all have very comparable latency characteristics.
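The "rarely need more than 1Gb" point is easy to quantify: at 4K-8K transaction sizes, a single GbE link saturates at a very high IOPS figure. A sketch (the 90% wire-efficiency number is my assumption for protocol overhead):

```python
# Why a single 1 GbE iSCSI link is rarely the ceiling for Exchange/SQL:
# small-block workloads run out of latency or spindles long before
# they run out of gigabit bandwidth.
GBE_BYTES_PER_SEC = 1_000_000_000 / 8   # 1 Gb/s on the wire
EFFICIENCY = 0.9                         # assumed protocol/framing overhead

def max_iops(io_size_bytes, link_bps=GBE_BYTES_PER_SEC, eff=EFFICIENCY):
    """IOPS at which a link of link_bps saturates for a given I/O size."""
    return int(link_bps * eff / io_size_bytes)

print(max_iops(8 * 1024))   # 8K transactions: ~13,700 IOPS per link
print(max_iops(4 * 1024))   # 4K transactions: ~27,400 IOPS per link
```

A 120-mailbox Exchange server pushing tens of IOPS is orders of magnitude below that ceiling, which is why latency, not bandwidth, is the figure to watch.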
 
In our case, bank loan agreements for an ESOP purchase hard limited capital expenditures. That's a valid reason ;) Bits and pieces was the only way to make it happen.
 