Hi All,
Relevant Hardware
Core 2 Quad @ 2.5GHz
Supermicro Q35 Motherboard
8GB RAM
Adaptec 6405
Intel SAS Expander
Norco 4020 case
Two 16TB RAID 0 Arrays of 4 x 4TB Drives each.
Software
Windows Server 2008 R2
Microsoft Data Protection Manager (DPM)
Hyper-V (prereq of DPM for restoring VMs)
MS SQL Server (lite edition prereq for DPM)
Problem
In a nutshell: if I swap the boot drive for a clean install of Windows Server 2008 R2 with all the same drivers but none of the additional software (DPM, etc.), I see transfer rates of 450 MB/s between the two RAID 0 arrays.
When I boot the production DPM server that will soon be using these arrays, on the exact same hardware, I see only 235 MB/s between the same two arrays.
I have made the faster (clean) install as similar to the production server as I possibly can, short of also installing DPM on it: same drivers, same arrays, everything. On the production server I have tried disabling the DPM, SQL, and Hyper-V services, yet the performance delta remains.
Can anyone think of a registry setting, policy, or other buried setting that could account for a Windows Server with this software stack installed behaving so differently from a fresh Windows Server install on the same hardware? Write caching is enabled on the Adaptec controller at the hardware level and disabled in Windows on both systems.
Sorry, but this mystery is getting the best of me. And it's driving me a bit nutty! =P