Help me think out a NAS build--do PCIe lanes matter?

I took advantage of the newegg sale like ten days ago and grabbed four 20TB drives to replace the four 8TB drives in my Emby box, as they are getting full.

Present system:
Uses are for hosting Emby and hosting a minecraft server running a bunch of mods
Gigabyte X99-UD5
E5-2696 V3
16 or 32 GB of Ram (can't remember)
1080ti for transcodes
500GB boot drive
4 x 8TB HDDs in a Raid 5 using MS storage spaces

What I want to do is use my scraps and build a TrueNAS system to house all the storage, then use a 10Gb ethernet connection into the network to provide storage to the other machines. As far as Emby goes, I'll either host it on the TrueNAS box if I can figure it out or just use a windows box and a 10Gb link. For minecraft, I'll probably stick with windows; MineOS with the Forge mod loader looks more complicated than I care to deal with.

So the box I want to build in TrueNAS:

4 x 20TB HDDs in Raid 5 with a 500GB SSD cache drive
5 x 8TB in Raid 5 with a 500GB SSD cache drive
Either sata or M.2 NVME boot drive

Here is where I need the help...

In this configuration, I count 11 sata devices plus the boot device. When do PCIe lanes start to matter or do they?

If I plug everything into the onboard sata ports, it depends on the chipset, right? How is the chipset going to keep up with that many drives?

With that all in mind, when I put this thing together, it is my intention to use scraps I already have lying around...


X99 with the E5-2696 V3. I think it has onboard capacity for 10-12 sata devices (depends on NVME). It's also sporting 40 PCIe lanes, with three x8/x16 gen 3 PCIe slots free (four counting the one for the GPU) for sata expansion cards and a 10Gb network card.

GA-990FXA-UD3 Ultra with an FX-8350. This has onboard capacity for six sata devices plus one M.2 NVME. FX handled all its PCIe lanes through the chipset, so I'm not sure what the impact will be on drive performance. As for slots, all are gen 2 PCIe, and after the GPU I'd have one x16 and one x4.

If this PCIe lane and gen 2 stuff isn't really going to cause an issue, I'd like to use the FX 8350 for the NAS.

Thanks for making it this far....
 
Roughly, one PCIe 2.0 lane is a maxed out SATA port (port, not a drive), and you double for each successive PCIe generation. A 10gbe link is 1 GB/s ish which is about 2 SATA ports. So, handwaving frantically, I'm going to guess that you'll be network limited before you're PCIe lane limited.
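
If you want to sanity check that rule of thumb yourself, here's a quick Python back-of-the-envelope. The per-lane figures are approximate usable throughput (not line rate), so treat the output as ballpark only:

```python
# Back-of-the-envelope: usable throughput per PCIe lane by generation,
# versus one maxed sata3 port and a 10GbE link. Numbers are approximate.
PCIE_LANE_MB_S = {1: 250, 2: 500, 3: 985}   # ~usable MB/s per lane after encoding
SATA3_PORT_MB_S = 600                        # 6Gbps line rate, ~550-600 MB/s usable
TEN_GBE_MB_S = 1250                          # 10Gbps ~ 1.25 GB/s before overhead

for gen, lane in PCIE_LANE_MB_S.items():
    print(f"PCIe gen {gen}: {lane} MB/s per lane, "
          f"~{lane / SATA3_PORT_MB_S:.1f} sata ports per lane, "
          f"10GbE needs ~{TEN_GBE_MB_S / lane:.1f} lanes")
```

Which is where the "one gen 2 lane is about one maxed sata port, and 10GbE eats roughly two ports' worth" handwaving comes from.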
 
An LSI SAS HBA (like the 9200/9300 series) will do just fine in a PCIe x8 2.0 slot if you aren't hammering everything all at once. You can get them with 8/12/16 drive capacities using standard forward breakout cables to SATA.
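
For what it's worth, an x8 gen 2 slot has plenty of headroom even fully loaded with spinners. A rough sketch, assuming ~150 MB/s sequential per HDD (ballpark, not a measurement):

```python
# Quick headroom check on an x8 gen 2 slot feeding an HBA full of drives.
slot_mb_s = 8 * 500                 # PCIe 2.0 x8 ~ 4000 MB/s usable
for drives in (8, 12, 16):
    spinners = drives * 150         # assumed ~150 MB/s sequential per HDD
    print(f"{drives} spinners: ~{spinners} MB/s vs ~{slot_mb_s} MB/s slot limit")
```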
 
I think a gen 1 PCI-e x4 card can support six sata3 6Gbps ports (especially for non-SSDs):
https://www.newegg.ca/riitop-pcets6g-6pbk-pci-express-to-sata-card/p/17Z-0061-00085?Description=sata expansion card&cm_re=sata_expansion card-_-9SIAFMXBN59262-_-Product

It will not be able to max them all at the same time, but for something as slow as Emby it should not matter (a really big movie file will be around 100Mbps).

10-11 sata drives is really not a lot for your xeon system. I have 12 on my regular entry-level motherboard with an i5-3570K. Your onboard sata count is already a lot, and 40 lanes is still a lot today.
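
To put rough numbers on the x4 gen 1 card and the Emby load, here's a quick Python sketch. The per-drive and per-stream figures are just ballpark assumptions:

```python
# Rough oversubscription check for a six-port sata card in a PCIe gen 1 x4 slot,
# plus how many ~100 Mbps Emby streams the card's uplink could feed.
card_uplink_mb_s = 4 * 250        # gen 1 x4 ~ 1000 MB/s usable
ports = 6
sata3_port_mb_s = 600             # per-port ceiling; only SSDs get near this
spinner_mb_s = 150                # assumed fast 3.5" HDD, sequential

print("all ports maxed:", ports * sata3_port_mb_s, "MB/s vs uplink", card_uplink_mb_s)
print("six spinners   :", ports * spinner_mb_s, "MB/s vs uplink", card_uplink_mb_s)

stream_mb_s = 100 / 8             # a heavy 100 Mbps movie is ~12.5 MB/s
print("100 Mbps streams the uplink covers:", int(card_uplink_mb_s / stream_mb_s))
```

So yes, six SSDs flat out would oversubscribe the card, but six spinners fit, and for streaming it's nowhere near the limit.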
 
I'll try and get into the math a bit more of my concern...

First off, the X99 xeon, I know that isn't a problem. Gen 3 combined with a bunch of lanes and the connection from the CPU to the chipset is still plenty good today.

But for the old FX-8350 with the gen 2...that is where I'm unsure.

Sata cranks out at 6Gbps. So if I had one of those expansion cards that is a PCIe x4 and gives me six ports, that works out to a total of 36Gbps. At 4Gbps per gen 2 lane, that would mean I'd need nine lanes if I had SSDs in a raid just humming along at max transfer. But then the 10Gbps network connection would kick in....

If I have five of the 8tb drives cranking at say 100MB/s each and the SSD cache at 500MB/s, that works out to 1GB/s, or 8Gbps. Then when you factor in the sata overhead you are at 10Gbps, which works out to 2.5 gen 2 lanes...
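
Same arithmetic redone as a quick Python calc so I can see it in one place (same ballpark numbers as above):

```python
# Redoing the gen 2 lane budget with the same assumptions as the post above.
GEN2_LANE_GBPS = 4                      # ~4 Gbps usable per PCIe 2.0 lane (500 MB/s)

# Worst case: all six sata3 ports saturated (SSD-only scenario)
worst_case_gbps = 6 * 6
print("lanes to never bottleneck 6 ports:", worst_case_gbps / GEN2_LANE_GBPS)  # 9.0

# Realistic case: five 8TB spinners at ~100 MB/s each plus a 500 MB/s SSD cache
realistic_mb_s = 5 * 100 + 500
realistic_gbps = realistic_mb_s * 8 / 1000 * 1.25   # x1.25 for sata 8b/10b overhead
print("realistic demand:", realistic_gbps, "Gbps =",
      realistic_gbps / GEN2_LANE_GBPS, "gen 2 lanes")  # 10.0 Gbps = 2.5 lanes
```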

Never mind, I think it will work with the old FX.
 
Well, I got it all up and running. I probably did everything wrong. It seems like I have cables everywhere...

I did some musical motherboards and ended up with a z97 board and a 4790k, plus 32gb of ram.

I got a sata expansion card and plugged it into slot 1.

Drives are 4x 20tb with a 500gb ssd cache
Then 5x8tb with a 500gb ssd cache.

Everything crammed into an old antec sx 1030 case.

[photo attached: 20230724_095216.jpg]
 

That case is an oldie but goody for sure.
 
It really is...an old server case relic.

I can fit 12 hdds in there if I want to.

The only complaints I have are that it was built for 80mm fans, so I wish it had a bit more airflow. That and it uses clips to mount the 5.25 stuff in...clips that I didn't have, so I had to get creative.
 

As I recall, it's quite heavy too.
 