I need an x16 + x8 configuration at the same time. Are there any consumer boards that can do that?


My recollection from the Threadripper 7000/TRX50 launch is that non-PRO CPUs can go into PRO motherboards (but get fewer PCIe lanes than PRO CPUs in the same boards), while PRO CPUs cannot go into non-PRO motherboards.

I'm not 100% sure on this as it has left my active brain L3 cache at this point, but I vaguely remember that being a thing.
 
My recollection from the Threadripper 7000/TRX50 launch is that non-PRO CPUs can go into PRO motherboards (but get fewer PCIe lanes than PRO CPUs in the same boards), while PRO CPUs cannot go into non-PRO motherboards.

I'm not 100% sure on this as it has left my active brain L3 cache at this point, but I vaguely remember that being a thing.

I think it's the other way around: you can use both non-PRO and PRO CPUs in TRX50, but only PRO in WRX90.

 
Bandwidth, yes. Amount - no. My 3960X will gladly do 256GB of RAM (at JEDEC speeds, mind you). Show me how to get 256GB on a 7950.
Wouldn't trying to do this on a 7950 instead of a Threadripper/Xeon fall under not wanting to pay that much for something you don't need that much? (Which is perfectly fine. It was nice to have an entry point for people who couldn't put it on the company's expense account, or who were right on the edge of a regular computer being enough and just needed a little bit more, like 256GB of RAM.)

Gotta have slots to build the NAS
Outside of very large needs (and there are fewer and fewer of those), the cheapest hard drive per TB on PCPartPicker right now is a 16TB, and 20TB drives aren't much more expensive per TB. Even the simplest board with 4 SATA ports and 1 NVMe slot could be enough for a lot of people buying new, with 4x 20TB or 24TB drives (which gives you a bit over 60TB of space in a 4-wide RAIDZ1). My DIY NAS has SATA-to-PCIe cards, a large case, etc. only because it is full of old 4TB and 6TB drives; I could swap it for a simple 4-bay Synology if those weren't so highly priced.
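Rough math on that, in case anyone wants to check it (a minimal sketch: plain RAIDZ parity arithmetic, ignoring ZFS metadata, slop space, and padding, so real usable numbers come out a bit lower):

```python
# Rough RAIDZ usable-capacity math. Ignores ZFS metadata, slop space,
# and allocation padding, so real-world numbers land a bit lower.

def raidz_usable_tb(drives: int, drive_tb: float, parity: int = 1) -> float:
    """Usable capacity in decimal TB for a single RAIDZ vdev."""
    return (drives - parity) * drive_tb

def tb_to_tib(tb: float) -> float:
    """Decimal TB (how drives are sold) to TiB (how tools report it)."""
    return tb * 1e12 / 2**40

for drive_tb in (20, 24):
    usable = raidz_usable_tb(4, drive_tb, parity=1)
    print(f"4x {drive_tb}TB RAIDZ1: ~{usable:.0f} TB usable "
          f"(~{tb_to_tib(usable):.1f} TiB as the OS will report it)")
```

4x 20TB comes out to 60 TB (about 55 TiB) and 4x 24TB to 72 TB (about 65 TiB), so "a bit over 60" is in the right ballpark either way.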
 
Bandwidth, yes. Amount - no. My 3960X will gladly do 256GB of RAM (at JEDEC speeds, mind you). Show me how to get 256GB on a 7950.

Ah yes. Luckily I don't need that much RAM on my desktop.

I've had 64GB in it since 2014, when I decommissioned my last consumer-hardware-based server (an FX-8350 with 32GB of RAM) and decided to shove the extra RAM into my X79 Core i7-3930K just because I had it, but I've never really needed it.

I bought the same amount of RAM when I upgraded to the Threadripper 3960X because I am allergic to downgrades, and figured - who knows, I may need it some day.

I do run a few VMs on the desktop, but they don't need more than 8GB of RAM each, which means I still have plenty.

For a while there I would run a RAMdisk in Windows and shove entire games into RAM (before they got too big), which was fun and funny, and got me loaded into multiplayer games before anyone else :p

If I needed that much RAM, I'd be in a different shopping situation than I currently am.

I have 512GB in the server and 256GB in my testbench machine (built with an old server board), but the desktop is only 64GB.
 
Ah yes. Luckily I don't need that much RAM on my desktop.

I've had 64GB in it since 2014, when I decommissioned my last consumer-hardware-based server (an FX-8350 with 32GB of RAM) and decided to shove the extra RAM into my X79 Core i7-3930K just because I had it, but I've never really needed it.

I bought the same amount of RAM when I upgraded to the Threadripper 3960X because I am allergic to downgrades, and figured - who knows, I may need it some day.

I do run a few VMs on the desktop, but they don't need more than 8GB of RAM each, which means I still have plenty.

For a while there I would run a RAMdisk in Windows and shove entire games into RAM (before they got too big), which was fun and funny, and got me loaded into multiplayer games before anyone else :p

If I needed that much RAM, I'd be in a different shopping situation than I currently am.

I have 512GB in the server and 256GB in my testbench machine (built with an old server board), but the desktop is only 64GB.
I'm running an average of 75% used on 128G right now. :p VMs burn through it FAST if you're doing enterprise ones.

The servers have 2T or 3T depending on the box. I aim for half of maximum on any workstation or consumer system.
 
Wouldn't trying to do this on a 7950 instead of a Threadripper/Xeon fall under not wanting to pay that much for something you don't need that much? (Which is perfectly fine. It was nice to have an entry point for people who couldn't put it on the company's expense account, or who were right on the edge of a regular computer being enough and just needed a little bit more, like 256GB of RAM.)


Outside of very large needs (and there are fewer and fewer of those), the cheapest hard drive per TB on PCPartPicker right now is a 16TB, and 20TB drives aren't much more expensive per TB. Even the simplest board with 4 SATA ports and 1 NVMe slot could be enough for a lot of people buying new, with 4x 20TB or 24TB drives (which gives you a bit over 60TB of space in a 4-wide RAIDZ1). My DIY NAS has SATA-to-PCIe cards, a large case, etc. only because it is full of old 4TB and 6TB drives; I could swap it for a simple 4-bay Synology if those weren't so highly priced.
60T? Uh... yeah, I'm thinking a different scale. My current archive build will have 10 of those 20T drives (starting with 10x 8T, and I'll swap them out over time). :D I do weird shit.
Asus has added support for 64GB UDIMMs on AM5 and LGA 1700; you just need to find them for sale: https://www.anandtech.com/show/2130...-256gb-of-memory-by-intel-600700-motherboards
So you'd need 4 of them, and we know running four DIMMs is an issue on DDR5, period - so... still not there :-/ But it's getting there.
 

I think it's the other way around: you can use both non-PRO and PRO CPUs in TRX50, but only PRO in WRX90.


Ah, thank you for setting me straight. I researched the shit out of the platform when I was planning on buying it at launch, but after seeing the disappointing consumer performance I mostly purged it from my memory.
Dammit, same. I also purged it. Took one last glance yesterday and sighed - I'm waiting on Ryzen 8k, and I'll move away from HEDT on the primary.
 
I'm going to be honest: I build and mess with server stuff all the time, but I don't even know what TB4 is.

Thunderbolt?
Yup. Lots of Thunderbolt-to-fast-NIC adapters for Macs (especially since the new Mac Pro is kind of a joke), but they're weird on most OSes.
I have honestly yet to use anything Thunderbolt. I tend to think of that as an Apple standard... or no, was that FireWire? I can't remember. I've never used any of them, favoring internal discrete components.

I tend to minimize my reliance on things like USB, Thunderbolt, etc., other than a mouse, keyboard, DAC, microphone, and, on the extremely rare occasion, a camera. (I never video chat; even when I have a camera, it stays off.)
I've tried - and USB just doesn't cut it for large-scale things. Thunderbolt is better, but... still buggy for some things.
To be fair, if you care about your data (and if you don't, why even bother hoarding it?), a NAS should probably be built on server hardware. A Xeon/EPYC with double-fault-tolerant registered ECC RAM, redundant drives, and backups to a second storage medium (even better if that second medium is offline, but that gets tricky) would be ideal; otherwise bit rot or catastrophic failure is just a matter of time.
Yup. The important stuff goes on boxes with large sets of ECC and enterprise kit.
I never understood IT departments' obsession with Synology and other solution-in-a-box products. I'd always prefer a good open-source solution like ZFS. Dealing with proprietary hardware is inflexible and limiting, and if/when something goes wrong, knowing you can just pop the drives into any machine that can connect to them to try to rescue data or fix the pool, rather than hunting for a second expensive box that speaks the same proprietary protocol, is huge.
The appeal of Synology etc. is that it's easy for medium-importance stuff, and it has some neat features (SSD cache) that ZFS doesn't have for the workloads that really care about that (SLOG isn't the same thing). It's also small and compact, and has support if you need it. Great for idiots and remote offices that way.
I definitely understand that man-hours are a bigger driver than system cost in IT, but it still seems short-sighted. Putting together a good ZFS NAS may cost a few extra hours up front, but it will save a ton of man-hours if/when the shit hits the fan. It may even save the company if that data is mission critical. It doesn't even have to be a custom server: a Dell/HPE server with an HBA and TrueNAS Core could make a great enterprise NAS. It can even talk to Active Directory! You could even run it as a VM with HBA passthrough if your needs aren't big enough to require a dedicated physical server.
That goes both ways on support: liability can be shuffled off to another company if there's a failure, up to a point, which you can't do with a home-built box. But that's where enterprise support contracts come in.
But I have come to understand that this is pretty much corporate IT in a nutshell: short-sightedness and minimizing cost and effort until the shit hits the fan, and then shrugging and saying "oh well".

It's really stupid.
Shifting of liability. CYA all the way down.
 
My current archive build will have 10 of those 20T
Which could still be a very standard board with 6 SATA ports plus a 4-port SATA PCIe card, or 4 SATA plus a $50 8-port SATA PCIe card. Not so long ago, 200TB meant you needed slots and something complicated; now it fits in just 10 drives in a perfectly regular full-tower case with the cheapest of motherboards (and with 40-50TB drives coming out, a little 4-bay Synology will soon be able to do it). Your starting point of 10x 8T, swapping them out over time, is significantly less than the 4x 24TB drives mentioned above...
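Just to put raw numbers on that comparison (capacity only, before any parity or filesystem overhead; the layout is up to the builder):

```python
# Raw (pre-parity) capacity of the drive sets mentioned above, in decimal TB.
configs = {
    "10x 20TB (the end goal)":    10 * 20,
    "10x 8TB (the starting set)": 10 * 8,
    "4x 24TB (cheap 4-bay box)":   4 * 24,
}
for name, tb in configs.items():
    print(f"{name}: {tb} TB raw")
```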
 
Which could still be a very standard board with 6 SATA ports plus a 4-port SATA PCIe card, or 4 SATA plus a $50 8-port SATA PCIe card. Not so long ago, 200TB meant you needed slots and something complicated; now it fits in just 10 drives in a perfectly regular full-tower case with the cheapest of motherboards (and with 40-50TB drives coming out, a little 4-bay Synology will soon be able to do it). Your starting point of 10x 8T, swapping them out over time, is significantly less than the 4x 24TB drives mentioned above...
Sure, but I already have all the 8T drives :p And rebuilding off an 8T drive is WAY faster than off a 24T.
 
Sure, but I already have all the 8T drives :p And rebuilding off an 8T drive is WAY faster than off a 24T.
Just because I am curious, how long does it take to do a rebuild for a replacement drive on those?

Can they achieve native single drive write speeds?

My ZFS pool has 12x 16TB drives in main storage, configured as two RAIDZ2 vdevs (so essentially the ZFS equivalent of RAID 60).

Last time I resilvered a single failing drive, it took ~13h:15m:57s to resilver 6.74T to the new drive.

I guess that is an average of ~148MB/s, which I found a little disappointing considering the sequential write speed of the 16TB Seagate X18s is supposed to be 258MB/s, but I guess it's configured not to go all out so as not to hamper system performance during the resilver.

I don't really know much about that side of it, and how that is configured.
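For what it's worth, here's the back-of-the-envelope version of that math (a sketch only; it assumes a constant average rate, which real resilvers don't maintain, and whether the reported 6.74T is TB or TiB shifts the result a bit):

```python
# Resilver math from the numbers quoted above. Assumes a constant average
# rate; real resilvers vary with pool activity and fragmentation.

def avg_rate_mb_s(data_tb: float, h: float, m: float, s: float) -> float:
    """Average rate in MB/s for data_tb (decimal TB) moved in h:m:s."""
    return data_tb * 1e12 / (h * 3600 + m * 60 + s) / 1e6

def est_hours(data_tb: float, rate_mb_s: float) -> float:
    """Hours to move data_tb (decimal TB) at rate_mb_s."""
    return data_tb * 1e12 / (rate_mb_s * 1e6) / 3600

rate = avg_rate_mb_s(6.74, 13, 15, 57)
print(f"average resilver rate: ~{rate:.0f} MB/s")
# Note: zpool usually reports TiB, so 6.74 TiB over the same time is ~155 MB/s,
# which brackets the ~148MB/s figure above.

for size_tb in (8, 16, 24):  # assuming drives close to full
    print(f"~{size_tb}T of data at that rate: ~{est_hours(size_tb, rate):.0f} h")
```

Which also puts a number on why rebuilding an 8T drive is so much quicker than a 24T one: roughly 16 hours versus closer to two days at the same average rate.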
 
Just because I am curious, how long does it take to do a rebuild for a replacement drive on those?

Can they achieve native single drive write speeds?

My ZFS pool has 12x 16TB drives in main storage, configured as two RAIDZ2 vdevs (so essentially the ZFS equivalent of RAID 60).

Last time I resilvered a single failing drive, it took ~13h:15m:57s to resilver 6.74T to the new drive.

I guess that is an average of ~148MB/s, which I found a little disappointing considering the sequential write speed of the 16TB Seagate X18s is supposed to be 258MB/s, but I guess it's configured not to go all out so as not to hamper system performance during the resilver.

I don't really know much about that side of it, and how that is configured.

About 6 hours, I think. I had no front-end IO going though.
 
I think resilver follows the usual ZFS pattern of deprioritizing non-interactive work. If you're doing any real reads and writes, the resilver will pause, or at least limit concurrency, while those are in progress (more or less). Scrub and resilver also got a big improvement in OpenZFS 0.8.0 (May 2019); if that's newer than your last resilver, you may be in for a happy change.
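If you want to see how that's configured on your box, the resilver/scrub tunables are exposed as ZFS module parameters on Linux. A minimal sketch for poking at them, with the caveat that the parameter names here are from memory and vary between OpenZFS versions:

```python
#!/usr/bin/env python3
# Peek at OpenZFS resilver/scrub tunables on Linux. Parameter names are
# from memory and differ between OpenZFS versions, so anything missing
# on your system is simply skipped.
from pathlib import Path

PARAM_DIR = Path("/sys/module/zfs/parameters")

# Tunables that influence how aggressively a resilver competes with
# normal pool I/O (check the module parameter docs for your version).
CANDIDATES = [
    "zfs_resilver_min_time_ms",   # min ms spent resilvering per txg
    "zfs_scrub_min_time_ms",      # same idea for scrubs
    "zfs_vdev_scrub_max_active",  # max concurrent scan I/Os per vdev
    "zfs_scan_legacy",            # 0 = sequential scan (OpenZFS >= 0.8)
]

def main() -> None:
    if not PARAM_DIR.is_dir():
        print("ZFS module parameters not found (not Linux, or ZFS not loaded).")
        return
    for name in CANDIDATES:
        p = PARAM_DIR / name
        if p.exists():
            print(f"{name} = {p.read_text().strip()}")
        else:
            print(f"{name}: not present in this OpenZFS version")

if __name__ == "__main__":
    main()
```

Raising zfs_resilver_min_time_ms is the usual knob people turn to trade interactive performance for a faster resilver, but read the OpenZFS module parameter docs for your version before touching anything.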
 