Need 16x-8x configuration at same time. Are there any consumer boards?

Over on LinkedIn, there is a market forecast that predicts

PCIe Switches Market Size is Growing at a CAGR of 11.9% with Worth ~ US$ 1764.6 million by 2032 [Exclusive 98+ Pages Report] https://www.linkedin.com/pulse/pcie-switches-market-size-growing-cagr-119-vysfcv


If you believe this forecast, then there is no way that Broadcom can defend its market share. Customers simply won't stand for it, and they will go out of their way to second-source or multi-source these parts. Look at the list of competitors.

But, but, but, the report only covers up to Gen3 :eek: . Hard to believe.
 
A lot of boards just come down to how certain I/O decisions are made. Do I think I'm really going to make use of USB4? So far, I haven't. Though being able to run a display off the iGPU directly from one of the Type-C ports is pretty nice.

It doesn't necessarily have to be the iGPU running over the Intel thunderbolt controller that they call USB4. I run my 7900XTX through the thunderbolt controller to my thunderbolt monitor using the displayport input on the motherboard, one cord and my keyboard and mouse plug into the monitor. Works great for KVM functionality with another computer hooked up to the displayport input on the monitor.
 
It doesn't necessarily have to be the iGPU running over the Intel thunderbolt controller that they call USB4. I run my 7900XTX through the thunderbolt controller to my thunderbolt monitor using the displayport input on the motherboard, one cord and my keyboard and mouse plug into the monitor. Works great for KVM functionality with another computer hooked up to the displayport input on the monitor.
No, but it depends on how you're accomplishing that. The X670E Taichi doesn't have DP input to passthrough video over USB4/TB so you'd have to rely on just telling Windows to use the XTX for rendering but outputting video via the iGPU.

Some boards do have DP inputs to purely passthrough the dGPU via that output. Asus's Creator boards come to mind.

 
Looks like nobody's run any 3DMark benches with a 7945WX yet. There are some 5945WX results; they get a little over half of a 7900's CPU score in Time Spy, with the same GPU and clocked a good bit lower. A 7945WX might get +75% CPU score? Not sure of the exact generational improvement, I know it's a good bit, plus the clock improvement.

Here's a couple I picked to compare, tried to match and avoid huge overclocks.
 
No, but it depends on how you're accomplishing that. The X670E Taichi doesn't have DP input to passthrough video over USB4/TB so you'd have to rely on just telling Windows to use the XTX for rendering but outputting video via the iGPU.

Some boards do have DP inputs to purely passthrough the dGPU via that output. Asus's Creator boards come to mind.


Correct, that's exactly why I bought the X670-E Creator. But it doesn't just pass the dGPU, it also gives you a full bandwidth thunderbolt connection to any thunderbolt devices you want to connect. One active thunderbolt cable connected and you have access to all the peripherals you want.
 
After reading this thread, I'm afraid that any hope for future HEDT boards is just pissing into the wind. Don't mean to be a downer, but that's how I feel.
 
It doesn't necessarily have to be the iGPU running over the Intel thunderbolt controller that they call USB4. I run my 7900XTX through the thunderbolt controller to my thunderbolt monitor using the displayport input on the motherboard, one cord and my keyboard and mouse plug into the monitor. Works great for KVM functionality with another computer hooked up to the displayport input on the monitor.

What motherboard do you use? Mine has a DisplayPort in but I haven’t messed with using it. I’m guessing the ProArt X670e?

Zarathustra[H] I did find an Atto product that claimed to use TB4 to some type of 40Gb fiber, but it's discontinued and still seemed pretty expensive second hand. And I didn't find any solid confirmation that it actually runs at the full 40Gb. Lots of 25Gb options through TB3/4, though I think that would be slower than just running your card in a limited x4 slot.
 
What motherboard do you use? Mine has a DisplayPort in but I haven’t messed with using it. I’m guessing the ProArt X670e?

That's the one, I always bought HEDT boards in the past and that was about as close as I could get this time around.
 
Kind of why I jumped to AMD was to carry on with CrossFire, thinking they might keep it going since they build GPUs too. B350 couldn't compete with X58 using the same cards and driver because of the x8/x8 setup. Times have changed, though, with cards like the RX 6600 being x8 cards; if AMD made a working driver I would love to see them scale in an x8/x8 setup.
 
That's the one, I always bought HEDT boards in the past and that was about as close as I could get this time around.
So after reading a bit about the ASUS Creator X670e, I wish I had gotten this board instead of the ROG Strix E-A x670e.
 


Well, yeah, but it is a Threadripper Pro. We know we can get many PCIe lanes on workstation boards, but the problem with workstation boards is that they need workstation CPU's, and these days workstation CPU's suck for anything but workstation loads.

What I am looking for is the old concept of HEDT: an all-in-one, no-compromises build. It can be a prosumer workstation-like machine AND a consumer game machine at the same time and excel at both. Essentially a high-end consumer desktop that can both attain top lightly-threaded speeds in consumer workloads and has some light workstation-like features like many cores and large numbers of PCIe lanes. Sadly this concept appears to have been dead for about 3-5 years now.

When I bought my Threadripper 3960x in 2019 (and its predecessor, my Core i7 3930k x79 build in 2011, or any x99 or x299 system) it performed equivalently to similar consumer CPU's in consumer loads, but also had the capability of adding a ton of RAM, had lots of PCIe lanes for expansion and some extra cores. It could do everything.

Now the problem is that I have to choose one or the other: either consumer or workstation. Or I have to build two systems (which I really don't want to do), one for each task.

The whole point of this thread is that we want 16x-8x in a consumer board. AM5/LGA1700 (or LGA1851 for next gen Intel). This is not because of cost, but because current workstation chips are MUCH slower at lightly threaded loads than consumer chips currently are. I'd happily pay extra for a motherboard that supported the type of lane layout that I need.

There are three things working against current workstation designs compared to consumer chips: lower clocks, poor core layouts/NUMA, and the fact that with DDR5, registered and unregistered RAM is no longer pin compatible, meaning you pay a significant RAM latency penalty when you are forced to use Registered/Buffered ECC RAM in these systems.

And the result is miserable. Anything current Threadripper (Pro or not), EPYC, or Xeon just plain sucks for anything but server/workstation loads.

If you try to run a game on the beastly $5000 Threadripper 7980x on a $1500 motherboard you are going to be bested - performance wise - by a $200 Ryzen 5 7600 on a $150 motherboard.

Honestly, the concept of the "no compromises" HEDT system is dead unless we start getting better consumer motherboards with better slot options, which might save it just a little bit.
 
Well, yeah, but it is a Threadripper Pro. We know we can get many PCIe lanes on workstation boards, but the problem with workstation boards is that they need workstation CPU's, and these days workstation CPU's suck for anything but workstation loads.

That sucks royally.

Sadly this concept appears to have been dead for about 3-5 years now.

Sad. Can anyone speculate as to the reasons?
Now the problem is that I have to choose one or the other: either consumer or workstation. Or I have to build two systems (which I really don't want to do), one for each task.

I think a lot of guys don't have workstation job needs, but just want a powerful consumer board. That is, HEDT.

The whole point of this thread is that we want 16x-8x in a consumer board.

Yeah, me among them.
Honestly, the concept of the "no compromises" HEDT system is dead unless we start getting better consumer motherboards with better slot options, which might save it just a little bit.
But we can't just "get" these boards. ASUS, MSI, etc have to start making them and then we will buy them.
 
The whole point of this thread is that we want 16x-8x in a consumer board.
Nobody wants this except you. Probably an exaggeration but very few people desire this.

I don't think you ever responded to this, thoughts?
A PCIe 3.0 x4 slot can deliver 4GB/s bandwidth, theoretically giving you up to 32Gbps for your QSFP+ NIC. Wouldn't that suit your needs just fine? It might not reach that full theoretical speed but I'd expect to be somewhat close. Hopefully soon we'll see PCIe 4.0 x4 NICs come out since this is the same bandwidth as PCIe 3.0 x8 and that will invalidate your need for the mythical x16/x8. Or... simply eat a 2% - 3% performance loss on your GPU to run x8/x8; which is a perfectly acceptable trade-off to get very high speed networking on a consumer platform.
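Sanity-checking that math with a quick sketch (per-lane transfer rates and encoding overheads as published in the PCIe specs; these are theoretical peaks, ignoring protocol overhead):

```python
# Rough usable PCIe bandwidth: per-lane transfer rate times encoding
# efficiency (8b/10b for Gen1/2, 128b/130b for Gen3 and later).
GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}  # GT/s per lane

def pcie_gbps(gen: int, lanes: int) -> float:
    """Usable line rate in Gbit/s after encoding overhead."""
    efficiency = 8 / 10 if gen <= 2 else 128 / 130
    return GT_PER_LANE[gen] * efficiency * lanes

print(f"Gen3 x4: {pcie_gbps(3, 4):.1f} Gbit/s")  # ~31.5 -- just shy of 32G for a QSFP+ port
print(f"Gen4 x4: {pcie_gbps(4, 4):.1f} Gbit/s")  # ~63.0
print(f"Gen3 x8: {pcie_gbps(3, 8):.1f} Gbit/s")  # ~63.0 -- Gen4 x4 really does match Gen3 x8
```

So a Gen3 x4 slot tops out a hair under the 32Gbps a QSFP+ port wants, and a Gen4 x4 NIC would indeed have the same headroom as a Gen3 x8 one.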
 
Sad. Can anyone speculate as to the reasons?

I think it is a combination of reasons.

HEDT is really a combination of four things.

1.) More cores than regular consumer chips
2.) More PCIe lanes than regular consumer chips
3.) More RAM channels than regular consumer chips
4.) ...without the performance penalties in lightly threaded loads that workstation products have.

In the past, HEDT had its own socket. Think 9xx series Bloomfield vs 8xx series Lynnfield, or Sandy Bridge-E vs Sandy Bridge, or Ivy Bridge-E vs Ivy Bridge, etc.

The HEDT socket had more pins allowing it to fit more PCIe lanes and more RAM channels.

Then AMD started cramming more cores into the consumer AM4 socket, and Intel followed suit.

Now all of a sudden, you've already satisfied the core-heads with #1 above without needing a specialized socket. This makes it more difficult to justify the added expense of developing a specialized socket. AMD and Intel probably figured there was no need for that interim socket when they already had workstation sockets.

And this worked for a while. My Threadripper 3960x could be either a HEDT product or a workstation product depending on how you configured it. Stick Registered ECC RAM in it and you essentially have a workstation product. Stick high-performance non-ECC unbuffered RAM in it and you essentially have a HEDT platform. You can also configure the use of cores appropriately with "game mode" to avoid NUMA issues involved with the multiple CCDs.

But then DDR5 launched, and Registered and Unbuffered RAM is no longer pin compatible. So if you are making a motherboard you have to choose one or the other. Now you are forced to put Registered ram in a Threadripper build, hampering performance.

And then AMD also slowly stopped optimizing "game mode" on Threadrippers, further reducing performance in more consumer workloads.

Essentially this has turned Threadrippers (both pro and non-Pro) into workstation only platforms. Xeon is mostly the same.
 
Nobody wants this except you. Probably an exaggeration but very few people desire this.

I've had others chime in in this thread agreeing with me.

I don't think you ever responded to this, thoughts?

A PCIe 3.0 x4 slot can deliver 4GB/s bandwidth, theoretically giving you up to 32Gbps for your QSFP+ NIC. Wouldn't that suit your needs just fine? It might not reach that full theoretical speed but I'd expect to be somewhat close. Hopefully soon we'll see PCIe 4.0 x4 NICs come out since this is the same bandwidth as PCIe 3.0 x8 and that will invalidate your need for the mythical x16/x8. Or... simply eat a 2% - 3% performance loss on your GPU to run x8/x8; which is a perfectly acceptable trade-off to get very high speed networking on a consumer platform.

I've tried a few times, but I have found that most enterprise NIC's behave badly when they are not given the full number of lanes they were designed for. Not quite sure why, but it doesn't just scale with raw bandwidth figures. For instance my old first gen 10gig Intel AT2 adapters were the worst. In an 8x slot they worked fine. In a 4x slot downstream worked fine, but upstream was a horrible mess of poor and intermittent performance, even though the bandwidth of the one 4x slot should in theory have been sufficient to support one port.

There is something more going on there than just adding up raw available bandwidth. I wonder if Intel hand optimized what data goes over what lane or something like that, rather than just pooling all the available PCIe bandwidth and using it.

So, I think at the very least you'd need some sort of PCIe switching to make sure the NIC sees all 8 lanes locally, even if they are later pooled and combined to just 4 lanes upstream.
 
Nobody wants this except you. Probably an exaggeration but very few people desire this.
I'm not so sure. When I got an AM5/X670E chipset motherboard, I was very annoyed because there were only two x16 slots and one x1. From previous generations of motherboards I was accustomed to a lot more expansion slots.
 
I think it is a combination of reasons.

HEDT is really a combination of four things.

1.) More cores than regular consumer chips
2.) More PCIe lanes than regular consumer chips
3.) More RAM channels than regular consumer chips
4.) ...without the performance penalties in lightly threaded loads that workstation products have.

In the past, HEDT had its own socket. Think 9xx series Bloomfield vs 8xx series Lynnfield, or Sandy Bridge-E vs Sandy Bridge, or Ivy Bridge-E vs Ivy Bridge, etc.

The HEDT socket had more pins allowing it to fit more PCIe lanes and more RAM channels.

Then AMD started cramming more cores into the consumer AM4 socket, and Intel followed suit.

Now all of a sudden, you've already satisfied the core-heads with #1 above without needing a specialized socket. This makes it more difficult to justify the added expense of developing a specialized socket. AMD and Intel probably figured there was no need for that interim socket when they already had workstation sockets.

And this worked for a while. My Threadripper 3960x could be either a HEDT product or a workstation product depending on how you configured it. Stick Registered ECC RAM in it and you essentially have a workstation product. Stick high-performance non-ECC unbuffered RAM in it and you essentially have a HEDT platform. You can also configure the use of cores appropriately with "game mode" to avoid NUMA issues involved with the multiple CCDs.

But then DDR5 launched, and Registered and Unbuffered RAM is no longer pin compatible. So if you are making a motherboard you have to choose one or the other. Now you are forced to put Registered ram in a Threadripper build, hampering performance.

And then AMD also slowly stopped optimizing "game mode" on Threadrippers, further reducing performance in more consumer workloads.

Essentially this has turned Threadrippers (both pro and non-Pro) into workstation only platforms. Xeon is mostly the same.
So, are we doomed to never see any future HEDT motherboards (or chipsets?)
 
I'm not so sure. When I got an AM5/X670E chipset motherboard, I was very annoyed because there were only two x16 slots and one x1. From previous generations of motherboards I was accustomed to a lot more expansion slots.
Why were you annoyed after you bought the motherboard? You knew very well going into it what the expansion slots were. You didn't even have to look at the manual, simply a picture of the motherboard could do. But I would always suggest looking at the manual before a motherboard purchase, since sometimes things happen when you populate other slots (e.g. populate this one slot you lose 2 SATA ports or whatever...)

There are plenty of AM5 boards that have similar slot layouts to any previous generation consumer boards...
 
Why were you annoyed after you bought the motherboard?
Yes.
You knew very well going into it what the expansion slots were.
Actually not. I had this real issue with my X570 board and CPU, and I needed to replace them, pronto. So what I did, and I am not proud of it now, is that I simply ordered the "new generation" of the same model motherboard.

You didn't even have to look at the manual, simply a picture of the motherboard could do.
Now I realize that. And it's a lesson to myself. I won't make that mistake again. And I'll look much more carefully at all specs.
But I would always suggest looking at the manual before a motherboard purchase, since sometimes things happen when you populate other slots (e.g. populate this one slot you lose 2 SATA ports or whatever...)
Yeah. We lost 4 SATA ports, which is why I would have wanted more PCIe slots, so I could add in a SATA port adapter card.
There are plenty of AM5 boards that have similar slot layouts to any previous generation consumer boards...
Yes, I have looked at a lot of X670E boards, by ASUS, MSI, and Gigabyte.
 
So, are we doomed to never see any future HEDT motherboards (or chipsets?)

Who knows. I suspect the era of dedicated HEDT sockets might be over (but what do I know anyway).

Regular consumer sockets have become powerful enough that the remaining number of people (like us) who crave a dedicated HEDT platform for PCIe lanes, memory channels and the like are probably too few to make it financially worth it.

They were able to hold on for a while when workstation platforms could serve both purposes by offering either registered or consumer desktop RAM on the same platform, but now that DDR5 has ended that pin compatibility, I feel it is unlikely.

That said, it is not all doom and gloom.

As PCIe generations advance, the 4 lanes that AMD runs from the CPU to the chipset can theoretically keep providing more and more bandwidth. If the chipsets are designed with PCIe switching that can provide flexible outputs with higher-lane-count previous-generation slots, we could see PCIe configurations that make things more HEDT-like, if you buy the right motherboard.
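To put rough numbers on that idea (theoretical peaks only, ignoring switch and protocol overhead): a hypothetical Gen5 x4 chipset uplink carries about as much as a full Gen4 x8 slot hung off a switch behind it.

```python
# Per-lane rates in GT/s; Gen3 and later use 128b/130b encoding.
RATE = {3: 8.0, 4: 16.0, 5: 32.0}

def pcie_gb_per_s(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s (encoding only, no protocol overhead)."""
    return RATE[gen] * (128 / 130) * lanes / 8

uplink = pcie_gb_per_s(5, 4)  # Gen5 x4 uplink to the CPU: ~15.75 GB/s
slot = pcie_gb_per_s(4, 8)    # a switched Gen4 x8 slot: ~15.75 GB/s
print(uplink, slot)           # the uplink can feed one full Gen4 x8 device
```

Of course the moment two downstream devices burst at once the uplink is oversubscribed, which is the usual trade-off with chipset-switched lanes.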

Who knows if the numbers work out to make this a financially viable product for board makers though. There may just not be enough of us who want this. But I am trying to be hopeful.

They could just also use the PCIe lane issue as a differentiating factor to force those who really need more lanes onto more expensive workstation platforms, and giving those of us who want a HEDT-like experience the shaft, but who knows.
 
One of the primary reasons for x16/x8 and similar configurations have been multi-GPU setups. You will notice that the transition to only a primary x16 slot and little else happened around the same time multi-GPU setups went by the wayside. Gamers aren't demanding multiple x16 slots anymore and the vast majority of gamers aren't building systems for multiple duties. A gaming rig is purely a gaming rig (sometimes workstation), a NAS is purely NAS, and so on and so forth. There is very little demand for mass expansion capabilities on the consumer market.
 
Who knows. I suspect the era of dedicated HEDT sockets might be over



They could just also use the PCIe lane issue as a differentiating factor to force those who really need more lanes onto more expensive workstation platforms, and giving those of us who want a HEDT-like experience the shaft, but who knows.
^^^^
 
One of the primary reasons for x16/x8 and similar configurations have been multi-GPU setups. You will notice that the transition to only a primary x16 slot and little else happened around the same time multi-GPU setups went by the wayside. Gamers aren't demanding multiple x16 slots anymore and the vast majority of gamers aren't building systems for multiple duties. A gaming rig is purely a gaming rig (sometimes workstation), a NAS is purely NAS, and so on and so forth. There is very little demand for mass expansion capabilities on the consumer market.
I think that's part of it. NAS boxes and 10Gb networking getting somewhat affordable made having a lot of SATA ports on your desktop go from rare to rarer, and now with 3 to 5 M.2 slots per board and drives in that format priced similarly to SATA SSDs, even more so.

The "I need a lot of PCIe lanes and memory channels at the same time, I don't like the single-thread performance hit that Threadripper/Xeon takes, and I can't have two different computers for each task" crowd is also getting rare. And with how fast Threadripper 7000 can clock (above 5.2 GHz), how much of a hit is there nowadays? A lot of it seems to come down to not wanting to pay that much, for people who don't need it that much.

And people who wanted quad-channel DDR4 bandwidth now have easy access to it via dual-channel DDR5.

The cost added to the motherboard to have many PCIe 4.0/5.0 lanes, plus the cost on the CPU side of a larger memory controller and more PCIe lanes, versus how many people will never use the extra, will probably ensure that distinction stays for a long time (add the commercial product-segmentation factor that pushes in the same direction).

As for the in-between of the past, a quad-memory-channel Xeon built with the latest, fastest cores instead of 12th-gen ones, we'll see if competition plus a big enough market pushes them to build it and justify the hurt it would cause to the fully priced pro line.
 
The "I need a lot of PCIe lanes and memory channels at the same time, I don't like the single-thread performance hit that Threadripper/Xeon takes, and I can't have two different computers for each task" crowd is also getting rare. And with how fast Threadripper 7000 can clock (above 5.2 GHz), how much of a hit is there nowadays? A lot of it seems to come down to not wanting to pay that much, for people who don't need it that much.

There is more to it than just the clocks, though. Threadripper 7000 consistently performs worse in client workloads and games than a Ryzen 5 7600.
 
There is more to it than just the clocks, though. Threadripper 7000 consistently performs worse in client workloads and games than a Ryzen 5 7600.
That takes a rare level of insistence on gaming on the same machine, especially with the X3D options, which detached (whether people wanted it or not) the best gaming CPU from the best CPU for most other workloads. It feels normal to just remove gaming from the equation entirely; very few people would mind.

The gap between them and a 7600X in most ST workloads is quite thin (even in games it's about 90% of a 7600X, faster in some of them):
[benchmark charts: Cinebench single-thread, Photoshop, Baldur's Gate 3]


They are quite expensive too.
 
That takes a rare level of insistence on gaming on the same machine, especially with the X3D options, which detached (whether people wanted it or not) the best gaming CPU from the best CPU for most other workloads. It feels normal to just remove gaming from the equation entirely; very few people would mind.

The gap between them and a 7600X in most ST workloads is quite thin (even in games it's about 90% of a 7600X, faster in some of them):
[benchmark charts: Cinebench single-thread, Photoshop, Baldur's Gate 3]

They are quite expensive too.

Well that's kind of it.

That is huge: 35-40% slower compared to a 7950X3D.

As an enthusiast who doesn't necessarily need the workstation platform (other than an additional 8x slot) I'm not really willing to pay over 5x more compared to a 7950X3D and wind up losing 35-40% performance for what I do the most.

I'm not opposed to paying more, even a lot more, if it means I get unparalleled performance, but not to take a huge performance hit.

Certainly there has to be more people like me.
 
Certainly there has to be more people like me.
Wouldn't they just buy a cheap 5800X3D platform to game on? Or, if they have good money, a 7800X3D plus a quad-memory-channel Threadripper/Xeon system for the workstation... For games you can go with a small, cheap motherboard, since you need almost nothing (one x16 slot, 3 NVMe, that's it), cheap cooling, etc. Splitting the two seems to make sense.

With the current tech, if you don't take the X3D option (on the AMD side) you take a big gaming hit, and if you do take the X3D option you take a big hit in most other workloads...

The big issue to me is the price tag more than the gaming performance; if they were cheaper, one could simply build two machines.
 
Wouldn't they just buy a cheap 5800X3D platform to game on? Or, if they have good money, a 7800X3D plus a quad-memory-channel Threadripper/Xeon system for the workstation... For games you can go with a small, cheap motherboard, since you need almost nothing (one x16 slot, 3 NVMe, that's it), cheap cooling, etc. Splitting the two seems to make sense.

With the current tech, if you don't take the X3D option (on the AMD side) you take a big gaming hit, and if you do take the X3D option you take a big hit in most other workloads...

The big issue to me is the price tag more than the gaming performance; if they were cheaper, one could simply build two machines.

I guess.

I have thought about doing this. It would certainly be cheaper to stay on top of platform upgrades if I have one workstation platform that is infrequently upgraded and a gaming platform with much cheaper parts that gets more frequent upgrades...

But I have to say, this really bothers me. Like almost irrationally so. It's like it goes against my religion. PC's are supposed to be general purpose and do everything.

If it's a PC just for games, it's really just a console, and that gives me major "yuck" vibes.
 
PC's are supposed to be general purpose
I knew people who hated GPUs in the 90s for that reason. Fully general compute running on genuinely general-purpose chips struggles to improve performance per watt and per dollar fast enough for the specialized competition not to become too attractive, despite the inconvenience (the moment it gets good enough for things like game physics, sound, etc., people jump back in).
 
With the current tech, if you don't take the X3D option (on the AMD side) you take a big gaming hit, and if you do take the X3D option you take a big hit in most other workloads...
Why is that? I thought (I was wrong, maybe?) that x3d was all upside for performance. BTW I'm not a gamer, but I do a lot with Adobe software.

The big issue to me, is the price tag more than the gaming performance, would they be cheaper, one could simply build 2 machines.
That would be OK if SWMBO were OK with all the space needed. I know in advance that "my" SWMBO would not be. Maybe if I got one of those really fat cases that can hold 2 motherboards? :eek:
 
But I have to say, this really bothers me. Like almost irrationally so. It's like it goes against my religion. PC's are supposed to be general purpose and do everything.

If it's a PC just for games, it's really just a console, and that gives me major "yuck" vibes.
What he said.
 
Maybe if I got one of those really fat cases
Or pick two cases that look like one big one when they're touching each other.

Why is that? I thought (I was wrong, maybe?) that x3d was all upside for performance. BTW I'm not a gamer, but I do a lot with Adobe software.
I think in part it is because the cache stacked on top makes them harder to cool, so they are run at lower frequency to use fewer watts.

Not that big of a deal (usually a small difference), but in some things a 7800X3D can even come in a tiny bit lower than a non-X 7600/7700. If it were all upside, I imagine they would have made a 7950X3D that was all-in on added cache instead of a mixed design.
 
What motherboard do you use? Mine has a DisplayPort in but I haven’t messed with using it. I’m guessing the ProArt X670e?

Zarathustra[H] I did find an Atto product that claimed to use TB4 to some type of 40Gb fiber, but it's discontinued and still seemed pretty expensive second hand. And I didn't find any solid confirmation that it actually runs at the full 40Gb. Lots of 25Gb options through TB3/4, though I think that would be slower than just running your card in a limited x4 slot.
The TB4 to QSFP adapters were not popular, buggy as shit, and best handled with an external PCIE enclosure with an actual ~card~ in it. They also get hot as shit (like hot enough to melt things), which meant that "external" device either weighed a few pounds (heat sinks) or cooked itself. Many of those cards are either actively cooled, or need REAL airflow too. Even 10G cards are that way.
Yeah, unfortunately that board alone is more than an Asus Creator X670E/7950X combo, and then you need to buy a Threadripper Pro.
You can get non-pro for it.
Well, yeah, but it is a Threadripper Pro. We know we can get many PCIe lanes on workstation boards, but the problem with workstation boards is that they need workstation CPU's, and these days workstation CPU's suck for anything but workstation loads.

What I am looking for is the old concept of HEDT: an all-in-one, no-compromises build. It can be a prosumer workstation-like machine AND a consumer game machine at the same time and excel at both. Essentially a high-end consumer desktop that can both attain top lightly-threaded speeds in consumer workloads and has some light workstation-like features like many cores and large numbers of PCIe lanes. Sadly this concept appears to have been dead for about 3-5 years now.
Yup. I still want my "it does everything very well, but nothing excellent" build. I had them for years. As my needs grew, I finally found the "perfect" system in my 3960X box just like Zara did.

And now, 4 years old, it's starting to flake out in ways. I want to replace it. I'm exceedingly torn, as the "consumer" side of what I do would be happy on a 7950X3D. But the "crunchy workload" side wants >128GB of RAM, and 192GB of DDR5 is EXPENSIVE as shit. Way more than 128-192GB on 4 slots should cost, since 96GB DIMMs are nuts.
When I bought my Threadripper 3960x in 2019 (and its predecessor, my Core i7 3930k x79 build in 2011, or any x99 or x299 system) it performed equivalently to similar consumer CPU's in consumer loads, but also had the capability of adding a ton of RAM, had lots of PCIe lanes for expansion and some extra cores. It could do everything.
Yup. My x299 box is just a bit behind my 10900K box. The 3960X box is about as fast as my 3950X box (the Threadripper takes OCs like a damned champ).
Now the problem is that I have to choose one or the other: either consumer or workstation. Or I have to build two systems (which I really don't want to do), one for each task.
I have a dedicated gaming system - but it's a SHARED gaming system. No work stuff, no personal stuff - the wife, friends, whoever gets to use it. It's a pure consumer entertainment machine.
The whole point of this thread is that we want 16x-8x in a consumer board. AM5/LGA1700 (or LGA1851 for next gen Intel). This is not because of cost, but because current workstation chips are MUCH slower at lightly threaded loads than consumer chips currently are. I'd happily pay extra for a motherboard that supported the type of lane layout that I need.

There are three things working against current workstation designs compared to consumer chips: lower clocks, poor core layouts/NUMA, and the fact that with DDR5, registered and unregistered RAM is no longer pin compatible, meaning you pay a significant RAM latency penalty when you are forced to use Registered/Buffered ECC RAM in these systems.
Seconded
And the result is miserable. Anything current Threadripper (Pro or not), EPYC, or Xeon just plain sucks for anything but server/workstation loads.

If you try to run a game on the beastly $5000 Threadripper 7980x on a $1500 motherboard you are going to be bested - performance wise - by a $200 Ryzen 5 7600 on a $150 motherboard.

Honestly, the concept of the "no compromises" HEDT system is dead unless we start getting better consumer motherboards with better slot options, which might save it just a little bit.
Thirded.
I think it is a combination of reasons.

HEDT is really a combination of four things.

1.) More cores than regular consumer chips
2.) More PCIe lanes than regular consumer chips
3.) More RAM channels than regular consumer chips
4.) ...without the performance penalties in lightly threaded loads that workstation products have.
Yup.
In the past the HEDT had its own socket. Think 9xx series Bloomfield vs 8xx series Nehalem, or Sandy Bridge-E vs Sandy Bridge, or Ivy Bridge-E vs Ivy Bridge, etc. etc.

The HEDT socket had more pins allowing it to fit more PCIe lanes and more RAM channels.

Then AMD started cramming more cores into consumer AM4 sockets, and intel followed suit.

Now all of a sudden, you've already satisfied the core-heads with #1 above without needing a specialized socket. This makes it more difficult to justify the added expense of developing a specialized socket. AMD and Intel probably figured there was no need for that interim socket when they already had workstation sockets.
And you satisfied most RAM-heads with larger DDR5 DIMMs - just leaving PCIe, and the extremes on both ends.
And this worked for a while. My Threadripper 3960X could be either a HEDT product or a workstation product depending on how you configured it. Stick registered ECC RAM in it and you essentially have a workstation product. Stick high-performance non-ECC unbuffered RAM in it and you essentially have a HEDT platform. You can also configure the use of cores appropriately with "game mode" to avoid the NUMA issues involved with the multiple CCDs.

But then DDR5 launched, and registered and unbuffered RAM are no longer pin compatible. So if you are making a motherboard, you have to choose one or the other. Now you are forced to put registered RAM in a Threadripper build, hampering performance.

And then AMD also slowly stopped optimizing "game modes" on Threadrippers, further reducing performance in more consumer workloads.

Essentially this has turned Threadrippers (both pro and non-Pro) into workstation only platforms. Xeon is mostly the same.
Sigh. Sapphire Rapids was the first disappointment. Storm Peak... the second.
I've had others chime in in this thread agreeing with me.

I've tried a few times, but I have found that most enterprise NICs behave badly when they are not given the full number of lanes they were designed for. Not quite sure why, but it doesn't just scale with raw bandwidth figures. For instance, my old first-gen 10-gig Intel AT2 adapters were the worst. In an 8x slot they worked fine. In a 4x slot, downstream worked fine, but upstream was a horrible mess of poor and intermittent performance, even though the bandwidth of the one 4x slot should in theory have been sufficient to support one port.
EXTREMELY true. These cards were designed to work one way. Same as using a server board for consumer stuff - it acts WEIRD, as those boards REALLY expect to be used with server OSes. I've done supermicro server boards with Windows 10/7/XP. It was... funny.

They have specific expectations because the vendor can say "get fucked, that's not how it was meant to be used" if it acts weird in any other way. So not tested, not validated, and won't be fixed if buggy.
There is something more going on there than just adding up raw available bandwidth. I wonder if Intel hand optimized what data goes over what lane or something like that, rather than just pooling all the available PCIe bandwidth and using it.

So, I think at the very least you'd need some sort of PCIe switching to make sure the NIC sees all 8 lanes locally, even if they are later pooled and combined to just 4 lanes upstream.
Possible and likely. The PLX chip would work fine.
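One quick way to see whether a card negotiated fewer lanes than it was built for (on Linux, at least) is to compare LnkCap against LnkSta in `lspci` output. A minimal sketch; the device address `01:00.0` is a made-up example, and the parsing step runs against a captured sample line rather than live hardware:

```shell
# On a live system (device address 01:00.0 is a placeholder for your NIC):
#   sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'
# LnkCap shows what the card supports; LnkSta shows what it actually negotiated.

# Parsing the negotiated width out of a captured LnkSta line:
lnksta='LnkSta: Speed 8GT/s (downgraded), Width x4 (downgraded)'
width=$(printf '%s\n' "$lnksta" | sed -n 's/.*Width x\([0-9]*\).*/\1/p')
echo "negotiated width: x$width"
```

An 8x card reporting `Width x4` in LnkSta is exactly the downgraded situation described above, even when the card otherwise appears to work.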
Yes.

Actually not. I had this real issue with my X570 board and CPU, and I needed to replace them, pronto. So what I did, and I am not proud of it now, is that I simply ordered the "new generation" of the same model motherboard.


Now I realize that. And it's a lesson to myself. I won't make that mistake again. And I'll look much more carefully at all the specs.

Yeah. We lost 4 SATA ports, which is why I would have wanted more PCIe slots, so I could add in a SATA adapter card.

Yes, I have looked at a lot of X670E boards by ASUS, MSI, and Gigabyte.
The thing that gets me here too - options on 10G are getting more limited. MSI has it on the Ace/Godlike ($800/1200), Gigabyte on the Extreme (1000), and only Asus has it on a reasonable board (Proart - 500). Every other maker/board? No 10G. So I either spend HEDT cost on a board, deal with the ProArt, or have to convince a USB 10G NIC to play nice (ugh). So that's a slot I lose right there.
Who knows. I suspect the era of dedicated HEDT sockets might be over (but what do I know anyway).

Regular consumer sockets have become powerful enough, that the remaining number of people (like us) who crave a dedicated HEDT platform for PCIe lanes, memory channels and the like are probably too few to make it financially worth it.
Sad but true.

I think that's part of it. NAS boxes and 10Gb networking getting somewhat affordable made having a lot of SATA ports on your board go from rare to rarer, and now with 3 to 5 M.2 slots, and drives in that format priced similarly to SATA SSDs, even more so.
Gotta have slots to build the NAS. I like Synology/etc, but it's cheaper to BYO using older parts if you have them - and the skill - vs buying it. I have a synology. I also have home-built NAS.
The "I need a lot of PCIe lanes and memory channels at the same time, I don't like the single-threaded performance hit that Threadripper/Xeon has, and I can't have two different computers, one for each task" crowd is also getting rare. And with how fast Threadripper 7000 can go (above 5.2 GHz...), how much of a hit is there nowadays? It seems more a matter of not wanting to pay that much for something you don't need that much.

And people who wanted quad-channel DDR4 bandwidth now have easy access to it via dual-channel DDR5.
Bandwidth yes. Amount - no. My 3960X will gladly do 256GB of RAM (at JEDEC speeds, mind you). Show me how to get 256GB on a 7950X.
 
You can get non-pro for it.

Not according to Asrock: https://www.asrock.com/mb/AMD/WRX90 WS EVO/index.asp#CPU

CPU Support List

Socket | Family                 | Model                  | Power | Core       | Frequency | Cache | Validated BIOS | Bootable since BIOS
TR5    | Ryzen Threadripper Pro | 7995WX (100-000000884) | 350W  | Storm Peak | 2.5GHz    | 96MB  | All            | All
TR5    | Ryzen Threadripper Pro | 7985WX (100-000000454) | 350W  | Storm Peak | 3.2GHz    | 64MB  | All            | All
TR5    | Ryzen Threadripper Pro | 7975WX (100-000000453) | 350W  | Storm Peak | 4.0GHz    | 32MB  | All            | All
TR5    | Ryzen Threadripper Pro | 7965WX (100-000000885) | 350W  | Storm Peak | 3.8GHz    | 24MB  | All            | All
 
Why is that? I thought (I was wrong, maybe?) that X3D was all upside for performance. BTW I'm not a gamer, but I do a lot with Adobe software.

If you look at the spec pages for X3D models, they tend to be clocked lower than the standard variants.

This is likely because the massive amount of cache makes them harder to cool.

The end result is that the X3D variants excel at tasks where larger CPU caches are beneficial (games are among them) but are slower at tasks where a large cache matters less and raw compute power matters more (small datasets that the CPU hits repeatedly).

It's tough for me to say exactly which workloads fall in each camp without testing, but gaming is fairly well known to perform well on X3D variants.
 
Zarathustra[H] I did find an Atto product that claimed to bridge TB4 to some type of 40Gb fiber, but it's discontinued and seemed to be pretty expensive secondhand still. And I didn't find any solid confirmation that it actually runs at the full 40Gb. Lots of 25Gb options through TB3/4, though I think that would be slower than just running your card in a limited x4 slot.

The TB4 to QSFP adapters were not popular, buggy as shit, and best handled with an external PCIe enclosure with an actual ~card~ in it. They also get hot as shit (like hot enough to melt things), which meant that "external" device either weighed a few pounds (heat sinks) or cooked itself. Many of those cards are either actively cooled, or need REAL airflow too. Even 10G cards are that way.


I'm going to be honest. I build and mess with server stuff all the time, but I don't even know what TB4 is?

Thunderbolt?

I have honestly yet to use anything thunderbolt. I tend to think of that as an Apple standard... Or no. Was that Firewire? I can't remember. I've never used any of them, favoring internal discrete components.

I tend to minimize my reliance on things like USB, Thunderbolt, etc., other than mouse, keyboard, DAC and microphone and on the extremely rare occasion, camera. ( I don't video chat ever, even when I have a camera it remains off)
 
I think that's part of it. NAS boxes and 10Gb networking getting somewhat affordable made having a lot of SATA ports on your board go from rare to rarer, and now with 3 to 5 M.2 slots, and drives in that format priced similarly to SATA SSDs, even more so.

To be fair, if you care about your data (and if you don't why even bother hoarding it?) a NAS system should probably be built on server hardware. Xeon/EPYC with double fault tolerant registered ECC RAM, as well as redundant drives and backups to a second storage media (even better if that second media is offline, but that gets tricky) would be ideal, otherwise bit rot or catastrophic failure is just a matter of time.
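For anyone not running ZFS, a crude approximation of its end-to-end checksumming is a checksum manifest you re-verify periodically. A minimal sketch using `sha256sum` (the directory and file names are made up for illustration):

```shell
# Build a checksum manifest for a data directory, then re-verify it later.
# ZFS does this per-block and automatically; this is the manual, poor-man's version.
dir=$(mktemp -d)
echo "important data" > "$dir/photos.tar"

# Record checksums once, after the data is known-good
( cd "$dir" && sha256sum photos.tar > manifest.sha256 )

# Re-verify later; silent corruption (bit rot) shows up as FAILED
( cd "$dir" && sha256sum -c manifest.sha256 )
# prints "photos.tar: OK"
```

This only detects rot, it can't repair it; that's what the redundant drives and the second backup copy are for.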

Gotta have slots to build the NAS. I like Synology/etc, but it's cheaper to BYO using older parts if you have them - and the skill - vs buying it. I have a synology. I also have home-built NAS.

I never understood IT departments' obsession with Synology and other solutions-in-a-box. I'd always prefer a good open-source solution like ZFS. Dealing with proprietary hardware is inflexible and limiting. And if/when something goes wrong, knowing you can just pop the drives into any machine that can connect to them to try to rescue data or fix things, rather than hunting for a second expensive box that speaks the same proprietary protocol, is huge.

I definitely understand that man-hours are a bigger driver than system cost in IT, but still, it seems shortsighted. Putting together a good ZFS NAS may cost a few extra hours up front, but it will save a ton of man-hours if/when the shit hits the fan. It may even save the company if that data is mission critical. It doesn't even have to be a custom server. A Dell/HPE server with an HBA and TrueNAS Core could make a great enterprise NAS. It can even talk to Active Directory! You could even run it as a VM with HBA passthrough if your needs aren't big enough to require a dedicated physical server.

But I have come to understand that this is pretty much corporate IT in a nutshell: shortsightedness and minimizing cost and effort until the shit hits the fan, and then shrugging and saying "oh well".

it's really stupid.
 