Intel Core i9-13900KS Review - The Empire Strikes Back

erek

[H]F Junkie
Joined
Dec 19, 2005
Messages
11,007
Impressive

"Cooling such high heat output isn't easy on the CPU cooler either. Moving 350 W from a tiny silicon die through a thick heatspreader is no easy task. Even with a large AIO sitting on top of the IHS you'll be close to 100°C when heavily loaded. With an air cooler you'll regularly thermal throttle. This is not a huge deal, as modern processors are very good at keeping a certain target temperature by slightly reducing clocks, without performance falling off a cliff—it's still not what you've spent all that money for. No doubt, you can undervolt the 13900KS, and dial the power limits back, but then why buy a KS in the first place? Things aren't much better on the Zen 4 side, because AMD wanted to keep cooler compatibility with Socket AM4, so they had to install an extra thick heatspreader on the AM5 CPUs, which makes them difficult to cool, too, but it's easier due to the lower overall heat output.

Although the unlocked multiplier makes overclocking technically easy, it is limited by the cooling system. Even when the thermal limit is raised from 100°C to the maximum of 115°C, it is difficult to push voltage much further, even with an AIO. At least Intel is giving us the option to adjust the temperature limit; AMD has no such feature. Manual overclocking is also complicated by the fact that the low-thread-count clocks on two cores are so high (6.0 GHz), while the other cores run at lower speeds. My highest all-core OC was 5.6 GHz, which results in minimal performance gains, because the CPU runs at 5.6 almost all the time anyway at stock. I think it's about time that Intel provided us with better overclocking controls for precision adjustments. Additionally, there is a need for an overhaul of Intel XTU, as AMD's Ryzen Master provides a superior user experience. It's high time for Intel to update their boosting technologies as they are still relying on almost decade-old algorithms. In contrast, AMD has been consistently refining their methods every few years and is achieving significant improvements by incorporating more fine-grained mechanisms that consider many more variables on the running system.

Intel has announced an MSRP of $700 for the 13900KS, which matches the AMD Ryzen 9 7950X3D. Compared to the 13900K, the price increase is $130—for a few percent in performance and some extra power consumption—very hard to justify. On the other hand, you're getting a processor that impresses with fantastic performance in all areas—applications and gaming. With the Ryzen offerings you'll have to make more compromises: the 7800X3D is the best processor for gaming, and only $450, but it's a bit slower in applications. The 7950X3D is among the fastest in gaming and applications, but requires custom AMD software for game detection and processor thread management that doesn't always do the right thing. For a more application-focused experience you could opt for the 7950X, but you'll be losing out on a bit of gaming performance due to the lack of 3D V-Cache and the dual-CCD design. At the end of the day all these processors are really, really good at everything and you'll have a hard time noticing much of a subjective difference. On the other hand, I can imagine a lot of people out there with deep pockets that want "the fastest," especially for content creation, who don't care much about power and also like the bragging rights for having the Limited Edition KS. For the vast majority of gamers and users out there, a 13700K or 7700X would be the much better choice—not much slower, but the motherboard and memory can be bought for the price difference to the 13900KS. Last but not least, the 5800X3D is a great gaming option, too. Let's hope that Intel's Meteor Lake can make progress with power efficiency, and then everything else will come together, too—they have the IPC and performance."
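A quick back-of-the-envelope check on the cooling numbers in the quote: with a combined die-to-coolant thermal resistance around 0.2 °C/W (an assumed, illustrative figure, not a measurement), 350 W lands almost exactly at the throttle point.

```python
# Rough check of the "close to 100 degC at 350 W" claim in the review above.
# The 0.2 degC/W combined thermal resistance (die -> IHS -> paste -> cooler)
# is an illustrative assumption, not a measured figure.

def die_temp_c(package_power_w, r_total_c_per_w=0.2, coolant_temp_c=25.0):
    """Steady-state die temperature estimate: T = T_coolant + P * R_thermal."""
    return coolant_temp_c + package_power_w * r_total_c_per_w

print(f"{die_temp_c(350):.1f} degC")  # 95.0 degC, right at the throttle point
print(f"{die_temp_c(253):.1f} degC")  # 75.6 degC at a stock-style 253 W limit
```

Even a modestly better cooler only shaves a few degrees at this power level, which is why undervolting helps so much more than a bigger radiator.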

Source: https://www.techpowerup.com/forums/threads/intel-core-i9-13900ks.307608/
 
You think 350 W is impressive? The new Xeon, when OC'd, managed just under 2 kW…

https://videocardz.com/newz/intel-xeon-w9-3495x-hits-1881-watts-during-overclocking-session

Not sure what they are feeding their processing node, but it is impressive.

Yeah, you can do the whole "but AMD…" thing.
That is a TSMC victory, not AMD's.

How the Intel CPUs don't just ignite and instantly turn into a puff of smoke and burnt silicon is beyond me. I can't wrap my brain around putting that much juice through such a tiny thing.
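For a sense of scale, the power density works out like this (the ~257 mm² die area is the commonly cited Raptor Lake-S figure, so treat it as approximate):

```python
# Power density of a 13900KS at 350 W over a ~257 mm^2 die
# (the commonly cited Raptor Lake-S die size; treat as approximate).
package_power_w = 350
die_area_mm2 = 257
density_w_per_mm2 = package_power_w / die_area_mm2
print(f"{density_w_per_mm2:.2f} W/mm^2")  # 1.36 W/mm^2

# For scale: a 2 kW stove burner spread over an 18 cm circle is ~0.08 W/mm^2,
# so the die is pushing roughly 17x the power density of a red-hot hotplate.
```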
 
Lol. Okay. Pretty sure the design of the chip counts for something.
Yeah, but TSMC built the tools AMD uses to design its floor plans; TSMC designed the transistors and the cache structures. NVIDIA designed the AI that TSMC uses to optimize their tracing patterns, as TSMC uses the cuLitho packages that NVIDIA has been working on with ASML for the past few years to fine-tune.

It’s a very complicated relationship.

I am not downplaying AMD's involvement, but AMD can't do anything they are doing without TSMC and their tools or processes.
 
I have the 13900KS running stock clocks, 12 cores only, 5.6 P / 4.3 E, and it runs amazing. Low voltages, and it runs cool considering its super high clocks. If I tune it down 100 MHz to 5.5, I think it might even run in the high 1.1 V range. This KS runs on very low voltage if you don't run all 16 E-cores.
 
Why even buy a KS if you're just going to kneecap it? Buy a cheaper CPU. You're not even running it anywhere near its max clock speed.
 
Because no games need the speed right now. Like all my previous CPUs, I'll end up overclocking it down the line; I tend to keep my systems for a while. I already have the 6 GHz all-core settings written down, don't worry about that one bit, lol. It's just that I don't need to pump that amount of voltage into the chip at this very moment.
Also, I love that this binned CPU does these super high clock speeds at a much lower voltage. It runs cooler in my PC room.
 
 
You are. You did.

Sorry, but AMD crushed Intel, and no amount of attempts to discredit them will change that. Good day, sir.
It ain't over until it's over. AMD hasn't even managed to crush shit yet. Their market penetration is a fraction of what Intel's is. Widespread adoption hasn't happened yet.

AMD will be like a fart in the wind if they don't start dominating soon. I haven't seen a single product come out of them that indicates that they have "crushed" Intel.

14th Gen Intel might be shit, but they're already implementing tiles based on the TSMC 4 and 5 nm nodes. It's gonna be real interesting in the next couple of years. You had better hope that the tables don't turn.

AMD is a fabless company that is tiny by comparison to Intel.

Always been a big fan of AMD, I run a bunch of their parts and graphics cards in my systems, but... I don't see them crushing anyone, yet.
 
Lol, why? Who actually cares at this point other than people who are overtly attached to what company they buy? If intel makes a good enough product at a good price that doesn't consume twice the power for the same work that would be a welcome change.
I guess because I keep seeing people talking like Intel has been blown off the face of the Earth. While AMD has had a more efficient architecture for years now, they're still struggling to claw market share away from Intel.

I agree with you, may the best architecture win... But I hope that no architecture wins, definitively. If one does, there will be zero competition.
 
But I hope that no architecture wins, definitively. If one does, there will be zero competition.
This is true. I like to weigh my options. I don't like when people actively cheer on the failure of the competition like during the Bulldozer days. Why would anyone ever want to have a single choice?

As for the "blown away" stuff, I think that had more to do with responding to the silly notion that if a company uses the silicon of someone else, it suddenly doesn't count. I mean almost everyone uses TSMC, Samsung, etc. Should we discredit all of the other chips, too?
 
While AMD has had a more efficient architecture for years now, they're still struggling to claw market share away from Intel.
While desktop on both sides has been struggling for sales, AMD has no problem taking the high-margin server share away from Intel, for this exact reason: efficiency working its way into total cost of ownership.
 
Why? We'd better hope that the tables turn every 5-7 years, no? To keep competition alive in the x86 space.

I imagine everyone wishes that Intel gets a win soon and then AMD retakes the lead soon after, and so on.
It was more a response to the individual's post (the one I was responding to) about Intel's demise than any wish for less competition. I agree with you.
 
While desktop on both sides has been struggling for sales, AMD has no problem taking the high-margin server share away from Intel, for this exact reason: efficiency working its way into total cost of ownership.
Maybe, but it was only an increase of 2% as of the end of 2022, with Intel retaining 74% and the rest on ARM. AMD was at 20% server market share, and they had lost something to the tune of 5% of their desktop market share (from 2021) to get there.

They may be taking away market share from Intel but Intel is far from dead.
 
Intel 7 isn’t even as dense as TSMC N7 and has much worse power efficiency. It doesn’t come close to touching TSMC N5 or N4.

I look forward to when Intel and AMD are on comparable nodes; as it stands, Intel is a full two generations behind there.
 
AMD can’t take more market share without cutting back in other places. AMD already buys as much silicon as TSMC can supply them; if they want to sell more of X, they need to cut Y, because supply and time are finite.
 
Impressive performance in one sense; in another sense it seems an almost useless offer.

Other than going all-in on a maxed-out build, it seems impossible for the 13900KS to be a better option than spending the same amount on a 13900K system and using the money elsewhere, even without taking into account the extra PSU/electricity/cooling cost (not an issue if you are just gaming, but at the moment the performance for that scenario seems to be about the same anyway, and that was tested with a 420 mm water cooler).

$160 more buys the comfort of a fancy 4 TB NVMe drive with extra legroom (or a second 2 TB to add to the one you have), or a 7900 XT instead of a 6950 XT (or the XTX instead of the XT), or very fast DDR5 instead of a 6000 MHz-type kit.

Hard to see going for this to play games over a 7800X3D, given:
Averaged over the 45 tests in our application test suite, the Core i9-13900KS can impress. It's clearly the fastest CPU in the test group, but the differences are small. Compared to the 13900K, the performance uplift is 3.5%

Obviously it could make sense for a no-compromise build in which that extra money over the 13900K is not taken away from something else, but that would be quite niche.
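As a rough value check, using the ~$160 street premium mentioned in the post and the 3.5% application uplift quoted from the review:

```python
# Dollars per percentage point of application performance gained, using the
# figures in the post above: a ~$160 street premium and a 3.5% uplift.
premium_usd = 160
app_uplift_pct = 3.5
print(f"${premium_usd / app_uplift_pct:.0f} per percent")  # $46 per percent
```

That ratio is what makes the KS a halo part rather than a value pick; the same dollars buy far more than 3.5% elsewhere in the build.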
 
AMD can’t take more market share with out cutting out from other places. AMD already buys as much silicon as TSMC can supply them with, if they want to sell more of X they need to cut Y because their supply and time are finite.
Hard to make out what's true or not, but was there not some mention from TSMC of clients like AMD cutting back orders after the crypto boom:
https://www.digitaltrends.com/computing/nvidia-amd-apple-try-to-cut-their-tsmc-orders/
https://www.tweaktown.com/news/9001...ract-companies-like-nvidia-and-amd/index.html
 
TSMC doesn’t actually have a lot of capacity for their N5, N4, and N3 nodes.
Apple alone can max out N3’s capacity, and Apple moves a relatively small amount of silicon.
 
I had my 13900K before the 7800X3D came out. I agree, there's no reason to get a 13900KS, as I was able to narrow the performance difference between my chip and the KS to 1-2% on DDR4!

We will see what they do with the Raptor Lake Refresh at the end of the year. I'm curious whether they will re-release an up-clocked variant of the K/KS, add more cache, or move it to the new 14th Gen platform and the new socket and call it new.
 
Now, I have been getting flak over the Intel 7 process from all the density scholars on here... I'm surprised one hasn't emerged to come after you as well.

Intel's 14th Gen integrates TSMC 4/5 nm tiles on their chips. I think they will debut in mobile first and go from there. It will be interesting to see if that puts Intel back into "parity" with AMD.
 
Intel kind of deserves to get flak about density. Back when Intel 10 nm was promised to be the best thing ever, that's all anyone rooting for Intel talked about. Much like how efficiency used to be really big back vs. Bulldozer. I personally never cared about density. We'll see the actual numbers when it's ready.
 
Dude, totally agree. I got shit on so many times by people bragging about how 10 nm Intel was the second coming of Christ. I thought computer chips were supposed to be getting more efficient and more powerful, and they have just gotten less efficient and more powerful, across the board. AMD may hold the efficiency crown due to node advantage, but do they really? Efficiency has been thrown out the window for diminishing returns on performance. The 7000 series just looks like an overclocked 5000 series on a smaller process node with some pipeline enhancements and tweaks... It reminds me of Intel, regurgitating the same shit with a die shrink.
 
The 4/5 tiles are for the GPU side for sure, as TSMC already makes those for Intel’s Ponte Vecchio and Arc. I don’t know if they will be used for the CPU part, though.

I am not expecting Intel to reach actual parity with TSMC for a few more years. ASML is behind on production, but TSMC is having a really hard time with 3 nm. From the reports, their 3 nm failures are a dead ringer for Intel’s 10 nm failures, and the two are working to collaborate on a fix. But there was something about Intel requesting a separate engineering team in a separate facility to minimize the chance of any of Intel’s stuff “accidentally” ending up in the hands of AMD, NVIDIA, Apple, and the slew of others.

And Intel 7 speaks for itself: on paper it is very close, but in actual production it falls behind. It is tough as nails, though. I can’t imagine putting the sorts of voltages Intel’s lineup is capable of through any of TSMC’s stuff right now; some of the Xeons are doing 900+ watts when OC’d, and not even TSMC’s GPU-optimized processes would handle those loads without melting.
 
TSMC and Intel are well into the diminishing-returns zone of node shrinks. We are at a point where a 1.2x increase in density costs 1.5x as much, and it’s going to get worse from here.

This is why Intel and AMD are looking to add specific silicon for specific jobs and will start making further use of accelerators to increase efficiency.
Intel’s tile approach will do a lot here, as they can very easily mix and match different processes and different vendors going forward. I also expect to see an uptick in SoC solutions, especially for laptops.
 
Efficiency has been thrown out the window for diminishing returns on performance.
Depends on what for, I imagine?
(attached chart: gaming power efficiency, efficiency-gaming.png)

The 7800X3D is almost twice as efficient as the 5800X released not so long ago, and +40% over a 5700X in gaming.

In multithreaded workloads a 7900 would be about double a 5900X, and +50% over a 5950X, if you want efficiency. I am pretty sure that if you run a 7950X at the same watts as a 5950X there is a big gain; it is just that they are pushed farther into the diminishing-returns zone because they handle high heat very well.
 
Maybe, but it was only an increase of 2% as of the end of 2022, with Intel retaining 74% and the rest on ARM. AMD was at 20% server market share, and they had lost something to the tune of 5% of their desktop market share (from 2021) to get there.

Let's take a look at '22

"AMD’s total share of the CPU market (excluding IoT and custom silicon) rose from 23.3% in 2021 to 29.6%, while Intel’s share fell from 76.7% in 2021 to 70.4% in 2022.

In the server market, AMD’s total market share grew from 10.7% at the start of 2022 to 17.6% at the end of the year, while Intel fell from 89.3% at the start of the year to 82.4%."
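The quoted share numbers can be read two ways; here is a quick sketch of percentage points versus relative growth, using the figures in the quote above:

```python
# The quoted figures as percentage points vs. relative growth: the same move
# reads very differently depending on which number you quote.

def rel_growth_pct(old_share, new_share):
    """Relative change of a market share, in percent of the starting share."""
    return (new_share - old_share) / old_share * 100

# Overall CPU share (excluding IoT/custom), 2021 -> 2022
print(f"{rel_growth_pct(23.3, 29.6):.0f}%")  # 27% relative growth (+6.3 points)
# Server share, start -> end of 2022
print(f"{rel_growth_pct(10.7, 17.6):.0f}%")  # 64% relative growth (+6.9 points)
```

So a "few points" of server share is actually AMD's business growing by nearly two thirds in a year, which is why both framings keep showing up in the same argument.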
 
And this clearly shows us that Intel is far from dead.

So, I don't know what squabbling about a couple percentage points here or there actually shows. AMD made some ground, and depending on which article you cite, they actually lost ground in the desktop space to Intel...
 
Depends on what for, I imagine?
View attachment 566235

The 7800X3D is almost twice as efficient as the 5800X released not so long ago, and +40% over a 5700X in gaming.

In multithreaded workloads a 7900 would be about double a 5900X, and +50% over a 5950X, if you want efficiency. I am pretty sure that if you run a 7950X at the same watts as a 5950X there is a big gain; it is just that they are pushed farther into the diminishing-returns zone because they handle high heat very well.
You are comparing CPUs, not actual power consumption.

I am speaking from a very specific point of view. In the last 20 years the power consumption of CPUs hasn't budged. In fact, in this generation of AMD it's gone up since the last iteration. So, while you may be gaining performance per watt, the watts have increased, not gone down. This applies to Intel as well; they threw efficiency out the window for raw performance.
 
I wonder how good the 7000 series is vs. the 5000 series at the same power constraints and same clock speeds. Probably not much more than 5-15% better.

If the 7000 series could pull off those impressive gains at, say, 45-65 watts, I would say AMD had achieved something amazing. But they can't, and neither can Intel.
 
I am comparing performance per watt; I assumed that's what was meant by efficiency.
It's cool that the chips are more efficient per watt, but the watts keep going up. That's not efficiency.

Show me a 5950X and a 7950X at the same clock speeds and give the 7950X half the power. I bet you it gets its ass handed to it.
 
It's cool that the chips are more efficient per watt, but the watts keep going up. That's not efficiency.
Performance per watt is, I feel, almost to the nose exactly what efficiency means:
a) effective operation as measured by a comparison of production with cost (as in energy, time, and money)
b) the ratio of the useful energy delivered by a dynamic system to the energy supplied to it
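By that definition, CPU efficiency is just useful output per watt of input. A tiny sketch with hypothetical scores and package powers (illustrative numbers only, not measurements):

```python
# Definition (a)/(b) above, applied to a CPU: useful work per watt of input.
# The scores and package powers below are hypothetical, for illustration only.

def perf_per_watt(score, package_power_w):
    return score / package_power_w

stock = perf_per_watt(score=26000, package_power_w=190)  # ~137 pts/W
eco = perf_per_watt(score=24000, package_power_w=88)     # ~273 pts/W
print(f"{eco / stock:.1f}x")  # 2.0x the efficiency at less than half the power
```

Both things can be true at once: the ratio improves sharply while absolute watts at stock settings keep climbing, which is the whole disagreement in this exchange.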


Look at what it can do in Eco Mode:

(attached chart: Cinebench R23 nT in Eco Mode, 11_Eco_Cinebench_R23_nT.png)

In 65 W Eco Mode it still beats a 170-195 W 5950X.
 