Intel CEO Doesn't See Arm-based Chips as Competition in the PC Sector

The difference here is that x86's death has been predicted many times before. Even Intel tried to replace x86 with Itanium, only to find that AMD created 64-bit extensions for x86 and the performance difference was huge.

“When we look at the future roadmaps projected out to mid-2006 and beyond,” Jobs said, “what we see is the PowerPC gives us sort of 15 units of performance per Watt, but the Intel roadmap in the future gives us 70. So this tells us what we have to do.” -- Steve Jobs

"I'm going to destroy Android, because it's a stolen product. I'm willing to go thermonuclear war on this," --Steve Jobs
I'm not sure this one is really predicting the death of x86... as much as the push x86 is likely to get out of specific markets.

I think Pat is still of the mindset that if x86 is still the better option for a desktop or a high-performance workstation, that means Windows stays x86, and holding onto a market like laptops is just a given.

It's not a given, in any way. M1 should have scared the shit out of Pat, but it didn't, because frankly he has his head up Intel's (his own) ass. What should have scared him about M1 is that Apple was selling x86 for desktops and ARM for laptops... and it was no big thing. Users don't care. Who cares if Safari on one is compiled for x86 and on the other is compiled for ARM? It makes no difference to an end user. The thinking 10 years ago was that ARM could never be a thing because companies weren't going to rewrite all their software for ARM... and if they did, what, were they going to abandon x86? It's no longer an either/or situation. The market can support both side by side, no issue.

MS is sort of slowly getting it right. We are basically at a point now where, if you use the MS toolkits as a developer, the vast majority of devs don't have to do anything special to support ARM. So with a combination of most modern developers building software that can go either way, and more and more users not really caring at all, Intel should be concerned about losing the mobile market completely. Pat is very laissez-faire about the entire thing, thinking Windows people still care that they have Intel inside. I suspect in about 5 years Intel's market share in the laptop market will drop under 50% (they are currently at something like 77%). Within a decade they will either have to build their own ARM product or live with single-digit market share in that market. No matter how much they customize, trim, and strip x86 down, it will never match ARM on the same process in terms of efficiency and modularity. ARM as it's currently built is just much easier, hardware-wise, to mesh with co-processing units like AI accelerators, camera chips, and basically anything else. Yes, you can bolt that stuff onto x86, but you can't build it to share cache space and resources the way you can with ARM.
https://learn.microsoft.com/en-us/windows/arm/overview
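To make the "build for both" point concrete, here's a minimal C sketch, purely as an illustration: the same source compiles unchanged for x64 or ARM64 and only reports which target it was built for, using the compilers' standard architecture macros (MSVC's _M_X64/_M_ARM64, GCC/Clang's __x86_64__/__aarch64__). The file name and build setup are my own assumptions for the example, not anything from the MS docs linked above.

/* arch_probe.c - the same source builds for either target; nothing
 * architecture-specific is needed unless you drop to intrinsics or assembly. */
#include <stdio.h>

int main(void)
{
#if defined(_M_ARM64) || defined(__aarch64__)
    puts("Built for ARM64");
#elif defined(_M_X64) || defined(__x86_64__)
    puts("Built for x86-64");
#else
    puts("Built for some other architecture");
#endif
    return 0;
}

The point being: for plain high-level code, retargeting is a compiler switch, not a rewrite.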
 
Everyone loves to mention the past 3 years, but also forgets that the market went nuts for the past 3 years. Yes, Apple got more sales, but not more relative to other manufacturers.
I am not sure how you see a bit over 10% now, under 8% in 2019 and end up saying something like that.
That's the biggest factor at play going forward here, IMO. x86 is still supporting instruction sets from way back in the day. Ultimately the chips will get larger in a world where everything else is getting smaller. ARM is a remedy to that situation.

I'm not a computer engineer, but it's something I had been thinking about when comparing the technologies.
I am not either, but apparently legacy instruction support is quite exaggerated as an issue, or as an explanation for the difference.

Say an instruction that hasn't been much use since the days of the Pentium 200 MMX still exists: how many transistors would you need to support it (if it cannot be translated to the new ones) to offer equal or better performance at it than a Pentium 200 did in the mid-90s? You can pay quite a translation/emulation cost and still be faster, given how ridiculously faster CPUs are now versus back then.

If someone said we could find ways to save 3% with a really hard end of support, because we save some lookup process, a tiny amount of silicon, etc... I could believe it, and it is not nothing, but it is obviously not worth it and not the gap we see, when there is one in either direction, between Arm and AMD/Intel.

A lot of a CPU is cache, not instruction logic. A Pentium 200 had less than 6 million relevant transistors; a 13900K has around 26,000 million transistors. Putting a full-on Pentium in there would be less than 2% of the chip, and I imagine the legacy part is a tiny fraction of that and handled quite well.
 
I think it is a mistake to associate ARM with "low power / mobile". That is certainly where it got its start, but there is nothing about the instruction set that makes it less suited to performance applications or more suited to mobile applications. ARM is just another instruction set. It is the underlying architecture that makes the big difference. Provided with the right underlying architecture, x86 can pretty much do everything ARM does (we saw this with Intel's Moorefield SoCs), and ARM can pretty much do everything x86 can do (as Apple has demonstrated with their M series chips).

Any migration to a new instruction set will of course have its challenges, but the truth is that modern CPUs have proved to be surprisingly receptive to translation layers. I'd imagine that IF the PC market shifts to ARM (I'm not yet convinced it will, but it might) then popular, more recent titles will see ports, and older titles will likely run adequately using some sort of x86-to-ARM wrapper.

It would likely not run as fast as native code, but since it would mostly apply to older titles, you probably wouldn't need it to run as fast either. It's sort of like running old DOS titles in DOSBox. Sure, it loses some efficiency not running natively, but even a CPU in the late '90s was fast enough that this doesn't matter when running '80s and early '90s DOS games.

Heck, it would even make sense that, at least initially, ARM CPUs intended for PC use include some sort of hardware x86 decode stage allowing them to semi-natively handle that process when called upon to run x86 code.

It doesn't seem like this would be too far-fetched, because this is essentially what x86 chips already do: decoding x86 instructions to RISC-like micro-ops.
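As a purely conceptual sketch of that idea (a toy software model I'm making up for illustration, not how any real decoder is implemented), a CISC-style "add to memory" instruction can be cracked into simpler load/add/store micro-ops like this:

/* Toy illustration: one "add [mem], reg" macro-instruction cracked into
 * three RISC-like micro-ops. Real decoders work on binary encodings and
 * emit hardware-internal formats; this is only a conceptual model. */
#include <stdio.h>

typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind kind;
    const char *dst;
    const char *src;
} uop;

static int crack_add_mem_reg(const char *mem, const char *reg, uop out[3])
{
    out[0] = (uop){ UOP_LOAD,  "tmp", mem   };  /* tmp   <- [mem]     */
    out[1] = (uop){ UOP_ADD,   "tmp", reg   };  /* tmp   <- tmp + reg */
    out[2] = (uop){ UOP_STORE, mem,   "tmp" };  /* [mem] <- tmp       */
    return 3;
}

int main(void)
{
    static const char *names[] = { "LOAD", "ADD", "STORE" };
    uop uops[3];
    int n = crack_add_mem_reg("[rbx]", "rax", uops);
    for (int i = 0; i < n; i++)
        printf("uop %d: %-5s dst=%s src=%s\n", i, names[uops[i].kind], uops[i].dst, uops[i].src);
    return 0;
}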

In fact, I vaguely remember this being discussed WAY back when ARM architectures were first hitting the market. There was talk of ARM CPUs serving as the back end, having any number of existing instruction sets decoded to run on them.

In the end, if a new instruction set comes, I just don't see it being a game changer either way. It will be more of an evolutionary transition that won't really stop you from doing anything you are currently doing.

And it could be a good thing. ARM could open up competition to more players, resulting in more options at higher performance. Or it could be a mishmash of incompatible boards and CPUs making things even more of a mess.

Something truly open, like RISC-V, would be even better on the competition side, though some more controls on feature levels would be necessary to ensure binary compatibility.

The important part in this transition is to not allow vendors to go down the road of making everything proprietary, resulting in lock-ins and lock-outs. It is incumbent upon us, the users, to push for cross-compatibility of things like RAM standards/form factors, PCIe, USB, drive form factors, no locked bootloaders, etc. such that the vendors can't pull their bullshit.

Because they will, with 100% certainty, try. The PC market thrives because of cross-compatibility, but every single vendor out there, Intel, Nvidia, Microsoft, and even AMD, benefits from market manipulations that hurt the customer and force them into making purchasing choices based on unnecessary compatibility lock-ins (like how if you bought a G-Sync screen, you now pretty much have to buy an Nvidia GPU, etc.)

We have to fight the industry to prevent the PC market from sliding into a proprietary dystopia. Based on history, I don't have too much confidence consumers will actually succeed...

I only brought up the low-power angle because most of the x86 vs ARM comparisons I've come across online always bring it up.

I've actually done a number of ARM projects for the last company I worked for (now retired) so I'm at least familiar with its architecture. It's perfectly adequate and worked fine for what I was using it for (telephony audio processing). It was cheaper than any of the TI DSPs I was looking at for the projects.

Since I'm an old fart, I think an ARM PC is going to be a non-issue for me.
 
Well, in all fairness to that, where are the high-power ARM CPUs? Apple tops out at around 65W, which is mid at best; if you want 200W and more, you need to look at the big enterprise chips, and not many here have access to that kind of hardware.
So for the 99%, ARM is in cellphones, 15W Chromebooks, ultraportables, some SBCs, and some developer SoCs, all low power, usually in the sub-20W range. So really Apple here is the outlier in terms of power and the consumer space for ARM, where for the overwhelming majority of its life it has been used for ultra-low-power applications.
But yeah, ARM and its stupid-high core counts for heavy transaction servers, telecom, SQL, web servers, payment gateways, and things where over-provisioning the CPU is a bad idea because the network cross-talk side effects are going to leave you with a bad time: ARM is great there and does fantastically well into the 200+ watt range.
 
I am not sure how you see a bit over 10% now, under 8% in 2019 and end up saying something like that.

I am not either, but apparently legacy instruction support is quite exaggerated as an issue, or as an explanation for the difference.

Say an instruction that hasn't been much use since the days of the Pentium 200 MMX still exists: how many transistors would you need to support it (if it cannot be translated to the new ones) to offer equal or better performance at it than a Pentium 200 did in the mid-90s? You can pay quite a translation/emulation cost and still be faster, given how ridiculously faster CPUs are now versus back then.

If someone said we could find ways to save 3% with a really hard end of support, because we save some lookup process, a tiny amount of silicon, etc... I could believe it, and it is not nothing, but it is obviously not worth it and not the gap we see, when there is one in either direction, between Arm and AMD/Intel.

A lot of a CPU is cache, not instruction logic. A Pentium 200 had less than 6 million relevant transistors; a 13900K has around 26,000 million transistors. Putting a full-on Pentium in there would be less than 2% of the chip, and I imagine the legacy part is a tiny fraction of that and handled quite well.

You have a valid point, certainly, although I think regardless of how much additional space it requires to maintain those instruction sets, the bottom line is that it requires additional space to maintain those instruction sets, for better or worse. That's one advantage ARM would offer: you can theoretically keep the die size down, whereas, at least theoretically to my understanding, x86 would essentially continue to grow in size as more instructions are added.
 
From my very limited comprehension, the big revolution of the Pentium versus the 8086 through 80486 was to break down x86 instructions into smaller, Pentium-like sub-instructions with a translator.

At first it was a giant cost, like a third of the chip used for this; by the Pentium 4 days it was quite reasonable, and today it's almost nothing. And once you have that concept, support for the older instructions is probably almost all baked in. That translation step is probably more where the performance cost occurs than in the silicon wasted or the number of instructions being supported.
 
Gelsinger isn't actually delusional. Listen to the whole earnings call and there's context to some of the quotes cherry-picked by the tech sites that is lost in the deliberation here.
 
The ISA overhead itself is not nearly what people think it is. There's a little up-front cost, but then it immediately goes to operations which have zippo to do with the client-facing ISA.
 
I think he is right, at least for the foreseeable future.

For ARM to take over the desktop, it would need as good or better performance, and have compatibility with 100,000 or more applications.

Sure, Microsoft has builds of Windows on ARM, but if my apps don't work it's not even a choice. If Windows makes the app compatible (I don't think it does), there's still the issue of performance. Intel and AMD x86 CPUs can reach 5GHz; what's the clock on the fastest Arm? Less.

Something amazing would have to occur to cause the ground to shift much. It's just another instruction set.
 
Not even that; it would need a cost/benefit ratio that is tricky to achieve. Apple is basically completely vertically integrated so they can say "fuck it, we're gonna wreck it," but Dell, HP, Lenovo, not so much.
 
Sure, Microsoft has builds of Windows on ARM, but if my apps don't work it's not even a choice. If Windows makes the app compatible (I don't think it does), there's still the issue of performance. Intel and AMD x86 CPUs can reach 5GHz; what's the clock on the fastest Arm? Less.
Clock speed isn't everything, and translation layers have made emulation almost a thing of the past between ISAs.
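To put a rough number on why raw clock doesn't settle it, here's a back-of-the-envelope C sketch using the standard throughput ≈ IPC x clock relation; the IPC and clock figures below are made-up placeholders for two hypothetical chips, not measurements of anything real.

#include <stdio.h>

int main(void)
{
    /* Hypothetical chips - the numbers are placeholders, not benchmarks. */
    double x86_ipc = 4.0, x86_ghz = 5.0;   /* higher clock, narrower core */
    double arm_ipc = 6.0, arm_ghz = 3.5;   /* lower clock, wider core     */

    /* Rough throughput in billions of instructions per second. */
    printf("hypothetical x86: %.1f GIPS\n", x86_ipc * x86_ghz);  /* 20.0 */
    printf("hypothetical ARM: %.1f GIPS\n", arm_ipc * arm_ghz);  /* 21.0 */
    return 0;
}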
 
I'm not sure this one is really predicting the death of x86... as much as the push x86 is likely to get out of specific markets.
The only market x86 has is the desktop/server market, and if it's pushed out of that, then x86 is dead.
I think Pat is still of the mindset that if x86 is still the better option for a desktop or a high-performance workstation, that means Windows stays x86, and holding onto a market like laptops is just a given.
It's the same reason why ARM is the dominant CPU for Android: because of legacy.
It's not a given, in any way. M1 should have scared the shit out of Pat, but it didn't, because frankly he has his head up Intel's (his own) ass. What should have scared him about M1 is that Apple was selling x86 for desktops and ARM for laptops... and it was no big thing. Users don't care. Who cares if Safari on one is compiled for x86 and on the other is compiled for ARM? It makes no difference to an end user.
The problem is that it does. You're not just asking developers to put in a lot of work, but also end users to deal with problems. If an application doesn't work on your computer, then what do you do? You make a video on the internet to try and appeal to Blizzard to port their game to ARM MacOS. There's a reason why Apple is losing sales and why they suddenly care about gaming on MacOS.

View: https://youtu.be/MkjnZpM3qAc?si=G_HqX8U3qj5S8pYn
The thinking 10 years ago was that ARM could never be a thing because companies weren't going to rewrite all their software for ARM... and if they did, what, were they going to abandon x86? It's no longer an either/or situation. The market can support both side by side, no issue.
Keep in mind that 10 years ago ARM didn't even have 64-bit, which is a problem since a lot of Android apps refuse to work on armeabi-v7a. You're looking at a chicken-or-egg situation. Apple's ARM-based MacBooks aren't enough to sway the industry to suddenly retool thousands of lines of code to support a very small subset of MacOS users. Apple is trying, but their end users are still dealing with incompatibilities and performance issues because of this. It's not like Microsoft hasn't tried to push for ARM in the past, and failed. Especially when the main advantage of ARM is power efficiency, and that advantage is lost. I said 3 years ago that if you give AMD 2 to 3 generations, they would catch up to Apple, and they did. Very likely Intel will as well, as soon as they release Meteor Lake. So who in their right mind is going to switch to ARM to deal with all the problems and incompatibilities?
MS is sort of slowly getting it right. We are basically at a point now where, if you use the MS toolkits as a developer, the vast majority of devs don't have to do anything special to support ARM.
As a Linux user who deals with that sort of problem all the time, it's never that simple. If you really want to use ARM today, it isn't on Windows or Mac OSX but on Linux. You'll likely find more working software and tools to help make it a daily driver on Linux. The problem is that a lot of the other stuff that's associated with ARM, like graphics, doesn't usually work so well. You're not just dealing with the ARM CPU, but also the Qualcomm graphics drivers, or Apple's lack of adoption of Vulkan. Intel is pretty much the standard with things like Thunderbolt, which a lot of Apple users are finding still works better on Intel-based Macs than on Apple Silicon Macs. Or how the base model M2 MacBooks don't support dual monitors but a 2012 Intel-based MacBook does. It's easy to discredit Intel when you just look at them as x86, but they do a lot of stuff for the industry beyond x86. If you buy an Intel today, you're buying years of experience with networking, sound, etc. Stuff you'd think Apple has gotten right on their M-series, but lots of people are having problems.
I suspect in about 5 years Intel's market share in the laptop market will drop under 50% (they are currently at something like 77%).
My prediction is that by the end of the decade, you'll see Apple making laptops with Intel chips again. My prediction was right about AMD with their Dragon Range chips. There's no reason to buy an ARM-based laptop in 2023. What we're going to see is Apple, AMD, and Intel playing leapfrog. When Apple releases their M3-based MacBooks, they'll be top dog. Until Intel releases Meteor Lake, which will be the new top dog. Then at some point AMD will release a new chip and it'll be the new top dog.

The problem is that Apple is already behind in things like graphics, which in my opinion matters far more than just the CPU. Apple only now adding things like mesh shaders and ray tracing is basically 2019 for the Windows PC industry. That technology gap will just continue to grow, because neither Apple nor Qualcomm is at the same level as AMD, Intel, and Nvidia. As dysfunctional as you think Intel is, they technically have better graphics than Apple does, and I don't mean just performance. Intel had AV1 encoding and now the M3 is going to get it. Nobody tested M2-based devices with AV1, but I bet they will now with the M3. Nobody tested AMD's Rembrandt or Dragon Range chips against Apple's M2 chips. Linus Tech Tips, who seem to forgo proper testing, will likely not do much of any battery testing with the new M3s against AMD's Dragon Range chips. LTT will just show you an image of an RTX 4090 running Furmark and remind you how bad PC power consumption is. The only source I can find that actually did any testing is notebookcheck.net, which is basically the dumpster bin of PC benchmarks. As far as I'm concerned, the only reason the industry still thinks ARM is worth the time on desktops and laptops is because people still think ARM has a battery advantage. The lack of testing, let alone proper testing, is telling. Intel, though, will likely pay for tests to be done on Meteor Lake; it's sad that I'm looking forward to sponsored videos from Intel.
 
A lot of a CPU is cache, not instruction logic. A Pentium 200 had less than 6 million relevant transistors; a 13900K has around 26,000 million transistors. Putting a full-on Pentium in there would be less than 2% of the chip, and I imagine the legacy part is a tiny fraction of that and handled quite well.
Not sure if I follow the math on this. P200 = 6 million t. 13900K = 26 million t. So how is the P200 only 2%?
 
I don't know the transistor counts myself. But based on Luke's given math:
6/26,000 ≈ 0.00023, or about 0.023%.
Which is actually a tiny fraction of a single percent.

EDIT: a quick search estimates the 13900K at 26 billion, with a "b", transistors (25.9). Based on that the math seems right. Though I didn't look up Pentium 200 transistors.

Not that my commentary is necessary, but I tend to think that this legacy code and how it's integrated into x86 chips is a fair bit more complicated than this. But whatever.
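Just to pin down the orders of magnitude in one place, here's the same division as a quick C sketch; the ~6 million and ~25.9 billion figures are the rough counts quoted in this thread, not authoritative numbers.

#include <stdio.h>

int main(void)
{
    /* Rough transistor counts quoted in this thread (not authoritative). */
    double pentium_200 = 6.0e6;     /* ~6 million    */
    double core_13900k = 25.9e9;    /* ~25.9 billion */

    double share = pentium_200 / core_13900k;
    printf("Pentium 200 share of a 13900K: %.4f%%\n", share * 100.0);  /* ~0.0232% */
    return 0;
}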
 
Posting anything Whoopi-related should be a bannable offense! (TNG is the only exception)
 
I don't know the transistor counts myself. But based on Luke's given math:
6/26,000 ≈ 0.00023, or about 0.023%.
Which is actually a tiny fraction of a single percent.

EDIT: a quick search estimates the 13900K at 26 billion, with a "b", transistors (25.9). Based on that the math seems right. Though I didn't look up Pentium 200 transistors.

Not that my commentary is necessary, but I tend to think that this legacy code and how it's integrated into x86 chips is a fair bit more complicated than this. But whatever.
Okay, got it. He missed a bunch of zeros. Mill vs. Bill! All good.
 
Gelsinger isn't actually delusional. Listen to the whole earnings call and there's context to some of the quotes cherry-picked by the tech sites that is lost in the deliberation here.
Man, all the tech sites are clickbait fests now; they are bad for that.
Still, someone should probably tell Pat to craft his words so he doesn't end up with too many Ballmerisms slipping out. Steve often didn't sound as crazy in context either... but you have to steer clear of those 30-second diatribes that get clipped. Then again, if the alternative is everyone speaking word salad, maybe we should cut him a bit of slack.

Pat has always come off to me as very high on Intel's supply; he has been saying delusional-sounding things since the day he took the job. I mean, he was also delusional about Intel being king during his first run at Intel as well... really, he is the architect of their face-plant during the Ryzen years. He set their fabs back damn near a decade in his first run, and then, when everyone said get a CEO who knows these things, they hired him?? I actually don't have much faith in Intel under Pat. I suspect their fab business is still going to face-plant... and is going to suck up a lot of government funds to stay relevant. Thankfully for Pat and Intel, the US government will not allow Intel to fail too hard.
 
Not sure if I follow the math on this. P200 = 6 million t. 13900K = 26 million t. So how is the P200 only 2%?
I am not sure how I went from 6/26,000 = 2% (instead of 0.02%); I probably made the conversion to percentage two times.
Not that my commentary is necessary, but I tend to think that this legacy code and how it's integrated into x86 chips is a fair bit more complicated than this. But whatever.
Yes, what I described would be a terrible worst-case scenario, just to put things in perspective. From my understanding there is a translation layer from instructions to a subset of instructions that occurs, legacy or not.
 
Okay, got it. He missed a bunch of zeros. Mill vs. Bill! All good.
Not to dig the trench further, but... I think you just didn't read what followed, because he did:

A lot of a CPU is cache, not instruction logic. A Pentium 200 had less than 6 million relevant transistors; a 13900K has around 26,000 million transistors. Putting a full-on Pentium in there would be less than 2% of the chip, and I imagine the legacy part is a tiny fraction of that and handled quite well.
He states "6 million" vs. "26,000 million".
 
Brings back some memories. Only 15-17 years ago now, but still memories. I had one of the old iPod Nanos. Not enough capacity, but it was a pretty good music player. Also, one of my first smartphones was a Windows one, IIRC. It sucked... barely operable, lol. Didn't have a keyboard either way, not that I know who can actually use one of those tiny keyboards on a BlackBerry.
It's not comparable to typing on a keyboard but it was better than a touchscreen :(.
 

I unsubscribed from Coreteks when he got upset about the whole Linus Tech Tips vs Gamers Nexus situation, where he had a run-in with Gamers Nexus and didn't like what they had to say to him. His videos are more sensationalism than tech. Something to keep in mind is that AMD was working on ARM chips way back when they made Ryzen, and nothing happened. If AMD is working on ARM chips, then for what? Nvidia used to make ARM chips and then stopped, because the market was too fierce. Still got my 2014 Nvidia Shield running LineageOS 15.1 on it. In 2023 you won't see a new ARM chip without Apple, Samsung, and Qualcomm pushing out the competition. I guess everyone assumes this will be on the desktop/laptop market, where again Apple is losing sales. Qualcomm and Microsoft tried this in the past, and also failed. Coreteks also fails to mention Intel's upcoming Meteor Lake, as if he had a memory lapse.

To give you an idea, Coreteks said Apple gained 10% market share since they introduced ARM, but that's also when the pandemic started. Meanwhile HP, Dell, and Lenovo gained much more than that, with x86. Everyone grew in market share, but Apple for the past year has had the biggest decline in sales. Wendell from Level1Techs is excited about AMD's upcoming AI-driven chips, which are based on x86. Even Intel looks more impressive to him than Nvidia's ARM-based Grace Hopper.


View: https://youtu.be/IhlL1_z8mCE?t=1231
 

AMD embeds ARM cores into the EPYC and Threadripper Pro series; they run the embedded security and management functions, which operate as their own mini OS within the chip.
And Nvidia still sells an absolute shitload of ARM cores; industrial automation is no joke, and it's almost all Nvidia hardware.
But AMD and Intel are doing much cooler things than Nvidia with packaging (which is where the Blackwell stuff changes that), but shit on Nvidia's packaging all you want, it's still 2-3 times faster than AMD or Intel in the bad situations, and still upwards of 100x faster in the best ones. AMD and Intel have demoed new hardware at Computex, but it's still not here, and they were talking about how it was going to get close to Nvidia, only for Nvidia to say "hold my beer," do a software update, and double performance. And Nvidia is already putting out the replacement for that hardware, which is looking to be almost 2x faster again.

So here we are, with AMD and Intel pushing an open but woefully incomplete software stack, while Nvidia is churning along with an almost two-generation lead on hardware and a complete software stack to support it, with new features and functions being added frequently.

AMD and Intel need to do better.
 
Obviously Apple does not sell their silicon, but Apple commands a decent market share in the PC space, and their chips are quite amazing for what they can do in a very small power envelope. Intel needs to be really careful. ARM chip development is no joke.
 
And Nvidia still sells an absolute shitload of ARM cores; industrial automation is no joke, and it's almost all Nvidia hardware.
Yes, but not to end users. The server market is kind of different in that they'll go with whoever has the better deal, because they don't mind rewriting thousands of lines of code if it means they save money. Not to forget that they aren't after Nvidia for ARM, but for their AI hardware. ARM is just convenient because it gives Nvidia broader reach. Nvidia would have gone with x86 if they could get a license, but they can't.
But AMD and Intel are doing much cooler things than Nvidia with packaging (which is where the Blackwell stuff changes that), but shit on Nvidia's packaging all you want, it's still 2-3 times faster than AMD or Intel in the bad situations, and still upwards of 100x faster in the best ones. AMD and Intel have demoed new hardware at Computex, but it's still not here, and they were talking about how it was going to get close to Nvidia, only for Nvidia to say "hold my beer," do a software update, and double performance. And Nvidia is already putting out the replacement for that hardware, which is looking to be almost 2x faster again.
Yes, but that's with AI and not specific to ARM. Nvidia boosted their AI performance with a software update, which they likely had lying around but were saving for a rainy day, and that day had come. Meanwhile the industry doesn't even know if the GPU is gonna be the future of AI.
So here we are, with AMD and Intel pushing an open but woefully incomplete software stack, while Nvidia is churning along with an almost two-generation lead on hardware and a complete software stack to support it, with new features and functions being added frequently.
Keep in mind that software update from Nvidia wouldn't have been released if it wasn't for AMD and Intel pushing harder to catch up to Nvidia with AI.
AMD and Intel need to do better.
For AI, yes. For everything else, you won't see an Nvidia ARM-based SoC on a Windows-based device. If you do, then nobody will buy one.
 
Yes, but not to end users. The server market is kind of different in that they'll go with whoever has the better deal, because they don't mind rewriting thousands of lines of code if it means they save money. Not to forget that they aren't after Nvidia for ARM, but for their AI hardware. ARM is just convenient because it gives Nvidia broader reach. Nvidia would have gone with x86 if they could get a license, but they can't.
Don't write off tablets, Chromebooks, and upcoming cheap Windows 11 ARM devices.
Consumers are overwhelmingly switching to ARM; more homes have cellphones and tablets than x86 PCs or laptops, by far. The consumer need for a traditional computer has been shrinking annually for years, and the market tends to reflect that.
Yes, but that's with AI and not specific to ARM. Nvidia boosted their AI performance with a software update, which they likely had lying around but were saving for a rainy day, and that day had come. Meanwhile the industry doesn't even know if the GPU is gonna be the future of AI.
Keep in mind that software update from Nvidia wouldn't have been released if it wasn't for AMD and Intel pushing harder to catch up to Nvidia with AI.
It was a pretty big update, with features they had been advertising as in development for a while, along with optimizations they had similarly been working on and assuring customers were coming. It wasn't a case of them changing 2 variables in the code somewhere; it was extensive changes to large parts of many of their LLM modules and CUDA libraries.
For AI, yes. For everything else, you won't see an Nvidia ARM-based SoC on a Windows-based device. If you do, then nobody will buy one.
I'm not so sure about that, most people buy their laptops based on arbitrary numbers and the $$$ they see on the sticker next to them at a Costco, Walmart, or Staples and use it more for the form factor than the power behind it. Turn on fast...Check, works with email and word processor... check, Facebook/TikTok/Youtube/Twitch... check.

They make sure it does that and can run Fortnite, Minecraft, and a few other games primarily from 2013 and before, and they have like 90% of the market covered if they can do it for under $600.
 
I'm not so sure about that, most people buy their laptops based on arbitrary numbers and the $$$ they see on the sticker next to them at a Costco, Walmart, or Staples and use it more for the form factor than the power behind it. Turn on fast...Check, works with email and word processor... check, Facebook/TikTok/Youtube/Twitch... check.

Like many here, we are the help desk to our friends and family, and the above is 100% true in my circle if you add battery life.
 
Yes, but that's with AI and not specific to ARM. Nvidia boosted their AI performance with a software update, which they likely had lying around but were saving for a rainy day, and that day had come. Meanwhile the industry doesn't even know if the GPU is gonna be the future of AI.
Keep in mind that software update from Nvidia wouldn't have been released if it wasn't for AMD and Intel pushing harder to catch up to Nvidia with AI.
That is quite the speculation. If you are talking about this:
https://www.extremetech.com/computi...ference-performance-on-h100-with-new-software
https://developer.nvidia.com/blog/n...-language-model-inference-on-nvidia-h100-gpus

Look at the list of companies involved (from Facebook to MosaicML); maybe they would still have done it without Nvidia's help and put it on GitHub themselves:
https://github.com/NVIDIA/TensorRT-LLM

The idea that they had it all along, lied to that long list of partners, and let them work on something already done, about tech that is incredibly new, is a bit of a conspiracy-theorist reflex.

Take this statement for example:
When using the LLM Llama 2, made by Meta, Nvidia says TensorRT-LLM provides a 4.6X uplift compared with a single A100 or 2X uplift for an H100 without the LLM software.

LLaMA-2 was released to the public in July 2023; the H100 GPUs were released in March 2022.

The idea that it was competition from AMD/Intel that pushed that kind of collaborative effort, and not competition from internal solutions at the likes of Facebook and Microsoft (or that those would not have been enough), is also quite speculative.
 
Don't write off tablets, Chromebooks, and upcoming cheap Windows 11 ARM devices.
Tablets have slowly been phased out in favor of phones with bigger screens or just laptops. Chromebooks are only good for web browsing, since that's all you get with a Chromebook. Windows 11 ARM won't have more success than Windows 10 on ARM did. Also, calling Windows 11 ARM devices cheap is a stretch.
Consumers are overwhelmingly switching to ARM; more homes have cellphones and tablets than x86 PCs or laptops, by far.
Consumers don't need to replace older computers because they just work. Phones and tablets are volatile devices, meaning that they eventually break from drops, water, or other stupid things. You'll likely find someone still using an i5 2500K with a GTX 970 and perfectly happy, while their phone is an iPhone 15 Pro because their iPhone 14 was running slow and the battery life wasn't good anymore. Just the nature of something that's portable.
I'm not so sure about that, most people buy their laptops based on arbitrary numbers and the $$$ they see on the sticker next to them at a Costco, Walmart, or Staples and use it more for the form factor than the power behind it. Turn on fast...Check, works with email and word processor... check, Facebook/TikTok/Youtube/Twitch... check.
Everything you just listed can be done on a 2013 laptop or any cell phone made in the past several years. Chromebooks can do that, but Chromebooks are meant for a student in middle school or an old person who wants to watch videos from whatever eastern European country they're from. The moment you want to do anything of actual value, it's trash. That's when you get out your Ubuntu ISO and replace ChromeOS.
They make sure it does that and can run Fortnite, Minecraft, and a few other games primarily from 2013 and before, and they have like 90% of the market covered if they can do it for under $600.
Can you even play Fortnite and Minecraft on ChromeOS? You could install Minecraft from the Play Store, but it sounds like it's worse than installing it on GNU/Linux. You can't play Fortnite on ARM Windows, while Minecraft is possible but also complicated. You'd think that Minecraft, which is owned by Microsoft, would work perfectly fine on ARM Windows. I know Fortnite and Minecraft work fine on ARM MacOS.


View: https://youtu.be/LvGKLeqZW6A?si=R9leX2GJvN-7940M
 
Chromebooks are only good for web browsing, since that's all you get with a Chromebook.
Don't you ever get tired of being wrong on this subject? ChromeOS has a remote desktop client, and on at least some Chromebooks you can install Linux and have access to a shell. That gives you the ability to do almost anything from almost anywhere. (I just tried the remote desktop, and it actually worked, although 5120x1440 doesn't work too well on an 11" screen, but temporarily turning one monitor off and reducing the resolution on the other would've made it usable).
 
Not to mention the Google Play store and most of that library as well as some of the alternate stores.
 
Yeah, although I wasn't going to mention that in case he was like "it's just games".

You can get a Linux shell in Android, too. I keep meaning to see if GCC or X servers exist for the Linux under Android.
 
Don't you ever get tired of being wrong on this subject? ChromeOS has a remote desktop client,
What are you going to do with a remote desktop client? Connect to a real computer? There are lots of cheap computers that are real computers.
and on at least some Chromebooks you can install Linux and have access to a shell.
Yep, I've done it, and it isn't that easy in most cases. The Chromebook I did this to required me to flash a modified BIOS and then command ChromeOS through a shell to be able to install Ubuntu.
Not to mention the Google Play store and most of that library as well as some of the alternate stores.
Then you have a crappy tablet or phone, not a desktop computer. There's a reason why tablet sales have "Declined Sharply" for a while now. It's not like these Chrome devices have a lot of storage. Most Chromebooks are just tablets with a built-in keyboard that run an OS worse than Android.

https://www.idc.com/getdoc.jsp?containerId=prUS51117823
 
Then you have a crappy tablet or phone, not a desktop computer. There's a reason why tablet sales have "Declined Sharply" for a while now. It's not like these Chrome devices have a lot of storage. Most Chromebooks are just tablets with a built-in keyboard that run an OS worse than Android.

https://www.idc.com/getdoc.jsp?containerId=prUS51117823
All that shows is that everybody bought a new one during Covid and hasn't had a reason to replace it yet. Which is true for pretty much the entire electronics industry right now, which is why everything is "declining sharply," as you have posted repeatedly.
If you keep saying it, though, maybe it might become true at some point, so keep at it.
 
Chromebooks are only good for web browsing, since that's all you get with a Chromebook.
Not quite; while no one will be doing serious gaming or video editing work on them, for a majority of remote workers who only need them for browsing, Google Docs, and a plethora of other work-at-home/remote apps, they more than get the job done, and on a budget.
They can also be set up and managed via Google's enterprise options, which makes deploying them extremely quick and efficient compared to Windows laptops and workstations, especially for lighter workloads.

Windows 11 ARM won't have more success than Windows 10 on ARM did. Also, calling Windows 11 ARM devices cheap is a stretch.
ARM has come a long way since Windows 10 released in 2015; back then a majority of the ARM laptops were all 32-bit and ran with Microsoft's then-garbage emulation.
Considering the strides in general-purpose processing and the move to 64-bit, ARM is positioned to easily overtake x86-64 on the low-to-medium spectrum of laptops, and that may very well extend to higher-end laptops and workloads as support for the ISA widens via software and OSes.

This is no longer an 'if' for ARM; it is a 'when,' and it will come much sooner than expected in numerous markets outside of ultra-mobile and server/data center.
 
All that shows is that everybody bought a new one during Covid and hasn't had a reason to replace it yet. Which is true for pretty much the entire electronics industry right now, which is why everything is "declining sharply," as you have posted repeatedly.
If you keep saying it, though, maybe it might become true at some point, so keep at it.
Yes, but Apple is declining faster than any other manufacturer. About a 34% decline year over year. You could attribute this to the M2 being a minor upgrade over the M1, which is probably why Apple is comparing the M3 to the M1 and not the M2. I think it's because users got sick of using a laptop that is not only less compatible with older 32-bit applications, but with Windows applications as well.



ARM has come a long way since Windows 10 released in 2015; back then a majority of the ARM laptops were all 32-bit and ran with Microsoft's then-garbage emulation.
Considering the strides in general-purpose processing and the move to 64-bit, ARM is positioned to easily overtake x86-64 on the low-to-medium spectrum of laptops, and that may very well extend to higher-end laptops and workloads as support for the ISA widens via software and OSes.
OK, but why? What reason does a consumer have to go ARM on a Windows device? ARM being 32-bit only was never a problem, because most devices in 2012 rarely had more than 4 gigs of RAM anyway. Windows still has garbage emulation, but now it's included. When Windows RT was released it only allowed you to get apps from the store, and there was zero x86 emulation. With Windows 10 and Surface Pro devices you got sideloading and emulation, but nobody bought them. Turns out few peripherals, like printers and scanners, had working drivers. This is absolutely a problem with Windows 11 ARM.

You could get an AMD 7840U and get the same battery efficiency as an M2-based device with 100% compatibility and performance. No emulation, no driver problems, and all applications just work.
This is no longer an 'if' for ARM, it is a 'when' that will be much sooner than expected in numerous markets outside of ultra mobile and server/data center.
I said 3 years ago that x86 would catch up to ARM in power efficiency, and so far we have AMD's Dragon Range and soon Intel's Meteor Lake. Any chance ARM had of taking a slice out of the desktop market is now lost. Every new generation of x86 will just get better at this. Meanwhile, you'll see the opposite with Apple's M-series. I bet the M3 will have less battery life than the M2, just as the M2 was worse than the M1. ARM lost its chance in the desktop market.
 