The apple haters will love this news, while ripping Apple ARM in the same breath.

Apple's biggest problem right now is their GPU architecture. The ARM cores are good enough; rarely are they the current bottleneck. But their GPU package is anemic for anything not tailored directly for it.
If Apple can make some advances in their GPU, that would be a great help to the platform.
Easier to sign a license agreement with Nvidia and get their GPUs.

How about a source-code-compatible version of CUDA?
(not gonna happen...)
Apple has a huge advantage that generic platforms such as x86 or ARM do not have.
Apple's performance advantages have come as a result of TSMC's processes and an OS designed and optimized as much as possible for that architecture. But stray from the path and things get murky, and Tim forbid you find yourself out in the weeds; good ducking luck.
Frame Generation is a new feature of a new DLSS version, but existing features keep working on older cards with newer DLSS versions; DLSS Frame Generation != DLSS 3.
They are a vertically integrated company (another example is Nvidia and CUDA).
No generic ARM design can hope to come within touching distance of a vertically integrated hardware/software stack.
Apple will keep on 'innovating' and creating new hardware and new APIs that take advantage of the new hardware. They don't care about backward compatibility, and the devs have absolutely no say in this. (It is just like how Nvidia forces you to buy a new GPU every time a new DLSS version is released.)
I could see an ARM PC if I were only browsing, doing email, and word processing, but I have way too many games to want to switch to an ARM CPU. I also hardly ever use a laptop, so the energy efficiency of ARM has no relevance to me.

Nvidia begins developing Arm-based PC chips in challenge to Intel
NVIDIA Corporation (NASDAQ:NVDA) has quietly started designing central processing units, in a move that fires a shot across the bow at Intel Corporation (NASDAQ:INTC), according to reporting from Reuters.
The ARM-to-x86 translation layers are coming along nicely. x86 is very well documented and ARM is very flexible.
But Nvidia and ARM is hardly new, if you count the server space.

Or the manufacturing or robotics industries.
Turns out, Apple can't just steal... I mean, license Imagination's PowerVR GPU design and ride that for very long. It takes a lot of money and time to make a proper modern GPU.
That's what happened when Apple first introduced their M1 chips: they used 5nm while AMD was on 7nm and Intel was on maybe 14nm? Turns out chip manufacturing matters a lot.
Let me know when that vertical thing happens for Apple. Most developers still use MoltenVK for Apple's Metal API. Also, CUDA for Nvidia is great so long as you buy Nvidia; the moment you want to jump to a competitor's product, that CUDA advantage becomes a hindrance.
You don't buy a MacBook to play video games on. For everything other than video games, they are completely superior.
Well, for Blender, Adobe apps, and numerous video, art, and design tools, the GPU there does well enough, but it's not what I would call great.
MoltenVK adds at worst a 2% overhead.
And most current development tools have Metal built in, because those multi-platform developer tools don't have you writing the same call in three different graphics APIs; you write it once and it's all translated in the toolset's back end.
But similarly, almost nobody programs in Vulkan directly; most use Logi. And most don't do DX12 directly either; for that they use Link.

Very few developers are working in the native low-level APIs; they are using wrappers upon wrappers upon wrappers.
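The "write it once, translate it in the back end" idea those wrappers rely on can be sketched in a few lines. This is only an illustration of the dispatch pattern, assuming made-up backend classes and call names; it is not how MoltenVK or any real wrapper is actually implemented.

```python
# Sketch of a multi-API graphics wrapper: the app codes against one
# interface, and a backend translates to the platform's native API.
# Backend classes and the strings they return are invented for illustration.

class VulkanBackend:
    def draw(self, mesh):
        return f"vkCmdDraw({mesh})"        # stand-in for a real Vulkan call

class MetalBackend:
    def draw(self, mesh):
        return f"drawPrimitives({mesh})"   # stand-in for a real Metal call

class D3D12Backend:
    def draw(self, mesh):
        return f"DrawInstanced({mesh})"    # stand-in for a real D3D12 call

class Renderer:
    """One call site in the app; the chosen backend does the translating."""
    _backends = {"vulkan": VulkanBackend, "metal": MetalBackend, "d3d12": D3D12Backend}

    def __init__(self, api):
        self.backend = self._backends[api]()

    def draw(self, mesh):
        return self.backend.draw(mesh)

print(Renderer("metal").draw("triangle"))  # same app code, any of three APIs
```

The per-call cost of this indirection is tiny, which is consistent with the low-single-digit overhead figures quoted above for MoltenVK.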
What the hell are Logi and Link?

They were the popular wrapper APIs for Vulkan and DX12, though it seems they have been supplanted since, so I may have some updates to consider for a few of the labs.
You'll find Vulkan and DX12 renderers everywhere. Go pick your favorite emulator and there's probably a Vulkan renderer at this point.
Logi and Link are the wrappers used by a lot of development studios, so instead of writing Vulkan and DX12 directly you're writing to those APIs.
Looks like Logi has been supplanted by VKFS.
https://github.com/MHDtA-dev/VKFS
Not interested until they show 100% compatibility with x86, AMD64, and Intel 64 with minimal performance loss in actual testing.
Microsoft's own efforts here already have something like an 80% mapping with a 2% overhead, so it's hardly a far-fetched idea that this could happen in the next couple of years in a near-seamless manner.
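The idea behind those translation layers can be caricatured as a table of per-instruction rewrite rules. The mini-ISAs below are invented for illustration; real translators like Rosetta 2 or Microsoft's x64 emulation do far more (register allocation, flags, memory ordering), but the core mapping idea looks like this:

```python
# Toy sketch of x86-to-ARM binary translation. The instruction names and
# encodings are hypothetical; this only illustrates that each documented
# x86 operation can be rewritten as an equivalent ARM sequence.

X86_TO_ARM = {
    # x86 add is two-operand (dst += src); ARM add is three-operand.
    "mov":  lambda dst, src: [f"mov {dst}, {src}"],
    "add":  lambda dst, src: [f"add {dst}, {dst}, {src}"],
    # x86 push has no single ARM equivalent: store with pre-decrement.
    "push": lambda src: [f"str {src}, [sp, #-16]!"],
}

def translate(x86_program):
    """Translate a list of (opcode, *operands) tuples into ARM-like text."""
    arm = []
    for opcode, *operands in x86_program:
        if opcode not in X86_TO_ARM:
            raise ValueError(f"no translation rule for {opcode!r}")
        arm.extend(X86_TO_ARM[opcode](*operands))
    return arm

program = [("mov", "x0", "#1"), ("add", "x0", "#2"), ("push", "x0")]
print(translate(program))
```

The hard part in practice is not the mapping table but the corner cases (self-modifying code, flags semantics, memory models), which is where the remaining percentage points of compatibility go.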
Like the GIF. But MediaTek makes some solid ARM SoCs that are well documented and have good compatibility.
Every MediaTek or Exynos or Unisoc I've ever used is crap compared to the Qualcomm counterparts in smoothness and performance (Exynos would be second best, and the rest are tied for last place).

Still, maybe Nvidia can whip the Dimensity series into shape. Nvidia knows a thing or three about making ARM CPUs and GPUs; MediaTek brings the manufacturing and distribution. MediaTek's existing stuff may not be top of the line, but they sell more of it than anybody else.
If Nvidia manages to pull this off, then the nascent handheld gaming market, dominated by AMD, could see stiff competition.

I'm all for Nvidia bringing back their Tegra products, but they failed for a reason. They had games ported to their products, but most of them were from over a decade and a half ago. Nvidia stepped out because Apple, Qualcomm, and Samsung were extremely competitive; Nvidia's ARM chips were slower and consumed more power. DLSS won't do anything since they can all use FSR. Also, if Valve had wanted to use an ARM SoC for the Steam Deck, they would have done so. There's a reason why Valve used AMD and not Qualcomm.
The incentive for Nvidia is to get technologies such as DLSS 3.5 into the handheld gaming market.
In what way has Tegra failed?
When was the last time you saw a tablet or smartphone with an Nvidia SoC in it? Other than the Nintendo Switch and cars, it's mostly dead. Nvidia had to find niches for their SoCs because the ARM market is fierce and saturated. This is why AMD still hasn't made their own ARM-based chips: who would buy them? Apple makes their own chips. Samsung makes their own chips and sometimes buys from Qualcomm. And Qualcomm has patents that are still causing issues.
Nvidia sells a crapload of them; they absolutely dominate that market space.
I have 6 automatic floor burnishers that are powered by Tegra.
ARM and what we now call RISC do the same thing.
I mean, Nvidia has had ARM based development boards around forever. A decade? I forget what they call them. Jetson? Is that it?
I think moving the industry to ARM is a mistake, for a few reasons.
1.) Instruction set is technically irrelevant.
You can build a power-efficient x86 chip or you can build a bloated and inefficient ARM chip. Yes, x86 does have a built-in performance penalty compared to more RISC-like designs because it needs a decode stage to decode the instructions and turn them into RISC-like micro-ops, but the impact of this decode stage is less and less relevant every day.
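That decode stage can be caricatured in a few lines: a CISC-style instruction that touches memory gets "cracked" into RISC-like micro-ops. The mnemonics and the exact three-op split here are invented for illustration; real decoders are vastly more complex.

```python
# Toy illustration of cracking a CISC-style instruction into RISC-like
# micro-ops, as an x86 front end does. Instructions are (opcode, dst, src)
# tuples; a bracketed destination means a memory operand.

def crack(instruction):
    """Split 'add [addr], reg' style ops into load / add / store micro-ops."""
    op, dst, src = instruction
    if op == "add" and dst.startswith("["):
        addr = dst.strip("[]")
        return [
            ("load", "tmp", addr),    # read the memory operand into a temp
            ("add", "tmp", src),      # do the arithmetic on registers
            ("store", addr, "tmp"),   # write the result back to memory
        ]
    return [instruction]  # register-register ops pass through unchanged

print(crack(("add", "[0x1000]", "rax")))  # memory-operand add: three micro-ops
print(crack(("add", "rbx", "rax")))       # register add: no cracking needed
```

The point of the sketch: once everything downstream of decode operates on micro-ops, the back end of an x86 core looks much like any RISC back end, which is why the "decode tax" keeps shrinking as a fraction of the whole chip.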
Anyone remember the Asus Zenfone 2?
It was a pretty damn decent mid range Android phone.
Oh, and it was powered by an Intel Moorefield Atom x86 quad-core SoC. I had one. It performed great, and power-wise it had just as usable battery life as its ARM-based brethren. I bought it as a disposable phone when I was in Brazil (just in case I got robbed; gringos tend to get robbed in Brazil), but I wound up keeping it when I got home, as I liked it more than the Motorola Droid Turbo I had at the time. The software was so-so and bloated, but custom ROMs fixed that. The hardware was excellent (except for maybe the camera).
And this was Intel's first foray into phones. Had they determined that it was worth their investment, I bet they could have improved it even more, but it turns out they really liked the profit margins in the PC space, which (at the time, in 2014) they still more or less controlled and where they could charge whatever they wanted. It was a bit of a blunder on their part not to push harder, though knowing what we know now about the failure of their 10nm process, the effort would likely have died when they couldn't provide a competitive process anyway.
Instruction set pretty much only matters for binary compatibility these days. You can do almost anything, big or small, with almost any instruction set, provided you have enough bits for addressing.
What really matters is the underlying architecture of the chip design that supports that instruction set. That - more so than instruction set - is what determines whether a chip performs well or is efficient enough. From this technical-ability perspective, instruction set (x86 vs ARM vs RISC-V vs others) is really just a distraction.
2.) So why be interested in instruction set then?
Because it is all about control.
Instruction set is legally important. Who owns the intellectual property? Can they use it to lock the competition out, or lock their customer base in, or do any other borderline illegal market manipulations?
Intel has been doing this for ages, suing everyone and everything in order to maintain control over the PC market. They got so obsessed with it that they got distracted and really missed the whole mobile (or well, mobile smaller than laptops) and embedded IoT markets.
If the market as a whole is open to a shift in instruction set, using it to move from x86 to ARM is a terrible waste of that opportunity, at least from the consumer's perspective. We'd be moving from one proprietary instruction set, which has been the poster child for pseudo-legal market manipulations that harm consumers and competition, to another proprietary instruction set which, while it looks more free and open on the surface, is actually much worse.
Sure, ARM Holdings licenses their cores to anyone willing to pay. They have the potential to be a bad actor, but they haven't been thus far. Maybe we can trust them? As foolish and shortsighted as the concept of trust is when business and money are involved, the problem here isn't ARM itself, but its licensees. Let me explain.
The ARM license allows for customization. Licensees can and do customize the ARM designs in ways that break binary compatibility, and et voilà: instead of having an Intel/AMD duopoly, now every single little ARM licensee can make their own proprietary chip design that only runs their software, and the consumer has no control whatsoever. Do as we say/want or go pound sand.
And while I like the concept of RISC-V being an open source instruction set, it does the same thing. Anyone can customize the instruction set as they see fit, and it can be used to proprietarize hardware and harm users and customers by not allowing them to use their own hardware that they own as they see fit.
3.) What to do about it?
I don't know. Things get worse every year. Every year we go further down the path of proprietary locked bullshit.
Personally, I would like a law that requires unlocked boot-loaders and binary compatibility for every device made available to consumers or off the shelf for enterprise, with the only exception being for highly specialized devices that place additional requirements on the hardware, such that a large market volume binary compatible CPU isn't possible.
Unfortunately this will - however - never pass the corrupt lobbyists who make their money from companies who rake in the billions by manipulating markets and abusing customers.
So, things are just going to get worse. Rather than a general purpose computer you can do whatever you want with, more and more you are going to be restricted, locked down and limited until you have no choices left at all.
Don't like telemetry and data collection on your fancy brand new laptop? Get pissed off that the manufacturer now forces you to watch ads every 5 minutes? Tough luck.
In the past you could at least run Linux or some other community software/OS, but now either your boot loader is cryptographically locked, or you have a custom ISA that prevents binary compatibility with anything else.
This is the way we are headed. If - that is - you will even be offered a computer to buy in the first place, and it doesn't all just become a cloud based subscription model.
The dystopia is real.
Didn't Apple do this? Not big, but a short list of new custom instructions.

Nobody is going to be making arbitrary changes to established instruction sets for broad-market general-purpose use.
They fuck up the entire software ecosystem, hurt their own performance, and violently increase maintenance. It's pointless.
The way I read it, it was about them adding instructions that others would not be able to run, so that people must buy their hardware to run their software; not about removing their own ability to run existing application libraries.

They don't break ARMv8 or ARMv9 compatibility; it'll natively consume either with zero qualms.
"If they value compatibility"

But why would they mind at all if their binaries run on other platforms?
It is not like that is uncommon (that is why people usually care about open source, after all; how often do you compile specifically for your target...).
"subsequently removed at some point"

Why would they ever remove said instructions? It seems to be a very small set of extra ones.
I mean, if you're relying purely on an Apple library for Apple software, I don't think you have much expectation to run elsewhere in the immediate term.

Even if we do not use any Apple library (say, a pure STL C++ project, nothing custom to macOS), the compilers that target Apple Silicon could use those extra instructions, no?
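The usual way added instructions avoid breaking compatibility is runtime feature detection: the binary (or JIT) probes for an optional extension and takes a portable fallback path when it is absent. Here is a minimal sketch of that pattern; the feature names are hypothetical stand-ins for things like vendor matrix extensions, and a real implementation would read hwcaps or sysctls rather than a hard-coded set.

```python
# Sketch of runtime ISA feature detection with a portable fallback.
# "matrix_ext" is a made-up feature name standing in for a vendor-specific
# extension; real code would query getauxval(AT_HWCAP), CPUID, or sysctl.

def supported_features():
    # Stand-in for a real hardware-capability probe.
    return {"base", "simd"}  # note: no "matrix_ext" on this imaginary core

def dot(a, b, features=None):
    """Dot product that would use a fast extension when available."""
    features = supported_features() if features is None else features
    if "matrix_ext" in features:
        # On real hardware this branch would run vendor-specific instructions.
        raise NotImplementedError("would emit the vendor's matrix instructions")
    # Portable fallback path that every core can run:
    return sum(x * y for x, y in zip(a, b))

print(dot([1, 2, 3], [4, 5, 6]))  # -> 32, via the fallback path
```

This is also why a compiler targeting a broad install base emits the baseline ISA by default and gates optional instructions behind exactly this kind of dispatch, rather than sprinkling them through every binary.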