Watch 28nm

Zachstar

Sorry to post an opinion thread rather than a news update, but I wanted to make a topic to discuss my views on the future of video cards.

40nm has been quite the node. Not as exciting as the transition to 65nm and the resulting battle between the ATI 3x series and Nvidia's 8 and 9 series cards, but we saw the start of DX11, and of course we will see Fermi take the stage.

That, though, is NOTHING compared to what we will see with 28nm, in my opinion. The more I look at it, the more it tells me this is going to be one HECK of a video card brand battle going into 2011 and lasting at least a year.

The reason I hold this opinion is that GlobalFoundries is slated to manufacture the GPUs for ATI's 6x series, and at almost exactly the same time TSMC is going to spring 28nm parts for Nvidia, likely die-shrunk Fermis.

You know what this means? It means a fab battle the likes of which we have not seen in the graphics world, but have seen in the CPU world, where that kind of battle pushed AMD and Intel to take insane steps, squeeze their processes for every scrap of performance, and slash prices like nobody's business.

I HIGHLY doubt we are going to see the insane prices we have seen with the 5x series. The battle isn't going to be for money, it's going to be for market share. AMD wins the 5x round and will fill the market for 2010. However, if they fall behind in 2011 they risk losing their image and having more companies focus on building games that work better on Nvidia cards.

This is going to be the Vikings vs. Saints battle of the GPU world. Whoever has the fewest issues wins. Whoever can pull off the best respins and get the best speeds wins. Nvidia has to show the world that its CUDA and geometry hardware can be scaled down to sane levels without costing too much; they have to be able to replace even the GT 210 with a Fermi product. ATI, for their part, has to stay ahead of Fermi at every turn. If I were them, I would spring my 6850s and 6830s first and not even worry about super-power cards until the first respin. Far more focus needs to go into getting budget cards out sooner. The 5570 and 5450 are taking FAR too long to hit the market, and ATI can't do that again and expect to hold Fermi back next year.

People are going to sit on their 5850s, 70s, 90s, and Fermis; they are not likely at ALL to spring for the expensive high-end 28nm parts like they did this era. The budget and mainstream battle is what is going to define this generation.

Who else is excited? Looks like I am going to be pulling more work hours next year :eek:
 
28nm will be just like every other die shrink. It amazes me how technology excites people into this exaggerated hopefulness. It's just a die shrink; I'm not sure how you could expect it to be anything better than any other node shrink. It will be far more problematic than 45 and 40nm were, because tunneling will happen that much more easily.

And it's always the same battle between Nvidia and ATI. They will have a very similar back-and-forth exchange: Fermi will be faster, then the 5890 will catch up, then the Fermi X2 will be released, then ATI will come out with R970 or whatever their internal code name will be, and so on. The excitement of Fermi being late is coming to a close. The next exciting period will be when ATI's future part is late.
 
You are forgetting that this is now a battle between fabs.

Before, it was mainly TSMC, who couldn't really care less because their sales to ATI and Nvidia were almost guaranteed.

Now that they have serious competition, the heat is on. I do believe this will be quite the battle.
 
I have to agree; the way you explain it, it seems pretty exciting. Didn't AMD move to buy almost all of the stake in GlobalFoundries as well? If I remember that right, then AMD might have the upper hand if it comes down to a fab war as you suggest. With TSMC having to supply more than just Nvidia, their parts might take a back seat to other companies unless a deal can be struck. On that note, are you suggesting that 32nm is going to be skipped altogether? I'm not too familiar with GPU roadmaps right now, but with Intel going to 32nm and working on 28nm as we speak, and with Larrabee still in the works, things might get even more interesting with Intel having their own fabs and all.

I have to admit that I'm not so much excited about 28nm as I am about the future of transistors, though. Since this is a "future tech" thread, I'd like to point out the possible use of single-walled carbon nanotubes or graphene sheets to take the place of today's high-k/low-k transistors (I think that's right?).

Anyway, when I took a materials class last year we were focusing mainly on materials at the nano scale and how carbon nanotubes and their close cousin, graphene sheets, were going to push electronics to a whole new level. That was last year, and I wrote a research paper on how silicone-based fluid mixtures were being used to control nanotube conductivity characteristics; in essence, removing the metallic tubes so that only the semiconducting tubes were left.

Fast forward to today: when I look up nanotube transistors in school research databases, there are at least five times as many documents, and the tech has progressed beyond controlling conductivity and on to actually trying to mass-produce semiconducting tubes for use in SoC silicon.

I'm pretty excited about the future of video cards as well, but I'm even more excited about the future of processors in general. Interesting stuff for sure.
 
Didn't AMD move to buy almost all of the stake in GlobalFoundries as well?

No, GlobalFoundries used to be AMD's manufacturing division. Now that AMD is no longer required to fab their own processors, they spun off their fab plants into a joint venture with ATIC. AMD's role in GlobalFoundries is that of a customer, not an owner.
 
No, GlobalFoundries used to be AMD's manufacturing division. Now that AMD is no longer required to fab their own processors, they spun off their fab plants into a joint venture with ATIC. AMD's role in GlobalFoundries is that of a customer, not an owner.

Ahh yeah, that's right. Since they won that lawsuit with Intel, part of it was that they no longer needed to have their own fab. Thanks for the reminder.
 
x86 CPU production is vastly different from GPU production. The average CPU line takes years to plan and design; that is why 32nm is up next instead of 28nm. GPUs can afford to go for a "half-node" because they can take advantage of fab advancements far more quickly, and their designs focus more on shader cores and such.

And it's even more exciting because 22nm is going to take quite a bit longer to jump to, meaning it's really going to be a battle of prices and skills.
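
Just to put rough numbers on the full-node vs. half-node talk, here is a quick back-of-envelope sketch (Python, and it assumes purely ideal geometric scaling, which real processes never actually deliver; the node list is just illustrative):

# Ideal (purely geometric) scaling between process nodes.
# Shows why 32nm -> 28nm is only a "half-node" step, while
# 40nm -> 28nm is close to a full-node jump in transistor density.
nodes_nm = [65, 55, 45, 40, 32, 28, 22]

for old, new in zip(nodes_nm, nodes_nm[1:]):
    linear = new / old        # linear feature-size ratio
    area = linear ** 2        # ideal die-area ratio for the same design
    density = 1 / area        # ideal transistor-density gain
    print(f"{old}nm -> {new}nm: area x{area:.2f}, density x{density:.2f}")

# 40nm -> 28nm: (28/40)^2 = 0.49, so roughly twice the transistors per mm^2.
# 32nm -> 28nm: (28/32)^2 = 0.77, only about a 1.3x density step.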
 
It'll be refreshing to just see a new process actually working as intended instead of another repeat of the disaster that 40nm has been (still is?). Seriously, TSMC, was 45nm not good enough for you?
 
28nm will be the next GPU node; for processors it's 32nm, or 22nm next (since 32nm is already out). Don't ask me why it is that way, but it is...
 
Not meaning to hijack this thread, but can anyone shed some light on how and when CPU and GPU processes started diverging? There was 65nm, then 45nm, and suddenly the paths separate.
 
Not meaning to hijack this thread, but can anyone shed some light on how and when CPU and GPU processes started diverging? There was 65nm, then 45nm, and suddenly the paths separate.

GPUs and CPUs feature rather different parts running at rather different speeds. CPUs have a (relatively) small amount of logic and very large local cache running at speeds >3GHz, and generally require fab technologies like SOI and high-k dielectrics and metal gates which significantly increase wafer costs. GPUs are primarily logic transistors which run <1.5GHz, and are manufactured on "bulk" processes. It might be that it's easier to "draw lines" on bulk processes enabling smaller feature sizes.
 
Let's hope no one plays like the Vikings and turns the ball over (churns out unstable crap) 6+ times.
 
GPUs and CPUs feature rather different parts running at rather different speeds. CPUs have a (relatively) small amount of logic and very large local cache running at speeds >3GHz, and generally require fab technologies like SOI and high-k dielectrics and metal gates which significantly increase wafer costs. GPUs are primarily logic transistors which run <1.5GHz, and are manufactured on "bulk" processes. It might be that it's easier to "draw lines" on bulk processes enabling smaller feature sizes.

Lol, what?

The biggest difference between CPUs and GPUs up until a few years ago was that CPUs used a ton of custom logic, where engineers sat down and went over every trace to ensure everything was laid out as optimally as possible, while GPUs were largely built using more automated design flows. That has changed, AFAIK. And while CPUs have "less" logic (from a transistor standpoint), it's not because Intel or whichever manufacturer doesn't have the experience to build more complicated chips. I'm not an EE, but I imagine it has a lot to do with the serial nature of CPUs, where clock frequency may play a more vital role in speeding up single-task execution. It probably also has a lot to do with the difficulty of designing largely hand-drawn circuits. I certainly wouldn't want my engineering people running into cases where a few thousand transistors buried somewhere in a massive chip won't sync to the clock, and having to spend half the development time just trying to find the offending piece, because you decided to build a 1GHz chip with 2B transistors' worth of logic.

As far as process nodes go, in response to Zachstar: no, the "process battle" landscape hasn't changed significantly. With the introduction of GlobalFoundries you will see the ATI subsidiary moving their requirements over there, but if anything that will introduce more delays, since tuning a new foundry for 28nm AND producing your first GPUs on that process at that foundry will be a heck of a lot more complex than outsourcing to an established foundry company. TSMC's record with the 65nm and 40nm processes hasn't been stellar, but it's difficult to say how much they really screwed up. The only other companies really able to do this right are Intel and IBM, and both are far larger and more experienced. That's not an excuse for TSMC, but if GlobalFoundries does better on their first go, I'd be surprised. I also expect that TSMC realizes the landscape has changed and will over-prepare for 28nm. This will not be any more of a "process battle" than the last 8 years, with ATI's 9700 on 150nm vs. the GeForce FX on 130nm being the first chips that competed on process, at least to a significant extent.
 
Wonder if ATI taking their business to GF would end up costing them more, instead of having the cost of ramping a fab shared with Nvidia like at TSMC. Unless GF woos Nvidia away from TSMC as well.
 
Wonder if ATI taking their business to GF would end up costing them more, instead of having the cost of ramping a fab shared with Nvidia like at TSMC. Unless GF woos Nvidia away from TSMC as well.

Yeah, they're obviously going to have to drum up a heck of a lot of business to justify their new fixed costs in the near term ($3.9B for Chartered and ~$3.4B for Fab 8), if only to satisfy investors. After they pay it off, though, it will be exactly what they need to be an Intel-killer and an Nvidia-killer (the latter of which will either diversify into x86 or be acquired).
 

Serious competition, judging by the way they are GREATLY expanding R&D.

Isn't it just wonderful what competition does to tech, instead of one company dominating like many fanbois dream of?

Looks like TSMC is going to try for 22nm and cut GF off at the turn. Looks like a second battle in the making. 28nm is going to be grueling enough, but you can bet your butt that prices are going to be nice and low, in my opinion.
 
Lol, what?

The biggest difference between CPUs and GPUs up until a few years ago was that CPUs used a ton of custom logic, where engineers sat down and went over every trace to ensure everything was laid out as optimally as possible, while GPUs were largely built using more automated design flows. That has changed, AFAIK. And while CPUs have "less" logic (from a transistor standpoint), it's not because Intel or whichever manufacturer doesn't have the experience to build more complicated chips. I'm not an EE, but I imagine it has a lot to do with the serial nature of CPUs, where clock frequency may play a more vital role in speeding up single-task execution. It probably also has a lot to do with the difficulty of designing largely hand-drawn circuits. I certainly wouldn't want my engineering people running into cases where a few thousand transistors buried somewhere in a massive chip won't sync to the clock, and having to spend half the development time just trying to find the offending piece, because you decided to build a 1GHz chip with 2B transistors' worth of logic.

As far as process nodes go, in response to Zachstar: no, the "process battle" landscape hasn't changed significantly. With the introduction of GlobalFoundries you will see the ATI subsidiary moving their requirements over there, but if anything that will introduce more delays, since tuning a new foundry for 28nm AND producing your first GPUs on that process at that foundry will be a heck of a lot more complex than outsourcing to an established foundry company. TSMC's record with the 65nm and 40nm processes hasn't been stellar, but it's difficult to say how much they really screwed up. The only other companies really able to do this right are Intel and IBM, and both are far larger and more experienced. That's not an excuse for TSMC, but if GlobalFoundries does better on their first go, I'd be surprised. I also expect that TSMC realizes the landscape has changed and will over-prepare for 28nm. This will not be any more of a "process battle" than the last 8 years, with ATI's 9700 on 150nm vs. the GeForce FX on 130nm being the first chips that competed on process, at least to a significant extent.

Are you kidding me?

Look at 40nm and how little things changed over time with it: high prices, an "I don't care too much" attitude about MASSIVE die reject rates, not to mention how entry-level products are only just now hitting the scene.

This is not a battle in the least. ATI is just raking in money. There is almost nothing competing. So TSMC gets tons of money, and only when ATI says they have had enough and want to go to another fab does TSMC suddenly become all the rage again.

In my opinion you are greatly mistaken.
 