NVIDIA's GM204 & GM206 Maxwell Taped Out At 28nm?

20nm low-power processes are humming along fine. Graphics cards aren't made on the low-power process, though; they are made on the high-performance process. There is no 20nm HP process because TSMC wasn't seeing any gains over 28nm, hence the delay.

When will HP be deployed then?
 
so how much improvement over a 780/Titan can we expect with a 28nm Maxwell?...wasn't power savings at 20nm the main draw of Maxwell?
 
Yes. I don't know what TSMC is saying, referring to 20nm showing no real improvement on high-end parts. I call bullshit on that, and I'm sure Nvidia/AMD would feel the same. It's basically a half-node, which isn't going to be as beneficial as a full node (16nm), but for GPUs it would still probably be very beneficial given the nature of how they work.

All we know for now is that Maxwell on 28nm can save an estimated 30% in power draw while maintaining the same performance as Kepler, purely due to architecture changes. I'm assuming the real improvement was to come from the gains 20nm offered. The die is pretty damn large for Maxwell @ 28nm, so whether we'll see a 30% performance boost or just a 30% power savings remains to be seen. I can't imagine what the 870/880 would be, die-size wise.
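To put a rough number on it, here's a quick back-of-envelope sketch, just taking the ~30% figure from this thread at face value (it's an estimate, not a confirmed spec):

Code:
# Rough perf/watt math for the rumored 28nm Maxwell efficiency gain.
# Assumption (from this thread): same performance as Kepler at ~30% less power.
kepler_perf, kepler_power = 1.0, 1.0      # normalized baseline
maxwell_perf, maxwell_power = 1.0, 0.70   # same perf, ~30% lower power (estimate)

gain = (maxwell_perf / maxwell_power) / (kepler_perf / kepler_power)
print(round(gain, 2))   # ~1.43x perf/watt from architecture changes alone

Whether that headroom shows up as a straight power savings or gets pushed back into clocks and cores is exactly the open question above.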

It's hard to imagine 20nm being short-lived (just one GPU generation), even if 16nm is ready to go by the end of this year (it was developed simultaneously with 20nm). The way TSMC is going, the nodes after that, 10nm and below, are many years off if they can pull them off at all, let alone in a timely manner. Even with TSMC talking down its own node, both GPU manufacturers have little choice but to milk it for two generations unless they want to screw up their timelines again in another four years or less.
 
It's hard to see how NVIDIA can hit their 2x "DP GFLOP per watt" charts for Maxwell if it's on 28nm.

AMD is wise to start negotiations with GF / Samsung in case TSMC becomes a stumbling block.
 
Disappointing, but it is what it is. Trying to decide whether to sink $300 into a tide-me-over 770, just go with a 780 or R9 290, or skip 28nm entirely and wait for 20nm.
 
Yes. I don't know what TSMC is saying referring to 20nm showing no real improvement on high-end parts.

Not high-end parts; on its high-performance process. Typically on a given node, e.g. 28nm, TSMC offers a few processes. They eventually had four 28nm processes: HP, HPM, HPL, and LP, each tuned for different density/leakage trade-offs. AMD/Nvidia would use the High Performance process; mobile stuff would be built on the Low Power process. For 20nm TSMC again planned a few processes, including High Performance and Low Power. A while ago they announced that the HP process showed no real improvement over the LP process, so it got canceled and is just being skipped at this point. If AMD/Nvidia want their GPUs made on 20nm they need to switch to another process... but it won't have quite the improvement that would have been expected of the 20nm HP process.
 
First CPU progress stalled, and now it's time for GPUs to do the same thing. It blows my mind: my 3-year-old $200 i5-2500K with a $20 Hyper 212 Evo at 4.4GHz is still within 5% of the fastest CPU on the market for most games. If this is where GPUs are headed, then API optimization needs to happen right fuckin' now!

With the disaster that is 20nm at TSMC and the obvious slowdown in GPU development, API optimization needs to kick into high gear. Now that TSMC builds iPhone chips, I bet we see more GPU shortages and a lack of focus on HP 16nm process development. AMD/Nvidia can't hope to pay anywhere near what Apple can, and there is no higher-profile launch than the iPhone 6. If I were Nvidia, I would make sure Maxwell came out long after the iPhone 6.

Microsoft should be ashamed of themselves. DX12 with lower overhead isn't coming until December 2015 (assuming no delays) and will probably only support Win 8/9. They should have tackled the optimization problem years ago, but PC gaming has never been and will never be a big focus for MS.

I love Mantle so far. It made BF4 dramatically smoother for me at 1440P, so much so I'd call it BUTTERY smooth. I'm no AMD fanboy either, my last card was a GTX 670 and it was also very good. My laptop that I mostly use to play CIV5 on trips is also Nvidia. I pretty much alternate between Nvidia and ATI every generation. I've never used a driver from either company that I thought was very good. I have almost universal hate for video card drivers because my usage scenario is outrageously complicated and my expectations are probably unrealistic.
 
Things may have stalled from the standpoint of increasing clock speeds and/or raw performance to some extent, but there is still a lot of room to work on power efficiency. I am impressed with the GTX 750/750 Ti from a performance/watt standpoint. Regardless of how well the architecture scales up to higher-end parts, the fact that I can get near GTX 480 performance from a GTX 750 Ti using one quarter the power or less is awesome.

I have to keep stopping myself from buying a 750 Ti because I really want to wait and see what the rest of the series looks like. If they are capping the 750 Ti cards at 38.5W, as discussed in a number of in-depth reviews, then there is potential for an even faster card that doesn't require a 6-pin PCIe power connector and still operates well below the 75W PCIe slot limit, or for a significantly less thirsty 760 replacement that only requires a single 6-pin connector.
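For anyone keeping score on the power budgets, a quick sketch (the 38.5W cap is the figure those reviews reported; the 75W numbers are the standard PCIe slot and 6-pin limits):

Code:
# PCIe power-budget math (standard limits, rounded).
slot_limit_w = 75      # max draw from the PCIe x16 slot itself
six_pin_w = 75         # max draw from one 6-pin PCIe connector
cap_750ti_w = 38.5     # reported power cap on the 750 Ti (per reviews)

print(slot_limit_w - cap_750ti_w)    # ~36.5W of headroom with no power cable at all
print(slot_limit_w + six_pin_w)      # 150W budget for a single 6-pin card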

All of this appears possible while still at 28nm, so while the uber-performance crowd may not be as happy, the power-conscious among us have something to look forward to. I didn't worry about it as much in the past, but with the last few months' electricity bills being $550 and $430 respectively without an increase in usage (I actually used less the past two months than in the two months prior), the price per kWh is encouraging me to seek lower-power alternatives.
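Just to illustrate the bill math with made-up numbers (the wattage delta, hours, and rate below are purely hypothetical, not from my actual usage):

Code:
# Hypothetical yearly cost difference between a thirstier card and a frugal one.
# All inputs are illustrative assumptions.
watts_saved = 100        # e.g. a ~150W card swapped for a ~50W card under load
hours_per_day = 4        # assumed gaming time
dollars_per_kwh = 0.20   # assumed electricity rate

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
print(round(kwh_per_year * dollars_per_kwh, 2))   # ~$29/year at these assumptions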
 
You people are going to ban me, but... I'm still waiting for a game that enthuses me enough to move on from Fermi. I thought I'd get a Maxwell but now I'm not so sure.

All I play is L4D2 which is ancient and if there are any other games that intrigue me they're indie games that aren't graphics intensive. I've been hearing about these "new engines" for years now but I don't really see any games in the pipeline that make me think anything other than "meh".

Then look at PC gaming on Steam... a lot of it is still older games like Dota2 or TF2 or whatever, and a lot of people game on Intel IGPs. We've had the OpenGL 4.x hotness for years now, but all the OGL games out there are 3.x, to cater to Intel and people playing on laptops. And I'm not getting the impression that the new console generation is going to push things as much as initially thought.

It's just hard to justify the purchase right now, and I'm probably not completely alone in my thinking (perhaps in this crowd, but probably not in the overall market). We almost need a revolution in PC gaming. It's not clear that it's even evolving.

It's no wonder NVIDIA is so interested in HPC and mobile markets.
 
I just want all the games I play currently to actually support multiple friggin cards. Grr.
 
You people are going to ban me, but... I'm still waiting for a game that enthuses me enough to move on from Fermi. I thought I'd get a Maxwell but now I'm not so sure. [...]


I'm kinda the same. I don't play many graphically intensive games out of the box. I think it's one of the reasons why GPU reviews almost always fall back on Crysis/Battlefield. Crysis is the ultimate benchmark game, if only for that reason. Battlefield is one of those very few games where you can have an incredible online experience that isn't whittled down to some niche group of gamers, and where the graphics also look insanely good.

We had that era in gaming where the pretty graphics were all focused on single player and the multiplayer looked like a completely different game engine. We are starting to get to a place where multiplayer gamers can actually see things as great as the single player. I would love for some of my favorite games like L4D2 and TF2 to break my card. Then there are those developers who intentionally develop the multiplayer side to not rock your socks off and only make it pretty "enough" that 80% of people can run it at 30fps or higher 24/7.
 
Unless you have the power, resources, and intelligence to come up with something better and faster =)

If CPUs and GPUs are both stalled, it saves people money =)

It might be a down cycle for PCs.
 
Yeah, one of my computers just has a GTX 660 in it, flashed to a 1215 boost clock. I play with vsync enabled on a P-IPS 1920x1200 LCD, so I'm not one of those guys that needs to be concerned with holding 120 FPS. I can't stand the gamma shift on TN panels, so I don't use them. In addition, I'm mostly an RPG nerd, not an FPS guy, so the titles I'm playing tend to be moderately paced and don't require split-second reactions.

Often, some effects I don't even care for kill FPS, like DOF and motion blur. So I disable them. Sometimes simply dropping shadows from its highest possible setting to the one below it will boost FPS decently in some titles too. Personally, I'd be happy with high res textures in more games.
 
Hey, I like seeing what folks with uber rigs can do with their hardware. I don't hate on people for how they choose to spend their own money. Again, I enjoy seeing what beastly rigs can do.

I haunt the XS forums for basically the same reason; those guys are way out of my league and I like watching what they can do.

At one time, I ran overclocked GTX 260 Tri-SLI myself.
 
It's been almost eight months since NVIDIA came out with something new, well, that's if you wanna count their tick-tock tweak as new. Guess you can thank TSMC for that one. :confused:
 
Makes one wonder how much TSMC capacity may be tied up now that Apple seems to be shifting their 20nm A8 over to them from Samsung.
 
It's been almost eight months since NVIDIA came out with something new, well, that's if you wanna count their tick-tock tweak as new. Guess you can thank TSMC for that one. :confused:


It's been almost 12 months since the launch of the 700 Series with the GTX 780. Man time flies. Add another 2-3 months if you consider the Titan the launch.
 
http://semiaccurate.com/2014/04/17/global-foundries-samsung-sync-14nm-processes/



It's coming soon to an AMD CPU/GPU near you!

Probably late 2015 is my guess.

14nm is not coming in 2015. They are going to go with 20nm for all of 2015, could even be 2016. 14nm is coming in late 2016 at the earliest; look for Q1 2017. It's like Intel's tick-tock model: one year you get a new architecture, the next you get a die shrink.

Something was wrong with TSMC and 20nm. Can't remember where I read up on it though.

Which sucks because I was REALLY looking forward to 20nm high end Maxwell...

Who says high-end Maxwell isn't coming to 20 nm? They are not going for GM200, which is the successor to GK110. They are going to put out GM204 and GM206 first, which are the budget- to midrange cards, and both of them are going for 28 nm.

Didn't Nvidia say a while back that some of their DX12 features won't make it into the next series of cards? Think it was when Microsoft showed off Forza running on Nvidia hardware, but I could be mistaken of course. :) I bet they save the 20nm tech for the full DX12 card release in 2015.

DX12 is going to be fully enabled on GPUs going back to Fermi. There may be a few one-off features in 20nm Maxwell that aren't in the other cards, but by and large it is going to be minor. If Microsoft wants DX12 to be widely used, then it needs to be compatible with most people's GPUs, which it will be. Don't look for any Maxwell/DX12 Hail Marys.

Couldn't Nvidia and AMD just build their own facilities? Or is it too expensive?

Too expensive. AMD tried, but spun it off to Global Foundries.

We'll just have to "settle" for 28nm Maxwell, which (if we're honest) isn't the worst thing in the world. :)

No, you don't have to settle for anything. 20nm Maxwell is coming out early next year.

There's so much misinformation even on a supposedly hardcore forum like HARDOCP. It's hilarious.
 
There's so much misinformation even on a supposedly hardcore forum like HARDOCP. It's hilarious.
The only misinformation I see here is yours, unfortunately.
At least other sites spreading rumors have sources. I admire your optimism though.
 
All this conjecture is fine and all, but aren't we forgetting an important question that we haven't looked at... like WHEN IS PAPA BRASH GETTING HIS NEW TOY???
 
I say bring on the rebadges. I'm set with my 290-level performance in my 270 CFX rig for a nice while. :D
 
More news.

http://www.techpowerup.com/200505/g...n-route-testing-lab-features-8-gb-memory.html

edit: Could someone specify how they know this is a "GTX 880" and not some other mid-range part? Not that the label means anything.

More here:
http://videocardz.com/50448/nvidia-geforce-gtx-880-feature-8gb-gddr5-memory


They don't know. Part of rumorville is the fact that nothing is concrete until release day. Judging by the 600/700 series, though, and what we've seen with the Titan, 780, and 780 Ti, Nvidia is shifting product lines, so now the regular high-end card is going to be some Intel-style enthusiast chip marked up heavily because suckers will buy it. Just like the 780: they could have sold it starting at $499 and made normal profits, but they decided to set it at $649 for 6 months instead. The GTX 780 Ti could be sold for the same $499 or even $549. Things are just getting overly superficial now, so why would anyone really doubt a 10%-improvement card for $649 or $699 in the 880, as per the leak?

With multiple sources reporting similar rumors over the course of the last 2 1/2 months, I think that card is closer to actual than rumor now. The problem is whether it's an 870 or an 880. As an 880 it would be a joke. Shader cores are barely offsetting things, as we saw with the 780, Titan, and 780 Ti, which a small overclock can't match. Adding 300 cores does nothing, especially when those are only 90% as capable as Kepler's.
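To put that in rough numbers, a sketch (the 2880-core baseline is the GTX 780 Ti; the +300 cores and the 90% per-core figure are the assumptions above, not confirmed specs):

Code:
# Back-of-envelope using the assumptions above (not confirmed specs).
baseline_cores = 2880        # GTX 780 Ti shader count
added_cores = 300            # rumored increase (assumption)
per_core_vs_kepler = 0.90    # assumed Maxwell per-core throughput vs Kepler

kepler_equiv = (baseline_cores + added_cores) * per_core_vs_kepler
print(kepler_equiv)          # 2862.0 -- basically a wash vs the 780 Ti's 2880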

Looks like after 5 years of waiting, Nvidia is going to write off the Maxwell hype by releasing a half-baked product after all, despite swearing they wouldn't.
 
Complaining over a price we don't know. Cool.

In any case, if it's the fastest halo GPU on the market it's an option to buy. My take is, whenever the 880 is released it will probably have a commanding performance lead over AMD's stuff (whereas the 780 Ti is fastest now, but it is very close / competitive, with the 290X being a great alternative if you don't mind AMD), and AMD won't have an answer for a year except for 290X price cuts. Hawaii is still a fairly fresh release for AMD, so I really doubt they will have something up their sleeve so soon; they've been on a 2-year schedule with new flagship GPUs. How many times has that happened between AMD and NV? Quite a few by my recollection.

I believe that if it is indeed named a GTX 880, it will perform as a GTX 880, meaning a meaningful increase over the prior generation. What I don't think will happen is NV releasing an x80 that performs only slightly better than the prior generation. The 680 was a big increase over the 580 (30-40%) and the 780 was a big increase over the 680. If it's named the 880 this time around, it will perform as an 880, period. I'm pretty sure it won't be named an 880 unless it's a meaningful and large increase over the 780 Ti. NV has never done otherwise and it would be pretty stupid for them to start; their x80 SKUs generally perform up to the name at launch.

Price-wise, we just don't know yet. Even if we did know, it's not like anyone forces you to buy it. When the Titan launched, I was intrigued, but I knew I wouldn't buy a $1000 GPU regardless of the flagship performance. F that. My 680s at the time were killing it in every game I had. It's that simple. Just stick with something else if it's an issue; it isn't like games are really pushing GPUs at 1080p. At 1600p and surround, yeah, you'll want more, but since nearly everyone is at 1080p I'd guess most everyone has a GPU right now that is playing games just fine.

The preemptive complaining is just funny to me because a GPU purchase is never compulsory. If you don't want it, don't buy it. Or stick with the budget story and go with an AMD GPU. Or better yet, stick with the GPUs you currently have; I'm guessing most PC gamers have a GPU setup RIGHT NOW that is just fine, and that won't change when the 880 hits. If it performs fine then, there isn't a reason to even buy the 880. That probably will be the case for me: when the 880 launches I'll probably ignore it initially because I really won't need it. Also, we don't even know the price or release date yet, or performance for that matter. But I do believe if it's an x80 part it will perform up to that name. It's not like NV is going to release an x80 that is the same performance as the prior generation x80.
 
I agree with the "why bother" thought because 10% would be a waste of time. It definitely would be a waste of time to have a new flagship 10% faster - I really don't think it will happen that way, but we'll see. I personally think it will be a good deal more of an increase if it's named an 880. Certainly none of those chiphell dudes have benchmarked the 880 yet, so, yeah.

I really think the bottom line is that if it's named an 880 - it will perform as an 880. Meaning a meaningful and sizable increase over the prior generation flagship. It has always been this way, and I really doubt it would be named an 880 with a mere 10% increase. I just can't see NV doing that since their flagships have always performed up to the name, compared to their prior generations at least. But, we'll see. I'm sure rumorville will continue as the months go on.
 
Looks like after 5 years of waiting, Nvidia is going to write off the Maxwell hype by releasing a half-baked product after all, despite swearing they wouldn't.

Seems a bit premature, don't you think?
 
If Maxwell ends up being something of a Pascal bridge or stepping stone, that might be an okay situation. I'm more interested in seeing what can be done with this architecture at the very low end of the scale and in the GTX x60 sort of range anyway. I'm feeling fairly satisfied at the high end as it is, and NVIDIA's just fortunate to be in a software-as-a-bottleneck position right now.
 
If anything, all this speculation and the lack of any information from Nvidia tells me a few things, and why there is no point waiting for the 880:

- card is still in development, probably because of a forced 20nm to 28nm shift
- 256-bit bus, GeForce 680 deja vu
- no virtual unified memory, no ARM
- end of 2014 release

Going to wait a month for a possible price drop and will buy two 290s.
 
If anything, all this speculation and the lack of any information from Nvidia tells me a few things, and why there is no point waiting for the 880: [...] no virtual unified memory, no ARM [...]
From a gaming perspective I'm not sure why the lack of ARM inclusion matters.
 