Are we in a video card slump?

What do you guys think? Everyone on these forums seems to be really excited about the 290X, but I am not that impressed. I think the progress of graphics on PC has been hindered by consoles over the last decade. In the new console generation, the graphics cores are similar to an HD 7790 in terms of teraflops, albeit with a lot more memory available. The 290X is hardly a new generation of video card; it's just finally matching the Titan's or 780's performance. Please don't bash; this is just a theoretical question.
 
I agree with you. There has been no real reason for game makers to push the limits, since most games are made for both PC and the consoles. Let's face it, the video hardware in the 360 and PS3 was obsolete when they were released.
 
I agree. My last setup was 2x 4870X2 in quad CrossFire; cards that date to 2008. I got a GTX 680 brand new last year to replace that setup. To my surprise, the single GTX 680 turned out not to be an upgrade. Sure, it was a hair smoother not using 4-way CrossFire, and I liked new options like adaptive VSync, but in terms of benchmarks it was actually slower in many cases. I added a 2nd GTX 680 and that fixed the "problem," but wow, it was pretty sad to spend $500 on that first GTX 680 and have it get bested by 5-year-old video cards :rolleyes:
 
It's really simple: there aren't as many gains from die shrinks as there used to be. The days of massive performance increases are over until there is some sort of change in technology.
 
Part of the slump may be the lack of competition. I think at one point we had five companies developing GPUs: ATI, Matrox, Nvidia, PowerVR/STMicro, and 3dfx. Now we only have two strong players, AMD and Nvidia.
 
There is nothing kicking video card makers in the ass right now. When 4K becomes more widely adopted, I think we'll see that change.
 
It's really simple: there aren't as many gains from die shrinks as there used to be. The days of massive performance increases are over until there is some sort of change in technology.
Uh... hard to say that when there hasn't actually been a die shrink in a while.

All these "new" GPUs are still on the 28nm process size.

Nvidia's Maxwell-based GPUs are scheduled to launch next year on 20nm; we'll see what that does for them.
 
I agree as well. I'm not upgrading, as what I have is more than enough to feed me over 144 fps.
Maybe Maxwell will change that next year?

Regarding smoothness and new tech, have you checked out G-Sync? From what I understood, there's no need for super high fps anymore, as it gives you a much smoother experience even below 60 fps.
http://www.pcper.com/news/Graphics-Cards/PCPer-Live-NVIDIA-G-Sync-Discussion-Tom-Petersen-QA

The solution is called Mantle; it comes with BF4 in December for GCN cards.
DX is the limitation, not the hardware.
 
DX is the limitation, not the hardware.
That also remains to be seen.

The only thing AMD has stated that Mantle helps with, specifically, is overhead from draw calls. DX11 includes options for helping with that problem... though if you already have a fast enough CPU (capable of dealing with the overhead), that particular improvement won't be noticeable for you.

And then Nvidia is talking about offloading draw-call processing from the CPU onto a dedicated unit on the GPU, also fixing the problem WITHOUT requiring a move to a new API that only works on a single vendor's GPUs.
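
To put a concrete face on that: one of the DX11 options I mean is instancing, which collapses N per-object submissions into a single call. Rough sketch only (my own illustration, all names made up, and it assumes the buffers, shaders, and input layout are already bound):

```cpp
#include <d3d11.h>

// Naive path: one draw call per object, so CPU-side submission overhead
// scales with the object count -- this is the overhead Mantle is targeting.
void DrawForestNaive(ID3D11DeviceContext* ctx, UINT indexCountPerTree, UINT treeCount)
{
    for (UINT i = 0; i < treeCount; ++i)
    {
        // ...update this tree's world matrix in a constant buffer here...
        ctx->DrawIndexed(indexCountPerTree, 0, 0);
    }
}

// DX11 path: per-tree data sits in a second vertex buffer declared with
// D3D11_INPUT_PER_INSTANCE_DATA, so a single call submits every tree.
void DrawForestInstanced(ID3D11DeviceContext* ctx, UINT indexCountPerTree, UINT treeCount)
{
    ctx->DrawIndexedInstanced(indexCountPerTree, treeCount, 0, 0, 0);
}
```

It doesn't cover every case (different meshes and materials still need separate calls), which is the gap Mantle and the approach Nvidia is talking about are aiming at.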
 
What do you guys think? Everyone on these forums seems to be really excited about the 290X, but I am not that impressed. I think the progress of graphics on PC has been hindered by consoles over the last decade. In the new console generation, the graphics cores are similar to an HD 7790 in terms of teraflops, albeit with a lot more memory available. The 290X is hardly a new generation of video card; it's just finally matching the Titan's or 780's performance. Please don't bash; this is just a theoretical question.

It's just a slump for AMD. I expect Maxwell from NVIDIA to provide far greater performance without dimming the lights when you turn it on.
 
I think it has more to do with a shift in resources from developing desktop products to mobile products.
 
What do you guys think? Everyone on these forums seems to be really excited about the 290X, but I am not that impressed. I think the progress of graphics on PC has been hindered by consoles over the last decade. In the new console generation, the graphics cores are similar to an HD 7790 in terms of teraflops, albeit with a lot more memory available. The 290X is hardly a new generation of video card; it's just finally matching the Titan's or 780's performance. Please don't bash; this is just a theoretical question.

Just a theoretical question, so don't bash, but why don't you see Titan-level performance for damn near half the money as significant progress?
 
Just a theoretical question, so don't bash, but why don't you see Titan-level performance for damn near half the money as significant progress?

Titan owners are just a little bit ticked at not being top of the hill anymore. ;)
 
Just a theoretical question, so don't bash, but why don't you see Titan-level performance for damn near half the money as significant progress?

Because it isn't progress. It's using way more energy; it's less efficient. It's cheaper, yes, but it's not better. If the Titan or 780 came down in price, hands down they would be superior products.


I think the 28nm process is probably the biggest reason; until 20nm I don't think we will be seeing any major progress.
 
Just a theoretical question, so don't bash, but why don't you see Titan-level performance for damn near half the money as significant progress?

Because a factory OC 780 does the same thing for 6/10ths the price?
 
That is "significant progress"? Not saying the card is bad, it is the best deal right now. But to the OP's point they all perform relatively similar (excluding 4K resolutions).
 
We're in a silicon process slump; shrinking is starting to become a pretty tricky business. Transistor density is getting so high that the cores are borderline turning into light bulb filaments when it comes to power density.
 
I just think back to when Half-Life 2 and Doom 3 came out. Those games were made for enthusiast PCs! PCI Express was coming around and a whole new generation of cards was born. Our GPUs are a lot more powerful than in those days, but really the graphics haven't improved that much. We bog down our graphics cards with insane amounts of anti-aliasing, filtering, and shadowing effects, and we calculate things like PhysX (which defeats the purpose of the dedicated graphics board, imo) all on our video card. I see people pointing out that 4K is being adopted, but I have to ask: so? A higher resolution provides a much clearer picture, but it's still a picture without a significant increase in polygons.
 
The R9 does the same thing for 5/10ths... :confused:
The R9 does everything a GTX780/Titan does? :confused:

Supported GPU compute languages:
- Nvidia: DirectCompute, OpenCL, CUDA
- AMD: DirectCompute, OpenCL

Supported hardware decoding:
- Nvidia: DXVA, CUVID, CUDA
- AMD: DXVA

Supported hardware physics platforms:
- Nvidia: PhysX, Bullet
- AMD: Bullet

Supported graphics APIs:
- Nvidia: DirectX, OpenGL
- AMD: DirectX, OpenGL, Mantle

Frame-Pacing:
- Nvidia: Hardware (Universal)
- AMD: Software (Renderer-dependent)

TDP:
- Nvidia: 250w (Both the GTX 780 and Titan)
- AMD: 300w (R9 290x)

Reference cooler:
- Nvidia: One of the quietest they've ever produced + RPM stabilization algorithms to make it less noticeable
- AMD: Compared by AnandTech to "operating power-tools"
 
Not quite, it doesn't steal ALL your cash. ;)
The GTX 780 only cost me $650, hardly all my cash... It also supports all of the features I listed in my previous post, and it comes with a stock blower cooler that's actually tolerable, plus a lower TDP which compounds that advantage.

I usually use a Twin Turbo II when I have to swap GPU coolers to get acceptable noise figures, but that only supports cards up to 250W. I'd have to get an Accelero Extreme III to handle the R9 290X... but that's a $75 cooler.

An R9 290X + Accelero Extreme III = $625 (and this is assuming you have a case that can handle a cooler that massive and can evacuate all that heat).
 
With the exception of Titan... didn't Nvidia sacrifice the compute power of their cards on purpose to cut down on costs? IIRC, that's what made AMD cards so popular for bitcoin mining.

Which is irrelevant for gamers, I suppose... unless OpenCL physics (dependent upon the compute power of the GPU) really takes off. Which, I suspect, has a larger chance than PhysX, seeing as it's open source.

I actually think this might be a bigger deal than Mantle, since Nvidia can easily accommodate it in their next iteration of cards (and thus allow developers to serve owners of both vendors' cards).
 
Back to the OP's post: I don't think it's anything anyone here said. I think we can't even take full advantage of what we already have now. And what games are even out there trying to push cards? Not many. Plus, we've got all the bandwidth of PCIe 2.0/3.0. Maybe it isn't so much a matter of not being impressed, but of having nowhere to really go. 4K isn't here yet, and that'll take some doing.

I'd like to see optical processing or quantum GPUs. Now you're talking!
 
Gotta figure out a new way to go faster. I think the farther we try to take this tech, the less we get in return, at least till a new way is figured out.
 
Add rims on the fan spokes. And fins.
Works for cars, if racing games have taught me anything about automobiles...
 
With the exception of Titan... didn't Nvidia sacrifice the compute power of their cards on purpose to cut down on costs? IIRC, that's what made AMD cards so popular for bitcoin mining.
It's not really that Nvidia cards are slower at compute in general; it's that AMD implements one specific operation in hardware that's heavily used in bitcoin mining.

[...]Bitcoin mining on AMD GPUs instead of Nvidia's is that the mining algorithm is based on SHA-256, which makes heavy use of the 32-bit integer right rotate operation. This operation can be implemented as a single hardware instruction on AMD GPUs, but requires three separate hardware instructions to be emulated on Nvidia GPUs (2 shifts + 1 add).
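
For anyone curious what that looks like, here's a tiny sketch (my own illustration, not from the quoted article) of the 32-bit right rotate SHA-256 leans on:

```cpp
#include <cstdint>

// With a native rotate instruction this is a single hardware op; without one
// it gets emulated with two shifts plus an OR/add to merge the halves, as the
// quote above describes.
// Assumes 1 <= n <= 31, which holds for every rotation amount SHA-256 uses.
static inline uint32_t rotr32(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32 - n));
}
```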
 
Video cards aren't being held back by consoles; they're being held back by the laws of physics. Every time a transistor flips states it generates heat, and 6.5 billion of those things is a lot of heat. AMD is already going balls-out with temps hot enough to boil water; there was no place left for them to go at 28nm. Intel won't share their 22nm tech, so we have to wait for TSMC to catch up. This is taking a while.
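
Rough aside, just for illustration (the standard textbook relation, my own addition): the switching heat follows the usual dynamic-power estimate,

```latex
P_{\text{dyn}} \approx \alpha \, C \, V^{2} \, f
% activity factor x switched capacitance x supply voltage squared x clock frequency
```

so packing more transistors into the same area at similar voltages and clocks means more watts per square millimetre, which is the wall these 28nm flagships keep bumping into.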
 
What do you guys think? Everyone on these forums seems to be really excited about the 290X, but I am not that impressed. I think the progress of graphics on PC has been hindered by consoles over the last decade. .... Please don't bash; this is just a theoretical question.

Well, keep in mind, the 4870 was not that long ago. Maybe five years, if I recall correctly, and we were just starting to break... the 1 teraflop mark!! WOOOOOOOH! A TERAFLOP in a single GPU :eek:!

Now we have the 290X with a theoretical performance of around 5 teraflops, so progress has been pretty decent over those five years or so. Roughly 5x the performance in that span keeps up reasonably well with the old Moore's law adage of transistor counts doubling every 2 years. If you've followed video card or CPU development for a good while, you'll notice a cycle borrowed from Intel called 'tick-tock': a 'tick' is a die shrink of an existing design, while a 'tock' is a refreshed architecture on the same process.

Every time the die process shrinks (a tick), we see a large jump. When the silicon is improved for efficiency but the process stays the same (a tock), the gains are fairly small. People are fairly excited about the 290X because it was expected to be a very small to small gain; compared to the 7970 GHz Edition we got a ~20% increase, which is probably closer to a medium gain and great as far as your average same-process refresh goes.

So most people had low expectations, and those expectations were slightly exceeded. That, combined with a price that was pretty reasonable for the increase, gave back more competition and some levelness to the marketplace; i.e., AMD is much more competitive with Nvidia now than it was a few days ago.

The 'large gain' should come with the 20nm cards next year. That being said, there's not a huge incentive game-wise to upgrade right now for anyone but enthusiasts. Sure, BF3 at max settings, high AA, and 4K resolution pretty much requires a 780/Titan/290X -- but the average (non-[H]) gamer might be playing with 2x AA at 1080p, for which you could use a $200 card and maintain a good 60 fps. That's assuming average gamers want 60 fps, as some might be satisfied with console-esque 30-40 fps.

You can occasionally convince an audience to buy a new video card if it's sufficiently more advanced than the old card... even if they don't 'really' need it. So it's still in a manufacturer's best interest to try to make large gains. That 4870 user at 1 teraflop looks at the shiny 5-teraflop number and goes... ooh... my card is only 1/5th as fast as the new flagship, I should upgrade -- even though the old card probably does fine at 1080p for an average Joe.

If that number becomes 8 teraflops after the next die shrink to 20nm, that's all the more reason for that average Joe to upgrade and make either Nvidia or AMD more profit. Overall, though, progress has continued pretty steadily of late, and we're almost at a tick point (next year) after a couple of tocks.
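
As a quick back-of-the-envelope check on the Moore's law comparison (using the rough numbers from this post, ~1 teraflop then vs ~5 teraflops now over roughly five years, not official specs):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Rough figures from the post above, not official specs.
    const double tflopsThen = 1.0, tflopsNow = 5.0, years = 5.0;

    // How many doublings happened, and the implied doubling time.
    const double doublings = std::log2(tflopsNow / tflopsThen);      // ~2.3
    std::printf("implied doubling time: ~%.1f years (Moore's adage: ~2)\n",
                years / doublings);                                   // ~2.2 years
    return 0;
}
```

So even with ballpark numbers, the implied doubling time sits close to the classic two-year cadence, which is why I'd call this progress "pretty decent" rather than a slump.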
 
I'm still comfy turning settings down to get 60 fps at 1440p, even with a single 670.
When games start to look significantly different at higher settings, I'll sort out a new GPU to put under water.
 
Gotta figure out a new way to go faster. I think the farther we try to take this tech, the less we get in return, at least till a new way is figured out.
This. Every time you eliminate a bottleneck or create a more efficient process, be it in the manufacturing process or the architecture design, it's rare to "do it again" the next go-around and get the same result. Simply put, the more you improve, the harder it is to find further ways to improve. I think as we run out of process shrinks, a lot of the duty of performance gains is going to fall on the software and its efficiency.
 
The way I see it, not only GPUs but, even worse, CPUs have been in a slump for years now with barely any real progress.
At least AMD competes well on the GPU side, and while I cannot use AMD (no 3D Vision), the 290 is a nice product that will push Nvidia to lower prices, so we all benefit. On the CPU side, I still run an overclocked i7 920 and it's still a good match for newer CPUs... sadly, because I would expect a HUGE difference after so many years, but that's mainly due to AMD's lack of CPU performance, so Intel is happy to coast instead of pushing the envelope. :/
 
My thought is that with 4K monitors coming at some point (they are just now showing up), AMD and Nvidia will have to make a jump up in performance to take full advantage of them. I can see where Intel will have to step up as well. Next year should be interesting hardware-wise, I think.
 
For some reason my mind translated this title from video card slump to silicon slump. I think the two go hand in hand, given that most components in a PC rely on silicon in some fashion. To me it looks like both CPU and GPU manufacturers have blown their load with the massive die shrinks of recent decades, and the easy doubling of performance is nearly gone because they failed to stagger their increases in technology. We are reaching the end of silicon and have yet to see anything to pave the way for the future, and hence I think the entire industry is slowing down to compensate.

Until we know where we're going, it makes sense from an analytical point of view to see video cards be the last to follow suit, with smaller and smaller increases per generation. Once whatever comes next becomes feasible and the industry can plan expansion long-term (10-15 years out), things may change; until then I think we're going to continue to see smaller increases from now until 2020 (when Intel claims silicon will be dead). Video cards are more efficient just because of parallelization, so it's quite possible we'll continue to see at least 25% increases each year (generation-wise), but not much more. Just my theory.
 
Because it isn't progress. It's using way more energy; it's less efficient. It's cheaper, yes, but it's not better. If the Titan or 780 came down in price, hands down they would be superior products.


I think the 28nm process is probably the biggest reason; until 20nm I don't think we will be seeing any major progress.


People keep saying it's less efficient, but it's also doing the same job on a smaller die. There are some of the increased thermals right there. And that directly relates to how much the chip can be overclocked, since a cooler chip would allow for higher clocks before reaching 95°C. I think the smaller die with similar performance was the right trade-off for lower costs.
 