RV870 has 1200 Stream Processors

so 1200/5 = 240 SPs.. not bad.. now if only the 5-way shaders ATI uses actually gave 5 times the performance.. then they would have one killer card.. can't wait to see the 5k series come out though.. I really want to get rid of my 8800GTs but don't want another nvidia card..

but the one big important number is the GFLOPS.. 2.16 TFLOPS for the 5870 and 4.56 TFLOPS for the 5870 X2 is just friggin madness!!! I really hope these numbers are true and ATI can actually take advantage of it..
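For what it's worth, those figures line up with the usual peak-FLOPS arithmetic: stream processors x 2 FLOPs per clock (one multiply-add) x core clock. A quick sanity check, assuming the clocks implied by the chart (~900 MHz for the 5870 and ~950 MHz per GPU for the X2 -- rumored, not confirmed):

[code]
#include <stdio.h>

/* Peak single-precision throughput = stream processors x 2 FLOPs/clock (MAD) x clock (GHz).
   Clocks below are the rumored/implied figures from the chart, not confirmed specs. */
int main(void) {
    double gflops_5870   = 1200 * 2 * 0.900;  /* = 2160 GFLOPS (2.16 TFLOPS) */
    double gflops_5870x2 = 2400 * 2 * 0.950;  /* = 4560 GFLOPS (4.56 TFLOPS) */

    printf("HD 5870    : %.0f GFLOPS\n", gflops_5870);
    printf("HD 5870 X2 : %.0f GFLOPS\n", gflops_5870x2);
    return 0;
}
[/code]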

also note the rumored memory speeds.. if those are true.. then they might be using the new 5 GHz GDDR5 chips.. if so then there will be some crazy overclocking going on..
 
Not to mention serious folding potential. If folding actually meant anything besides curious papers only people with PhDs can read.
 
Serious folding potential in ATI terms means... basically what the Nvidia cards are doing now because of the massively higher-clocked shaders. That's still the biggest hill they have to climb. Nvidia cards can sit back and relax and lower core speeds because the shaders are so much faster. So more stream processors... cool. Faster GDDR5... sweet. Better efficiency... all for it. Going to whoop Nvidia's ass? Probably not. My GTX 275 already produces between 3k-4k points in the 9 hours I'm at work.
 
If that chart ends up being correct and the cards are priced as reasonably as the 4000 series, I will be glad that I waited to upgrade.
 
5870x2 looks drool-worthy. LOL.
With 4560 peak GFLOPS, it might finally be the video card to tame Crysis.
 
If this is true then I'm buying a 5870x2 even though I didn't plan on it.
 
If it's anywhere near there I'll be making my first "You spent HOW MUCH on that video card?!" purchase.
 
I'm not believing anything till the day there is a review of the RV870. ATI fooled us once, it may happen again.
 
Yes but it is VERY easy to seriously up the pixel count with double displays and even TH2G. Tri displays are likely to seriously tax any card.
 
I'm not believing anything till the day there is a review of the RV870. ATI fooled us once, it may happen again.

Something tells me they aren't going to screw this one up. Nvidia is nipping at their heels and they seriously can't afford to lose this generation. 32nm is years away.
 
Something tells me they aren't going to screw this one up. Nvidia is nipping at their heels and they seriously can't afford to lose this generation. 32nm is years away.

I agree. This really makes me want to switch video cards over to AMD/ATI. I love AMD chipsets (my beefy compy is using one) and would love to upgrade my beat-around machine (in my sig) to an AMD chipset and get some X-Fire going on with the next-gen cards.

Otherwise I might have to get a....HD4850 just to make you mad!!!1 ahhhhh! j/k

Seriously, I'll get two HD4770's b/c they run really cool; my PSU can't handle everything in my machine + two HD4850's, nor did I like the heat output of my last one. :p
 
The 4770s may drop in price later. AMD wants an excuse to keep selling 4000-series products and clear stock, so my guess is they will keep that part of the fab running flat out. That's just my opinion, though.

In my view ATI is failing with the chipsets. The 785G and its debacle with the lack of 7.1 LPCM show they are being careless on that front.
 
[Image: die-size comparison of G92, G92b, G80, RV770 and GT200]


A ~300 mm² die size isn't really that big, folks.
 
Serious folding potential in ATI terms means... basically what the Nvidia cards are doing now because of the massively higher-clocked shaders. That's still the biggest hill they have to climb. Nvidia cards can sit back and relax and lower core speeds because the shaders are so much faster. So more stream processors... cool. Faster GDDR5... sweet. Better efficiency... all for it. Going to whoop Nvidia's ass? Probably not. My GTX 275 already produces between 3k-4k points in the 9 hours I'm at work.


there is a big discrepancy in FAH.. the ATI cards don't even run the same WUs the nvidia cards get.. nvidia pays out of their asses to FAH to make sure their cards look better.. basically this is the way it goes.. if a WU performs 15% or more slower than what they consider optimal.. then it's thrown at the ATI cards.. also the proteins the ATI cards run are 2-3 times larger than the WUs the nvidia cards work on, because of the advantage the ATI card has with the 5-way shaders.. they can fold larger proteins than the nvidia cards can.. though the ATI shaders are much slower.. so while the points don't benefit the end user.. the ATI cards actually help FAH more than the nvidia cards do.. but blame the Pande Group for this.. the points differences have nothing to do with ATI itself nor the cards.. it has everything to do with nvidia and the Pande Group..

it's actually quite depressing that ATI was the first company to go in and support F@H with the Radeon cards.. and the Pande Group just basically dumps them from the whole thing all because nvidia offered more money.. and anyone that claims nvidia isn't paying them is full of crap.. because it's quite obvious just looking at it.. the ATI side of the GPU2 client hasn't changed at all since the GPU2 client was released.. while the nvidia side has jumped leaps and bounds over what it was in its first release.. not to mention the points system is skewed toward nvidia cards anyways..


but really where this new 5k series will show its true potential is in rendering..
 
there is a big discrepancy in FAH.. the ATI cards don't even run the same WUs the nvidia cards get.. nvidia pays out of their asses to FAH to make sure their cards look better.. basically this is the way it goes.. if a WU performs 15% or more slower than what they consider optimal.. then it's thrown at the ATI cards.. also the proteins the ATI cards run are 2-3 times larger than the WUs the nvidia cards work on, because of the advantage the ATI card has with the 5-way shaders.. they can fold larger proteins than the nvidia cards can.. though the ATI shaders are much slower.. so while the points don't benefit the end user.. the ATI cards actually help FAH more than the nvidia cards do.. but blame the Pande Group for this.. the points differences have nothing to do with ATI itself nor the cards.. it has everything to do with nvidia and the Pande Group..

it's actually quite depressing that ATI was the first company to go in and support F@H with the Radeon cards.. and the Pande Group just basically dumps them from the whole thing all because nvidia offered more money.. and anyone that claims nvidia isn't paying them is full of crap.. because it's quite obvious just looking at it.. the ATI side of the GPU2 client hasn't changed at all since the GPU2 client was released.. while the nvidia side has jumped leaps and bounds over what it was in its first release.. not to mention the points system is skewed toward nvidia cards anyways..


but really where this new 5k series will show its true potential is in rendering..

It's always nice to see someone make something up and then be "owned" with proper information.

http://foldingforum.org/viewtopic.php?f=51&t=10442

Do yourself a favor and READ instead of spreading FUD. NVIDIA pays no one for F@H. It just so happens that their architecture is much better for GPGPU.
 
What exactly was that thread supposed to show?

It said nothing about general-purpose calculations, only about the specific ways that the folding client uses the nvidia hardware to its advantage, and that since ATI's hardware works differently it cannot take advantage of the same method.

Nothing about which is better or a refutation of what sirmonkey posted.
 
What exactly was that thread supposed to show?

It said nothing about general-purpose calculations, only about the specific ways that the folding client uses the nvidia hardware to its advantage, and that since ATI's hardware works differently it cannot take advantage of the same method.

Nothing about which is better or a refutation of what sirmonkey posted.

Since I don't expect much from your reading skills, I'll humor you and post the very obvious info in the link I posted, which addresses almost everything you "said" it didn't:

This is what sirmonkey1985 wrote and I'm highlighting the fictional info he posted:

sirmonkey1985 said:
there is a big discrepancy in FAH.. the ATI cards don't even run the same WUs the nvidia cards get.. nvidia pays out of their asses to FAH to make sure their cards look better.. basically this is the way it goes.. if a WU performs 15% or more slower than what they consider optimal.. then it's thrown at the ATI cards.. also the proteins the ATI cards run are 2-3 times larger than the WUs the nvidia cards work on, because of the advantage the ATI card has with the 5-way shaders.. they can fold larger proteins than the nvidia cards can.. though the ATI shaders are much slower.. so while the points don't benefit the end user.. the ATI cards actually help FAH more than the nvidia cards do.. but blame the Pande Group for this.. the points differences have nothing to do with ATI itself nor the cards.. it has everything to do with nvidia and the Pande Group..

it's actually quite depressing that ATI was the first company to go in and support F@H with the Radeon cards.. and the Pande Group just basically dumps them from the whole thing all because nvidia offered more money.. and anyone that claims nvidia isn't paying them is full of crap.. because it's quite obvious just looking at it.. the ATI side of the GPU2 client hasn't changed at all since the GPU2 client was released.. while the nvidia side has jumped leaps and bounds over what it was in its first release.. not to mention the points system is skewed toward nvidia cards anyways..

Info in the link I gave:

The performance benchmarks are for an Nvidia GTX 280 and an ATI 4870. On paper, the 4870 has the advantage with higher theoretical peak FLOPS, but for this folding implementation the GTX 280 is

* 100% quicker for small proteins (~500 atoms)
* 40% quicker for medium (~1200 atoms, the largest we are currently folding)
* 20% quicker for large proteins (~5000 atoms)

This is despite the ATI card doing up to twice as many FLOPS during the calculations.

The primary architectural difference seems to be that Nvidia can store intermediate results in fast short-term memory (like a cache, but managed by the program rather than the hardware) while ATI cannot. For the ATI implementation it is quicker to repeat the calculations than to store to and retrieve from the GPU main memory.

Paper from where this is taken @ http://www3.interscience.wiley.com/cgi-bin/fulltext/121677402/HTMLSTART?CRETRY=1&SRETRY=0

There are no payoffs... just an architecture that has many advantages for GPGPU calculations; this is but one of them.
There's also no skewing of points, just that, given the architectural advantages for some of the calculations, the NVIDIA cards produce more of them because they are faster.

Both implementations are subject to change and, obviously, neither is taking full advantage of the hardware's capabilities, as described in the paper after the "ATI implementation" and "NVIDIA implementation" sections.

There's more info in the thread obviously, some of it from an AMD development team member as well, and he points to the paper above. Do read it; it's good to read valuable info written by those who actually know what they're talking about, instead of creating untrue and unfounded conspiracy theories...
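To make the store-vs-recompute point from the quoted explanation concrete, here's a minimal CUDA-style sketch (a toy pairwise kernel, not the actual GPU2 folding code; force_term is a made-up stand-in). The first kernel stages data in the program-managed __shared__ memory the paper refers to; the second has no scratchpad and just hits global memory every time:

[code]
#include <cstdio>
#include <cuda_runtime.h>

#define N    1024
#define TILE 128

// Stand-in for an expensive per-pair calculation (purely illustrative).
__device__ float force_term(float a, float b) {
    float d = a - b;
    return d * d;
}

// Strategy the paper credits to the CUDA path: stage a tile of partner values
// in __shared__ memory (a program-managed cache), so each slow global-memory
// load is reused TILE times by the whole thread block.
__global__ void accumulate_shared(const float* pos, float* out) {
    __shared__ float tile[TILE];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float xi = pos[i];
    float acc = 0.0f;
    for (int base = 0; base < N; base += TILE) {
        tile[threadIdx.x] = pos[base + threadIdx.x];  // one global load per thread
        __syncthreads();
        for (int j = 0; j < TILE; ++j)
            acc += force_term(xi, tile[j]);           // partners come from fast shared memory
        __syncthreads();
    }
    out[i] = acc;
}

// What the ATI path was effectively limited to at the time: no program-managed
// scratchpad for intermediates, so every partner value is fetched from global
// memory again (or simply recomputed), trading bandwidth/ALU work for simplicity.
__global__ void accumulate_no_scratchpad(const float* pos, float* out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float xi = pos[i];
    float acc = 0.0f;
    for (int j = 0; j < N; ++j)
        acc += force_term(xi, pos[j]);                // repeated global-memory traffic
    out[i] = acc;
}

int main(void) {
    float *pos, *out;
    cudaMallocManaged(&pos, N * sizeof(float));
    cudaMallocManaged(&out, N * sizeof(float));
    for (int i = 0; i < N; ++i) pos[i] = (float)i;

    accumulate_shared<<<N / TILE, TILE>>>(pos, out);
    cudaDeviceSynchronize();
    printf("shared-memory kernel : out[0] = %f\n", out[0]);

    accumulate_no_scratchpad<<<N / TILE, TILE>>>(pos, out);
    cudaDeviceSynchronize();
    printf("no-scratchpad kernel : out[0] = %f\n", out[0]);

    cudaFree(pos);
    cudaFree(out);
    return 0;
}
[/code]

Same math either way; the only difference is where the intermediate data lives, which is exactly the architectural point being made.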
 
Nope, you're rambling on but not making any headway here.

I read the thread and it had nothing to do with what sirmonkey was going on about; it did not address those points directly, nor did it have anything to do with GPGPU in general.

Again, it was a specific part of folding@home and the way it works and how it utilises a certain aspect of the nvidia architecture.

You are going to have to explain how what that link was on about had anything to do with the conspiracy theory perpetuated by sirmonkey.
 
Nope, you're rambling on but not making any headway here.

I read the thread and it had nothing to do with what sirmonkey was going on about; it did not address those points directly, nor did it have anything to do with GPGPU in general.

Again, it was a specific part of folding@home and the way it works and how it utilises a certain aspect of the nvidia architecture.

You are going to have to explain how what that link was on about had anything to do with the conspiracy theory perpetuated by sirmonkey.

Wow, you really do have a hard time reading, but I'll humor you one last time for good measure, just to make sure you and sirmonkey don't deceive anyone else. Actually, everyone has most likely already read the links and knows how much FUD was spread by sirmonkey.

Here we go:

1) sirmonkey said that NVIDIA was paying to get better F@H performance and that the points system is skewed towards NVIDIA.

The link I posted shows that they get better performance because there are certain architectural advantages to NVIDIA's products that just can't be matched on ATI's hardware, which is reflected in the final points and thus better overall performance in that application.

2) sirmonkey said that ATI works on much larger proteins than NVIDIA cards, because they are oh so superior.

The link I posted contains benchmark results taken from a paper written by those working on this project (who know what they are talking about). The results show otherwise, and they include those large proteins, where NVIDIA's hardware is still faster, although the difference isn't as big as with other proteins.

Here's the chart: http://www3.interscience.wiley.com/cgi-bin/fulltext/121677402/main.html,ftx_abs#SEC1-4

3) It's not a specific part of Folding@Home. It's how the software uses the hardware to achieve better performance, which sirmonkey claimed was because NVIDIA was paying someone, and that is simply false. There are trade-offs to the methods used in both companies' products, but NVIDIA's are just more efficient for this sort of calculation.

And here's another quote from the paper:

The NVIDIA implementation of the kernel was first meant to be a rough port of the existing ATI code into CUDA, a C-like language present on all NVIDIA GPUs from the 8xxx series onward. In general, the CUDA implementation followed the ATI implementation outlined above. However, it immediately became clear that exploiting architectural features of CUDA allowed for significantly more efficient execution, with differences from the ATI implementation as detailed below.

Link (NVIDIA Implementation): http://www3.interscience.wiley.com/cgi-bin/fulltext/121677402/main.html,ftx_abs#SEC1-3

4) It didn't have to do with GPGPU in general? You are unreal, because you come to this discussion without even knowing that the Folding@Home client that takes advantage of GPUs is a GPGPU application...
 
No, folding is a specific application which does not reflect all gpgpu applications.

Of course it is a specific part of the way it works; it's not a general aspect of Folding@Home, just a method by which it uses nvidia hardware.

It said nothing of ATI being inherently slower, and indeed mentioned how both ATI and nvidia have more potential, but it does not say anywhere that nvidia is without doubt better.

It also does not mention anything of funding.

When they mention original code, they mean original in that the code was written for the X1900, so of course the 8800s were faster and, with the new architecture, more suited, but that has nothing to do with ATI GPUs from the 2000 series upwards.

What changes they have made I do not know, and I don't think you know either. I think you are just saying "ooh, look, they know what they are talking about." That I am not disputing; it is you who I think does not know what he is talking about and is interpreting things to suit your stance.

If you can refute the claims of sirmonkey with evidence, fine, but a report which mentions but does not deal with sirmonkey's claims is not acceptable; indeed, if his claims were true you would expect such a report to be written exactly as you claim it is.
 
I was basically able to summarize the reasoning like this:

Nvidia has an advantage in F@H with their existing architecture due to how the shaders locally store memory. It has nothing to do with how much horsepower each GPU is capable of (of which AMD theoretically has much more). It's simply a difference in how they go about what they do. In the case of Nvidia's architecture, it's much faster. It doesn't make it inherently better for all GPGPU calculations. It's more like a freak accident that it just happens to be really fast for Nvidia.

The thread also brings up a good point: GPUs are capable of so many calculations per clock that it's borderline ridiculous. There comes a point where it's actually faster for GPUs to recalculate certain equations than it is to store and retrieve that information from memory. It's quite possible that future code will greatly increase the speed of AMD GPU computing, but unlike CPU computing, GPU computing is still largely an "exercise in research".
 
I was basically able to summarize the reasoning like this:

Nvidia has an advantage in F@H with their existing architecture due to how the shaders locally store memory. It has nothing to do with how much horsepower each GPU is capable of (of which AMD theoretically has much more).

How many times do I have to point to the same link? Did you not read the "deceptive FLOPS" posts that are mentioned in that thread too?

Theoretical "horsepower" does NOT translate into better-performing hardware, and that's not just true for this situation. It's true for games too. The HD 4890 is theoretically capable of producing over 1.2 TeraFLOPS of computational power, yet it's not faster than a GTX 285 in games, which is just a bit above 1 TeraFLOP.
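Just to put numbers on that, the paper-spec arithmetic (using the commonly cited figures: 800 SPs at 850 MHz counting a multiply-add as 2 FLOPs for the HD 4890, and 240 shaders at 1476 MHz counting the dual-issue MAD+MUL as 3 FLOPs for the GTX 285) works out roughly like this -- theoretical peaks only, which is the whole point:

[code]
#include <stdio.h>

/* Theoretical peak FLOPS only -- these do not translate directly into delivered
   performance, which is the argument here. Specs are the commonly cited ones. */
int main(void) {
    double hd4890 = 800 * 2 * 0.850;   /* VLIW5 SPs x MAD x core GHz   -> ~1360 GFLOPS */
    double gtx285 = 240 * 3 * 1.476;   /* SPs x MAD+MUL x shader GHz   -> ~1063 GFLOPS */

    printf("HD 4890 peak : %.0f GFLOPS\n", hd4890);
    printf("GTX 285 peak : %.0f GFLOPS\n", gtx285);
    return 0;
}
[/code]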

Another quote from the link:

As such, it's not easy to answer the question in the opening post about useful FLOPS... not all FLOPS are created equally, or even calculated equally. Vijay made a post about "deceptive" FLOPS a while back. Search for it. Since then, PG has taken steps to better represent FLOP counts, but nothing perfect.

It's quite irrelevant how much "horsepower" the hardware has, if there's no efficient way of using it. That's what's explained in that link. Read it sometime.

Ranari said:
It's simply a difference in how they go about what they do. In the case of Nvidia's architecture, it's much faster. It doesn't make it inherently better for all GPGPU calculations. It's more like a freak accident that it just happens to be really fast for Nvidia.

A "freak" accident?... I can't even begin to explain how that's the most pathetic excuse I've read concerning these matters...

Seriously, is it so hard to acknowledge that NVIDIA's architecture DOES have advantages over ATI's, especially when the people that wrote the software say so and it is QUITE clearly explained in the paper?
 
In all honesty, 1200 stream processors doesn't seem like that big of an upgrade for this card. Keeping it to a small die on the same kind of process makes it seem aimed at lower power, and hopefully it won't require as crazy a cooling solution.

I can't wait to see what both sides bring out next gen. The way games are going, with developers not going crazy on the graphics, I'm more interested in lower profile and lower power while maintaining good gaming.
 
300 mm² is not a tiny die. These are going to be powerful cards with a great many transistors. Make no mistake, these are going to push the standard further.
 
I can't wait to own a 5870.

Then, if it sucks, own a 380. :D



Great time to be buying graphics hardware. It is truly unreal.
 
300 mm² is not a tiny die. These are going to be powerful cards with a great many transistors. Make no mistake, these are going to push the standard further.

While I don't disagree that these will push the envelope more, I don't see them being much of a "powerhouse." With the die being 20% larger than the 4870's and having 50% more shaders, I see these as a decent upgrade, but aimed more towards being cost-friendly, both for the consumer and to produce.

What are the rumors on the clocks for these things? If the 4890 can hit 950+ MHz rather easily, will they hit 1 GHz?

I'm excited to see what new stuff ATI brings to the table.
 
they should be able to hit 1 GHz.. since they are coming stock at 950 MHz on the 5870 X2.. just look at the 4770 overclocks.. those GPUs are test versions of what will be in the 5k series..
 
so 1200/5 = 240 SPs.. not bad.. now if only the 5-way shaders ATI uses actually gave 5 times the performance.. then they would have one killer card.. can't wait to see the 5k series come out though.. I really want to get rid of my 8800GTs but don't want another nvidia card..

but the one big important number is the GFLOPS.. 2.16 TFLOPS for the 5870 and 4.56 TFLOPS for the 5870 X2 is just friggin madness!!! I really hope these numbers are true and ATI can actually take advantage of it..

also note the rumored memory speeds.. if those are true.. then they might be using the new 5 GHz GDDR5 chips.. if so then there will be some crazy overclocking going on..

5-way shaders will work wonders under DX11 :)

This is why nVIDIA moved away from a simple scalar architecture to a MIMD architecture for the GT300.
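A rough illustration of why that "5x" is hard to realize today (toy C, not ATI's actual ISA): a VLIW5 unit only hits its peak when the compiler can find five independent scalar operations to bundle into one wide instruction. A lot of graphics math looks like the first function below; dependent chains like the second leave most lanes idle.

[code]
#include <stdio.h>

/* Packs well: the four lerps below don't depend on each other, so a VLIW5
   compiler can co-issue them (plus one more op) in a single wide instruction. */
void independent_ops(const float a[4], const float b[4], float t, float out[4]) {
    out[0] = a[0] + t * (b[0] - a[0]);
    out[1] = a[1] + t * (b[1] - a[1]);
    out[2] = a[2] + t * (b[2] - a[2]);
    out[3] = a[3] + t * (b[3] - a[3]);
}

/* Packs poorly: each step needs the previous result, so only one of the five
   lanes does useful work per instruction and the theoretical peak is unreachable. */
float dependent_chain(float x) {
    float y = x * x + 1.0f;
    y = y * y + 1.0f;
    y = y * y + 1.0f;
    return y;
}

int main(void) {
    float a[4] = {0, 0, 0, 0}, b[4] = {1, 2, 3, 4}, out[4];
    independent_ops(a, b, 0.5f, out);
    printf("lerp: %.1f %.1f %.1f %.1f  chain(2): %.2f\n",
           out[0], out[1], out[2], out[3], dependent_chain(2.0f));
    return 0;
}
[/code]

Whether DX11 workloads actually pack better is the open question, but that's the mechanism being bet on.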
 
If that is the case, just how advanced was the R600 for its time?

I mean how much groundwork did they lay down when designing it?
 
If that is the case, just how advanced was the R600 for its time?

I mean how much groundwork did they lay down when designing it?

The R600 was an amazing architecture. My thought is that, at the time, the manufacturing process wasn't at a point where AMD/ATI could do it justice. As we've seen, it scales quite nicely. The only real fault I had with it was the sole focus on shaders while disregarding the ROPs. Once they worked that out, they showed us why we were so hyped up for that chip.
 
5-way shaders will work wonders under DX11 :)

This is why nVIDIA moved away from a simple scalar architecture to a MIMD architecture for the GT300.


we can only hope the 5-way shaders do better.. but it would really be nice if AMD would just unlock the shader clock so we could push it as far as possible instead of having it tied directly to the core clock..
 