More Pipelines or Higher Core Frequency ???

  • ATi, Higher Core Frequencies...

    Votes: 38 21.5%
  • nVidia, More Pipelines...

    Votes: 139 78.5%

  • Total voters
    177
sirholio said:
I know that, my post wasn't completely alluding to your poll.

I voted for clock frequencies btw. There really are a lot of things to look at overall besides those two.
I'm not the OP, and I don't know if I even read what you posted, or if I did, I didn't look at who posted it. :D That's ok though.
 
More pipes! GPU tasks are some of the easiest to run in parallel, and pipes are the best way to do it. By picking an easily attainable clock speed for their process they can keep power low and save a lot of work in building a tight-tolerance, high-speed clock tree on the chip. As you keep pushing the clock up, clock uncertainty eats up more of each cycle, which eventually kills your efficiency. Effective yield can go up with the many-pipe approach as well, if you can turn off the bad pipes and sell the dies anyhow; your chance of being able to do this rises with the number of pipes. The two approaches should be about equal for overclocking, since you're simply taking advantage of the headroom in the timing budget, which should be a similar percentage either way. I'd expect the number of pipes to be constrained by the largest die you think you can get reasonable yield on, or can cool easily at your target frequency. I wouldn't expect the pipe count to go up every generation; it might even go down depending on the properties of future processes. Keeping each pipe fed may be a challenge too, although they can often operate on significantly overlapping data sets, I'd imagine.
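The yield-salvage point can be sketched with a toy binomial model (all numbers are illustrative assumptions, not real foundry data): if each pipe independently survives fabrication with probability p, being able to fuse off bad pipes and sell a big die as a smaller part raises the fraction of sellable dies considerably.

```python
# Toy model of the pipe-salvage yield argument (illustrative numbers only):
# each pipe survives fabrication independently with probability p, and a
# die is sellable if at least k pipes work.
from math import comb

def sellable_fraction(n_pipes, k_needed, p_pipe_ok):
    """P(at least k_needed of n_pipes are defect-free), binomial model."""
    return sum(comb(n_pipes, k) * p_pipe_ok**k * (1 - p_pipe_ok)**(n_pipes - k)
               for k in range(k_needed, n_pipes + 1))

p = 0.93  # assumed per-pipe survival probability (made up for illustration)
# All-or-nothing: every pipe on a 16-pipe die must work.
print(f"16/16 pipes required: {sellable_fraction(16, 16, p):.1%}")
# Salvage: ship 24-pipe dies as 16-pipe parts by fusing off bad pipes.
print(f"16/24 pipes required: {sellable_fraction(24, 16, p):.1%}")
```

The salvage case is dramatically better, which is the "effective yield goes up" claim in the post.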
 
I voted for clock frequencies. I see ATI's move to 90nm cores and a new memory architecture as a good step forward, and ATI should be able to add more pipes relatively easily compared to nvidia's next move to 90nm cores. Nvidia's cards have been out a while and are priced nicely, though.
 
vmerc said:
I'm not the OP, and I don't know if I even read what you posted or if I did I didn't look at who posted it. :D That's ok though.


HAHA, you know, you're right... looks like I wasn't totally paying attention.
 
chrisf6969 said:
X1800 vs 7800 is turning into a Pentium 4 vs Athlon 64 like fight

P4 less work per clock, but higher clocks to keep up ("radical new design")
A64 (or Pentium M) slower clocks, more work per clock, more traditional design

It just kinda remains to be seen if the 7800 is more like a Pentium 3, Pentium M or AthlonXP or Athlon64 vs. the X1800 being a Pentium 4. It will take some time to tell as drivers pan out & newer games are released.


Ramping up clock speeds has, I think, been shown to eventually hit a wall. It's become obvious that they need to use the process shrink to make a bigger (wider) chip that can do more parallel processing, like dual cores, etc., b/c you can't keep shrinking the die AND going faster, b/c you end up with a tiny piece of the sun (hot enough for fusion!) in your PC.
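The "tiny piece of the sun" intuition follows from the standard CMOS dynamic-power relation, roughly P = C·V²·f. A quick sketch with purely illustrative numbers shows why power climbs much faster than clock speed once reaching the higher clock also requires a voltage bump:

```python
# Back-of-envelope for the "can't just keep clocking higher" point.
# CMOS dynamic power is roughly P = C * V^2 * f, and reaching a higher f
# usually also requires a higher supply voltage V, so power grows much
# faster than the clock. All numbers below are illustrative assumptions.

def dynamic_power(c_farads, v_volts, f_hz):
    return c_farads * v_volts**2 * f_hz

C = 1e-9            # assumed effective switched capacitance (illustrative)
base = dynamic_power(C, 1.2, 430e6)
# 45% higher clock, plus the voltage bump often needed to reach it:
fast = dynamic_power(C, 1.4, 625e6)
print(f"power for +45% clock: {fast / base:.2f}x")  # well above 1.45x
```

The power ratio roughly doubles for a 45% clock gain, while adding pipes at a fixed clock and voltage scales power only about linearly with the extra logic.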

Quoting myself,

b/c I forgot to mention that two gens ago it was the other way around: 9700/9800 = more pipes, 5800/5900 = fewer pipes/fancier design/higher clocks.
 
vmerc said:
Ugh. Guys it's not an ATI vs. Nvidia poll.
It may not have been intended as an ATI vs. nVidia poll, but because it was worded with ATI and nVidia at the front of each choice, it will easily become one.
 
Unoid said:
600mhz default gpu on a 7800gtx refresh?

not likely even if they went to a 90nm process.

If the chip doesn't change and goes to .09 microns they definitely will be able to clock that high, just from the use of low-k or SOI, and they'll still have clocking capability left with the process drop.

I picked more pipes, but I think a combination of higher clocks and more pipes is the best way to go, just like what nV did with the G70. (Although they didn't clock the G70s much higher than the NV40s, they do have a lot of clockability.)
 
I say whatever works: the fewer pipes, the fewer transistors used for pipes = the more transistors available for hardware-based features. Either way, whatever works, works. It's as impressive for ATi to keep up with or beat the GTX with only 16 pipes as it is for the GTX to keep up with the XT at only a 400mhz clock speed.
 
Going off of Fidel's point,

it's just a fact that being more efficient > raw stats:

16 pipes keeping up with 24 pipes,

or 420mhz keeping up with 625mhz.

No one cares what the stats are anymore, just how it actually performs (something the [H] preaches). Whatever it was that these two companies did, it worked for them, and it worked well.
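The comparison above largely boils down to raw pixel fill rate, which is approximately pipes × clock. A quick sketch using the rough clocks quoted in this thread (everything else, like shader throughput and memory bandwidth, deliberately ignored):

```python
# Rough fill-rate math behind "16 pipes keeping up with 24 pipes":
# raw pixel throughput is approximately pipes * clock, so a 16-pipe part
# at a high clock can land right next to a 24-pipe part at a lower clock.
# Clocks are the approximate figures quoted in this thread.

def fill_rate_mpix(pipes, clock_mhz):
    return pipes * clock_mhz  # megapixels/s, ignoring everything else

x1800xt = fill_rate_mpix(16, 625)   # 16 pipes at a high clock
gtx_7800 = fill_rate_mpix(24, 430)  # 24 pipes at a lower clock
print(x1800xt, gtx_7800)  # within a few percent of each other
```

Which is exactly why neither stat alone says much: the products of the two stats are nearly equal.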
 
{NG}Fidel said:
I say whatever works: the fewer pipes, the fewer transistors used for pipes = the more transistors available for hardware-based features. Either way, whatever works, works. It's as impressive for ATi to keep up with or beat the GTX with only 16 pipes as it is for the GTX to keep up with the XT at only a 400mhz clock speed.

Excellent point. The cool thing is that, right now, you can go either way and be perfectly happy. The price is pretty much the only difference, and even that appears to be diminishing (prices are getting closer together). I'm running a 7800GT right now, and love it. I think I'd be pretty happy with an X1800 too. I go with whoever has the best card at the time I want to buy. The last two rounds went to nvidia, the two rounds before that were ATI, and before that was nvidia. I'm fairly certain that I'll want an ATI card again someday, when I go out to buy a card and they happen to be the best available at the time. If I were buying a card right this second, I think I'd have a hard time deciding which one I wanted. Luckily the ATIs weren't out when I just picked up my 7800, so the decision was easy. :)
 
{NG}Fidel said:
I say whatever works: the fewer pipes, the fewer transistors used for pipes = the more transistors available for hardware-based features. Either way, whatever works, works. It's as impressive for ATi to keep up with or beat the GTX with only 16 pipes as it is for the GTX to keep up with the XT at only a 400mhz clock speed.

I think the glaring simple fact here, is that the 7800GTX provides very similar performance to the X1800XT, and yet it draws less power, takes up less room, makes less noise, and produces less heat...and it's available in quantity...

In the end, all that matters is who delivers the best performance for the price, and then after that who does it making less noise and heat...

Clockspeeds and pipelines don't tell the whole story...so it's hard to judge just based on those two things...in the past more pipelines have caused issues (the x800xtpe) and so has high clock speed (fx5800u)...I think what matters is how well all of these things are balanced by any given company to produce an all-around good card...
 
^eMpTy^ said:
I think the glaring simple fact here, is that the 7800GTX provides very similar performance to the X1800XT, and yet it draws less power, takes up less room, makes less noise, and produces less heat...and it's available in quantity...

In the end, all that matters is who delivers the best performance for the price, and then after that who does it making less noise and heat...

Clockspeeds and pipelines don't tell the whole story...so it's hard to judge just based on those two things...in the past more pipelines have caused issues (the x800xtpe) and so has high clock speed (fx5800u)...I think what matters is how well all of these things are balanced by any given company to produce an all-around good card...
Funny how the discussion of IQ never factors into the question. Who cares how fast it is if it doesn't look as the game author intended?

I want the card that gives me the highest possible settings while rendering what the game author intended, not some over-optimized display with poor IQ. I don't want a card that applies AF to only half the screen; I want the entire screen.
 
^eMpTy^ said:
That doesn't make any sense...you could just as easily say nvidia is in a better position for the future since their chip already has 24 pipelines and they don't have to resort to overclocking to get more performance...

What matters is what is available now...and right now the nvidia cards are cooler, quieter, and provide the same performance because they have more pipelines...so I would say that's a win for the more pipelines camp...

They are also cheaper...
 
I just like the fact that ATi made their architecture so much more efficient. The new memory controller and increased memory bandwidth, the smaller 90nm process: it all makes for a core with lots of room to grow.
 
efficiency seems to be smarter than the brute force of 24 pipelines. it's like pitting a mustang against an infiniti g35. horsepower is not always the key to winning.
 
mtts_ultra said:
efficiency seems to be smarter than the brute force of 24 pipelines. it's like pitting a mustang against an infiniti g35. horsepower is not always the key to winning.
You could say the same thing about clockspeed. I find increasing the number of pipes a much more elegant way to improve an architecture than increasing clockspeed. And why do people keep saying that the x1xxx series is more efficient? If it were more efficient than a 7800, it wouldn't be beaten at the same clockspeed and pipe count (which it is, as tested in a Driverheaven article).
 
R1ckCa1n said:
Funny how the discussion of IQ never factors into the question. Who cares how fast it is if it doesn't look as the game author intended?

I want the card that gives me the highest possible settings while rendering what the game author intended, not some over-optimized display with poor IQ. I don't want a card that applies AF to only half the screen; I want the entire screen.

99% of gamers won't notice the subtle differences in image quality...I know you love to talk about how much better ATi's image quality is...and how much smoother the card runs...but the fact is, hardly anyone else can tell a difference...

Is ATi's image quality better in some scenarios? Sure. But is nvidia's bad? Nope. Can you tell the difference without closely analyzing screen shots? Not really.

And who cares if the image quality is slightly better if the cards are slower, louder, and more expensive?
 
^eMpTy^ said:
99% of gamers won't notice the subtle differences in image quality...I know you love to talk about how much better ATi's image quality is...and how much smoother the card runs...but the fact is, hardly anyone else can tell a difference...

Is ATi's image quality better in some scenarios? Sure. But is nvidia's bad? Nope. Can you tell the difference without closely analyzing screen shots? Not really.

And who cares if the image quality is slightly better if the cards are slower, louder, and more expensive?

So you're saying IQ isn't a factor when it comes to video cards?

Also, the X1800XT is on par with the 7800GTX, not slower.
 
I don't care who, but I want a video card that costs 100-200 dollars and gets 60+ FPS in any shooter or similar online game, whether the settings are turned all the way down or up, for at least 2 years!

Now that is a video card. Case in point: my Ti4600 kicked mucho ass and could handle any online game till BF2 came out. BF2 was a god damn wall of moronic garbage. Now the 6600GT is aiming to fill this role; will it complete its mission?

Time will tell....
 
I'm going with more pipelines, because that seems to be doing a pretty good job as opposed to overclocking the piss out of the card.
 
I say pipelines, simply because if you have a good card and any sense of direction-reading ability, you can overclock your core/mem freqs and increase your card's performance further; and as far as I know you can't unlock pipes on the new x1XXX series, and even if you could, it's not guaranteed to work.
 
DASHlT said:
So you're saying IQ isn't a factor when it comes to video cards?

Also, the X1800XT is on par with the 7800GTX, not slower.

The x1800xt isn't out yet, so I can only go by the x1800xl...

And I'm not saying image quality isn't a factor...I'm just saying the difference isn't big enough that most people would pay $100 more for a slower card just to get slightly better image quality...
 
^eMpTy^ said:
And I'm not saying image quality isn't a factor...I'm just saying the difference isn't big enough that most people would pay $100 more for a slower card just to get slightly better image quality...
Easy to say when you have not actually gamed on both ;)

I am willing to pay the $100 for a card that's within 5% speed-wise and has better IQ. It's not just in-game, FYI.
 
^eMpTy^ said:
The x1800xt isn't out yet, so I can only go by the x1800xl...

Why? There are benchmarks showing the XT's performance, and they don't even factor in possible optimizations in future drivers (granted, nVidia may also do this with the 7 series, but I'm really focusing on ATi's new memory architecture)
 
tornadotsunamilife said:
Why? There are benchmarks showing the XT's performance, and they don't even factor in possible optimizations in future drivers (granted, nVidia may also do this with the 7 series, but I'm really focusing on ATi's new memory architecture)


I would just wait for the next gen. ATi does have a bit more this round (well, not really a round, because ATi was so late), but the extra money isn't worth it.
 
USMC2Hard4U said:
More Pipes > More Clock Speed

Look at processors now: we are moving away from high-speed single-core and going to lower-speed multi-core. Sure clockspeed is good, but it's all about how much work is getting done in the end.
Well said, Sir...I happen to agree with this sentiment. :) Unfortunately, it seems, most people are being blinded by the "Megahertz Myth", brainwashed by marketing peeps to believe that more MHz is always better, irrespective of the underlying Arch. :( It's this fundamental difference in thinking between marketing vs. engineering which led Intel to their Doomed (pun intended) NetBurst Arch.

The sad fact is that ATI is following this dangerous trend, with all the fatal consequences that can ensue if a product cycle is not executed successfully. Intel have, in a short time period, managed to turn around and start innovating by announcing a whole new Arch for their next-generation Pentium series...but that was Intel Corporation. I hope ATI will manage to do the same (the upcoming unification of pipes & vertex shaders is a wonderful opportunity to accomplish this), for their own sake. :)
 
While graphics cards are virtually infinitely parallelizable, we're at the same time reaching a point where simply adding more raw fill rate/horsepower does not automatically equal more performance, or at the very least where the performance increases won't scale linearly with the increases in raw power. Phew, I spit it out.

RAM bandwidth efficiency, shader power, power consumption, etc. are all factors in GPU design now. There are considerations in GPU design that we may not even be aware of or fully understand: look at the impact of a fairly simple driver-level tweak to the X1K's memory controller on high-IQ OpenGL performance. On the flip side, look at the 6800GT/Ultra being comparable to the X800/X850, despite often having a deficit in raw clock speed/fill rate of sometimes 20%. Look at the 6600GT outperforming the 9800Pro, despite being limited by a 128-bit bus with less RAM bandwidth.

Brent has said it many times - you just can't tell how capable a GPU is anymore by looking at it on paper. The only way to find out how good it is is to strap it to a bench and see what kind of FPS and IQ it delivers in real games. Raw fill rate certainly isn't helping last gen's 16 pipe cards in F.E.A.R. ...
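The 6600GT vs 9800 Pro example makes the "can't judge on paper" point concrete: going by approximate published specs, each card wins a different column of the spec sheet, so the paper comparison is a draw and only a benchmark can break the tie. A small sketch (figures are approximate published specs, used purely for illustration):

```python
# The "you can't judge a GPU on paper" point, using the 6600GT vs 9800 Pro
# example: each card's spec sheet wins one column, so paper alone cannot
# pick the winner. Figures are approximate published specs.

def fill_rate_mpix(pipes, clock_mhz):
    return pipes * clock_mhz  # megapixels/s

def bandwidth_gbs(bus_bits, mem_mhz_effective):
    return bus_bits / 8 * mem_mhz_effective / 1000  # GB/s

gf6600gt = {"fill": fill_rate_mpix(8, 500),   # higher fill rate...
            "bw": bandwidth_gbs(128, 1000)}   # ...but a narrow 128-bit bus
r9800pro = {"fill": fill_rate_mpix(8, 380),   # lower fill rate...
            "bw": bandwidth_gbs(256, 680)}    # ...but more RAM bandwidth

print(gf6600gt, r9800pro)
# Each card "wins" one column, yet in real games the 6600GT comes out
# ahead, which only benching it can tell you.
```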
 
1c3d0g said:
Well said, Sir...I happen to agree with this sentiment. :) Unfortunately, it seems, most people are being blinded by the "Megahertz Myth", brainwashed by marketing peeps to believe that more MHz is always better, irrespective of the underlying Arch. :( It's this fundamental difference in thinking between marketing vs. engineering which led Intel to their Doomed (pun intended) NetBurst Arch.

The sad fact is that ATI is following this dangerous trend, with all the fatal consequences that can ensue if a product cycle is not executed successfully. Intel have, in a short time period, managed to turn around and start innovating by announcing a whole new Arch for their next-generation Pentium series...but that was Intel Corporation. I hope ATI will manage to do the same (the upcoming unification of pipes & vertex shaders is a wonderful opportunity to accomplish this), for their own sake. :)
The way I see it, ATi has taken the easiest route of simply increasing clock speeds, and sacrificing what could have been awesome performance to make sure they could get their new architecture out the door.
I don't know about the rest of you, but I will congratulate ATi for not aiming for the performance crown (well....much :p), but instead paving the way for newer generations. If you look closely, this architecture in the current X1K series can easily be modified with lower clockspeeds/more pipelines, simply because the underlying architecture is a great step forward into the future.
Oh, and I refuse to vote on the poll. End performance and new technology are what people should base their opinions on, and, if the poll reflected this, I would vote ATi.
 
Am I the only one who thinks it does not matter, so long as the product performs?
 
^eMpTy^ said:
The x1800xt isn't out yet, so I can only go by the x1800xl...

And I'm not saying image quality isn't a factor...I'm just saying that it's not that big of a difference that most people would pay $100 more for a slower card just to get slightly better image quality...

Maybe most people don't go by your opinion. IQ is a huge issue to some people, and a non-factor to others. So saying "most people" is a !!!!!! statement. IQ is a big deal, as well as speed, and IF ati can produce both (which it seems they have), why not go for a superior card with HQ AF/IQ and faster performance? Especially with the new OpenGL hotfix. See those quake4 benchmarks?

http://www.firingsquad.com/hardware/quake_4_high-end_performance/page6.asp

The X1800XL is on par with, and sometimes faster than, a 7800GT in quake4. And the XL has superior IQ compared to the 7800GT. (Plus ATI has just begun tweaking their MC, AND you can buy an X1800XL right now.)

So to me pipes and mhz don't matter. I want a good card with great IQ and great performance. (And of course heat is an issue too.)

But hey, that's my opinion. NOT MOST PEOPLE'S.
 
First, it's good that, aside from a few posts, this poll did manage to stay ATi vs. nVidia free.

Second, after reading other posts I believe I should have added one more choice: neither.
This is because it's true that performance comes from more places than these 2 factors, and both companies have to find other ways to increase performance without increasing either the clocks or the pipes...
 
How about the choice of more pipelines and more mhz? A 32-pipe 7800 GTX with 512 MB of RAM running at 800 mhz, or an X1800XT with 32 pipes and 512 MB of RAM running at 800 mhz, so we can know for sure which company is better.
 
.::MAGE::. said:
Yes, I have to agree the whole situation with both companies has gotten out of hand. Right now the major costs in a system are your graphics card and CPU, which is rather insane, but whatever. They make up around 2/3 of my upgrade costs.

Why is that insane? They are by far the most complicated and costly to develop and manufacture.
 
Just because the item is out there doesn't mean you are going to need it to game.

The 6600 was under $150 when it came out and gave more than enough performance to game on. If you want to build a killer gaming rig then of course it's going to cost a crap ton of money, and the video card has always been one of the major pricing points.
 