Dual Core (not CPUs)!

koopaGG

Is there a good reason why there are no dual-core GPUs? I don't mean SLI or two separate chips on a card, I mean dual-core GPUs like our CPUs. I've noticed we have "SLI cards" which contain two individual GPUs (two separate cores). Anyone know a reason why this hasn't occurred yet? Maybe heat?
 
These would probably end up replacing the idea of two cards linked together, wouldn't they? It would be cool to see this happen.
 
Well, considering one core is pumping out 70C under load and has a lot of transistors already, doubling everything on the core would just cause way too much heat and you would need water. And of course it wouldn't replace SLI; people want to have the most performance.
 
The short answer is that it isn't exactly easy to do.

Now, for a slightly longer answer, but not too long. Multi-core CPUs are fairly easy to link together. You use some sort of connector (crossbar, etc.) and some memory scheme (shared, split, and so on) and figure out how to package it. That's pretty much it. Other than that last part, the packaging, the operation is straightforward and you can draw on the experience of multiprocessor systems, as it's really not any different at a high level. Yes, implementation requires a crowbar and grease, but at its crudest form, that's it. Really, the "dual core" P4s are like that. They're really just two processor dies in one package. They could have just as easily been put into two packages with their connector circuitry placed between them. The AMD X2 and the new Intel Core Duo are much more sophisticated, but at a high level they give the same effect.
All that said, GPUs aren't anything like that. Yes, they're processors, and yes they would be connected with each other in a similar way, but that's where the comparison ends. Don't forget that up until the 7900, a separate circuit was required for SLI. That meant a chip between the GPUs, so that would need to be integrated. The latest series has that circuitry embedded, so that would make it easier. On the ATI side of things, what you would have is two slightly different cores. It would kinda be a master/slave core thingy.
Now, whereas most OSes are written to handle multiple CPUs, the drivers would have to be completely rewritten to accommodate a dual-core GPU setup. Indeed, I imagine it requires a complete architecture change.
Of course, everything I just typed only answers the question of why they can't just start building them. The real answer to your question lies in how long it takes to design and build a new processor (CPU or GPU). Even though the time between new iterations of the major GPUs is 6-9 months, it takes far longer than that to design and build them. Depending on the circumstances, a new design can take upwards of six years from project creation to delivery and can cost billions if it requires a new fabrication plant. Even if you're farming out the fab work, it still requires that SOMEONE build it.

Still, as the job of GPU can easily be split among multiple processors, and dual core/single package is the least expensive way to go, I'm sure that some engineers are busily creating exactly what you're talking about. As one of the others said, we may see one soon.
 
Killa_2327 said:
Is there a good reason why there are no dual-core GPUs? I don't mean SLI or two separate chips on a card, I mean dual-core GPUs like our CPUs. I've noticed we have "SLI cards" which contain two individual GPUs (two separate cores). Anyone know a reason why this hasn't occurred yet? Maybe heat?



Why? Real reason: Money. They can make tons more from us consumers while we wait for the next thing.
 
"SLI" cards with 1 card and 2 gpus already exist, much like the old voodoo 2's with 2 gpus on each card. The heat is an issue but if they can do SLi'd gpus on one card with one heatsink, then a dual core shouldn't be any different. But then again doubling the pipes on one core and just making the core size larger is just about the same thing.......... So, what'd be the point? None that I can really see. Plus with only one gpu there'd be no unnecessary engineering problems/costs for trying to get two gpus to work together correctly (and you know there would be at least a few issues to work out!).

~Adam
 
They are already so parallel. As Brent said, they sort of are multi-core. Think about the "quads" people have talked about. More GPUs on the same card or another card was really the next logical evolution. We'll have to see what they can come up with next.
 
It really depends on how you define things, but for the most part, when you look at a GPU there are multiple "cores" in it already. You've got separate vertex units that feed into separate pixel shader units and pixel shader processors (ALUs), which feed into texture units, which feed into raster operators, and there are multiple units of each component. Then you take batches of those units and put them together in "quads." The whole thing is very parallel.

All you have to do to "double" the GPU is add double the number of vertex, pixel, texture, and raster units. So for example, if you had an 8/24/16 architecture (8 vertex units, 24 pixel units, and 16 raster units) and you wanted to "dual core" that, then you'd just double it to 16/48/32 (16 vertex units, 48 pixel units, 32 raster units). That is a simplified explanation, but you can see that GPUs truly are multi-core processors by nature.

Dual GPU = SLI/CrossFire. THAT is your dual core solution.
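
If you want to put rough numbers on that 8/24/16 example, here's a quick back-of-the-envelope sketch. The 650 MHz clock and the one-op-per-unit-per-clock assumption are made up purely for illustration, not any real part's specs:

```python
# Back-of-the-envelope peak rates for the hypothetical 8/24/16 GPU above
# and its "dual core" 16/48/32 double, at an assumed 650 MHz clock.

def peak_rates(vertex_units, pixel_units, raster_units, clock_mhz):
    clock = clock_mhz * 1e6  # clocks per second
    return {
        "vertices/sec": vertex_units * clock,    # assume 1 vertex per unit per clock
        "pixel ops/sec": pixel_units * clock,    # assume 1 shader op per unit per clock
        "pixels out/sec": raster_units * clock,  # assume 1 pixel written per ROP per clock
    }

single = peak_rates(8, 24, 16, 650)
doubled = peak_rates(16, 48, 32, 650)

for name in single:
    print(f"{name}: {single[name] / 1e9:.1f}G -> {doubled[name] / 1e9:.1f}G")
```

The point is just that every peak number scales linearly with unit count, which is exactly what "doubling the core" buys you.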
 
Rage Fury MAXX ftw! ;)

I presume you mean in that mould? Radeon MAXX has a cool ring to it.
 
And if you think about it, they'd make more money selling two 7900 GTXs than selling a 7900 GTX Duo :p
 
Mayhs said:
And if you think about it, they'd make more money selling two 7900 GTXs than selling a 7900 GTX Duo :p

Actually, it depends on how uber that 7900 GTX is. If they can charge $600 for it for the next year and a half, even when the next three generations come out, then they'd have a winner on their hands... Not everyone will buy into SLI because it has obvious drawbacks, like needing a specific type of motherboard and an "SLI compatible PSU"... or in other words, a good PSU. So that's a whole upgrade right there. If you just have to stick one card in, it's much more feasible. But then again, for the reasons stated above by Brent... dual core would really just be a gimmick anyhow.

~Adam
 
CleanSlate said:
Actually, it depends on how uber that 7900 GTX is. If they can charge $600 for it for the next year and a half, even when the next three generations come out, then they'd have a winner on their hands... Not everyone will buy into SLI because it has obvious drawbacks, like needing a specific type of motherboard and an "SLI compatible PSU"... or in other words, a good PSU. So that's a whole upgrade right there. If you just have to stick one card in, it's much more feasible. But then again, for the reasons stated above by Brent... dual core would really just be a gimmick anyhow.

~Adam

True, but two GPUs would take similar power and would output as much heat, and even if it's worth $800, two 7900 GTXs = $900+?
 
cyks said:

LOL. That was dumb... uhh, that would be SLI. Scalable Link Interface (SLI) is generally associated with a bridge connecting two cards, but it is more than that. If both cores are on the same PCB then no bridge is necessary. It's still SLI. It uses SLI software and SLI hardware, just minus the bridge. Whether or not doing what Asus did in that example increases bandwidth I don't know, but it's still SLI.

And yeah, we have dual GPUs, we don't need to sandwich them, and yeah, heat's a good point... Without going into details, that dude way back said it best. Other than that they're both black and both contain transistors, there are no similarities between CPUs, PPUs, and GPUs.
 
MrWizard6600 said:
LOL. That was dumb... uhh, that would be SLI. Scalable Link Interface (SLI) is generally associated with a bridge connecting two cards, but it is more than that. If both cores are on the same PCB then no bridge is necessary. It's still SLI. It uses SLI software and SLI hardware, just minus the bridge. Whether or not doing what Asus did in that example increases bandwidth I don't know, but it's still SLI.

And yeah, we have dual GPUs, we don't need to sandwich them, and yeah, heat's a good point... Without going into details, that dude way back said it best. Other than that they're both black and both contain transistors, there are no similarities between CPUs, PPUs, and GPUs.

There is still an SLI bridge, it is just built internally on the PCB when it is on a single PCB.
 
Brent_Justice said:
It really depends on how you define things [sic]
Which is why Intel got away with dubbing the EE 840 as dual core, when it technically was not. Let's all just quit while we're ahead.
 
There are a number of reasons why multi-"core" GPUs aren't around, and most have been covered, but the one I think of is production yield. The more transistors you put on a single piece of silicon, the greater the probability that some will not work correctly. We see this today in parts where "pipes," for lack of a better word, are disabled because some transistors in that quad did not work correctly. What we never see are the chips that have bad transistors in some common area that renders the whole chip useless. If you try to stack more "cores" onto one piece of silicon, you're going to have some cases where one or more whole "cores" don't function. This doesn't sound like much of a problem until you think of the fact that 25-50% of the actual chip isn't working, and what this means for temperature gradients across the silicon. Large temperature gradients can crack a core.

I have no doubt that ATI or Nvidia could today generate a chip design that spans an entire 300mm wafer with one ginormous 1000+ shader-unit GPU. The reason they don't (other than sheer size) is fab process error rate and economics.
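
To make the yield argument concrete, here's a tiny sketch using the textbook Poisson yield model, Y = e^(-A*D). The defect density and die areas below are made-up illustration numbers, not real fab data:

```python
import math

# Simple Poisson yield model: Y = exp(-A * D)
# A = die area in cm^2, D = defect density in defects/cm^2.
# All numbers below are made up purely for illustration.

def poisson_yield(die_area_cm2, defects_per_cm2):
    return math.exp(-die_area_cm2 * defects_per_cm2)

defect_density = 0.5      # defects per cm^2 (assumed)
single_core_area = 2.0    # cm^2, an assumed "normal" GPU die
doubled_core_area = 4.0   # cm^2, everything doubled on one die

print(f"single die yield:  {poisson_yield(single_core_area, defect_density):.1%}")
print(f"doubled die yield: {poisson_yield(doubled_core_area, defect_density):.1%}")
# Doubling the die area squares the yield fraction (about 37% -> 14% here),
# which is why one huge die costs far more per good chip than two smaller dies.
```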
 
Why are GPU cores clocked so much slower than CPU cores? I don't think GPUs have cracked the gigahertz mark yet (have they?) and CPUs are well over 2 GHz. Do Intel and AMD just have better technology? If that's true, the video card companies should try and license it.
 
No no, the difference in technologies is what makes the clock speeds so different. CPUs and GPUs are designed way differently.

~Adam
 
Darth Flatus said:
Why are GPU cores clocked so much slower than CPU cores? I don't think GPUs have cracked the gigahertz mark yet (have they?) and CPUs are well over 2 GHz. Do Intel and AMD just have better technology? If that's true, the video card companies should try and license it.

Why clock a part with a high frequency (which means more voltage and heat) when you can just expand by adding more shader units, processing units and pipelines to achieve better performance?

Welcome to Parallelism

CPUs and GPUs are way different, which means clock speed and performance mean something totally different for each. For the parallel work it does, the GPU is far more powerful than the CPU. Clock speed isn't everything; think Athlon vs. P4.
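
A toy comparison shows why the wide-and-slow approach wins for this kind of work; all the unit counts and clocks below are made up for illustration, not real chip specs:

```python
# Toy comparison: a "narrow but fast" chip vs. a "wide but slow" one.
# Throughput ~ parallel units * ops per unit per clock * clock rate.
# All figures below are assumed for illustration only.

def ops_per_second(units, ops_per_unit_per_clock, clock_ghz):
    return units * ops_per_unit_per_clock * clock_ghz * 1e9

cpu_like = ops_per_second(units=1, ops_per_unit_per_clock=3, clock_ghz=3.0)    # few units, high clock
gpu_like = ops_per_second(units=48, ops_per_unit_per_clock=1, clock_ghz=0.65)  # many units, low clock

print(f"narrow/fast: {cpu_like / 1e9:.1f} Gops/s")
print(f"wide/slow:   {gpu_like / 1e9:.1f} Gops/s")
# The 48 slow units beat the 1 fast unit by roughly 3.5x here, without the
# voltage and heat penalty that comes with chasing raw clock speed.
```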
 
Dual GPUs are in the lineup at Nvidia, I think it was called the G80 chipset, but yes, it would have an INTERNAL dual core, none of that "we ghetto stuck two cards together" stuff like those quad SLI setups, which have some serious bottlenecking issues.
 
Tigerblade said:
Rage Fury MAXX ftw! ;)

I presume you mean in that mould? Radeon MAXX has a cool ring to it.


Radeon X1900 XTX MAXX? It might flop on the video card side, but keyboard sales would go through the roof as !!!!!!s everywhere destroy their X keys...
 
truffle00 said:
Radeon X1900 XTX MAXX? It might flop on the video card side, but keyboard sales would go through the roof as !!!!!!s everywhere destroy their X keys...

*LOL* :D
I guess soon we will have to have two "X" keys on our keyboards ;)

Terra...
 
board2death986 said:
Dual GPUs are in the lineup at Nvidia, I think it was called the G80 chipset, but yes, it would have an INTERNAL dual core, none of that "we ghetto stuck two cards together" stuff like those quad SLI setups, which have some serious bottlenecking issues.


So you work for nVidia's research and design team? I suspect that your mention of the rumored G80 having "dual cores" would constitute a violation of your employment contract, and you would be best to get a lawyer and start looking for a new job... otherwise you are talking from your @$$.

As mentioned already time and time again, "multi-core" GPUs already exist and have for some time now, making the current X1900 effectively a "quad core" and the 7800/7900 five to six cores in number. That's as close to multi-core GPUs as we have seen; anything more is pure speculation, and if anyone who actually knew the truth about future nV GPU development were to read such speculative threads, they and their co-workers would get a chuckle out of it at least.
 
FrameBuffer said:
That's as close to multi-core GPUs as we have seen; anything more is pure speculation, and if anyone who actually knew the truth about future nV GPU development were to read such speculative threads, they and their co-workers would get a chuckle out of it at least.

I am willing to bet they do that from time to time ;) *L*

Terra - I know I would :D
 
What a lot of people don't understand is that when others mention a GPU "core," they think it must run like a normal CPU core, when it's already massively parallel, handling engines and quads, since the "core" is already divided into a vertex engine and 4x pixel engines (or 6x in NV's case).

It's already been mentioned by Brent: if you want dual-core GPUs, just wait till everything's doubled.
 