Xbit Labs: Bulldozer up to 50% faster than i7, Phenom II

990FX should have that going by the leaks. There have been plenty of benches done, though, that show 8x PCIe 2.0 lanes per slot are nearly as good, even at very high resolutions. So unless there is some other feature you want on a 990FX board (RAID 5, I think?), you could save some money on a cheaper one and still get within a couple of percent of the 990FX board.

e: dunno if the PCIe controller being on-die or not makes a big difference for gaming performance. Anyway, the bottleneck there these days is the GPU, not the CPU.
 
I tell you what, if this thing has 16x PCIe on each slot for dual VGA, then I will be purchasing this new architecture from AMD rather than stupid Intel, which only gives 8x per slot. I will be waiting with my fat wallet when this comes out, for sure. I don't really care if it's just a little bit slower than SB, but I do hope it will be a lot cheaper than Intel. If not, well, I will wait longer.
Posted today:
http://news.softpedia.com/newsImage/AMD-s-Upcoming-AM3-900-Series-Chipsets-Get-Detailed-3.jpg/

The PCIe controller on AM3+ is still in the chipset, even with BD.

For gaming, you may be pretty disappointed with BD.
 
Why? Because the PCIe controller is on the chipset? Why would that disappoint?
No, because even the most sunshine-y (official) projections don't come anywhere near closing the performance gap between what BD will likely deliver and the far faster SB.
 
I have been reading a lot of good things about AMD and the latest Bulldozer CPU architecture. This architecture is replacing the design that goes back to the 2003 Opteron, so you know it will be very nice.

Intel Fanboys will always be annoying. GG.

So will AMD fanboys, considering they talk so much about pre-release numbers that when the chip finally comes out, it performs nothing like it was hyped up to be (see past releases).

People have little faith because AMD hasn't been able to compete with Intel since the Athlon 64 days, when they were kicking butt.
 
Posted today:
http://news.softpedia.com/newsImage/AMD-s-Upcoming-AM3-900-Series-Chipsets-Get-Detailed-3.jpg/

The PCIe controller on AM3+ is still in the chipset, even with BD.

For gaming, you may be pretty disappointed with BD.



Thanks for the link. I looked at those benches, and with more advanced graphics and multi-monitor setups I think 16x on dual VGA will play a role in gaming as far as I'm concerned. 8x maybe isn't bottlenecked yet, but it soon will be with 2-, 3-, or 4-way SLI in the future. But those are down the line. What concerns me is that Intel releasing SB with just a 100 MHz base clock and no way to overclock it is a shame. What's next, do we need to upgrade our motherboards when Intel releases a 133 MHz base clock, then 166 and 200? Hmm, seems like a conspiracy to me. SB reminds me of when we used to have 33, 66, then 100 MHz FSB; all you had to do was OC it and raise the multiplier before they locked them. Seems like we are going backwards instead of forwards, IMO. For me to upgrade, it has to be reasonable price-wise and at least come close to, or be on par with, SB. Anyway, just my rant.
 
So will AMD fanboys, considering they talk so much about pre-release numbers that when the chip finally comes out, it performs nothing like it was hyped up to be (see past releases).

People have little faith because AMD hasn't been able to compete with Intel since the Athlon 64 days, when they were kicking butt.

By the way, I'm an Intel fanboy, and NVIDIA too. I'm really thinking of switching to the AMD side if Bulldozer shows some good OC headroom, benchmarks, and price, but I will not go AMD for graphics. I like my NVIDIA drivers and graphics cards. I've only owned one ATI card, and that was for my HTPC build; even then I didn't really like it. Why do I need to install the .NET Framework before I can install CCC? For NVIDIA I don't need to do that. When I format or build a new PC, the first thing I like to do is download the GPU drivers so I can see everything and not have to worry about part of a window being hidden because it's so big I can't click Apply or OK. Again, my rant.
 
<troll>any self-respecting ATI buyer uses the Omega drivers...</troll>

Seriously, download the Omega drivers: no .NET. I'm not a fanboy and generally buy the best performance per dollar. Sometimes that's AMD (usually on CPUs, exception the Q6600) and NVIDIA on GPUs (exception onboards)... IMO ATI needs to drop .NET; they would get more buyers if they did.
 
.NET isn't a big deal. Back when it was first released it was kinda buggy, but these days it's perfectly fine.
 
What concerns me is that Intel releasing SB with just a 100 MHz base clock and no way to overclock it is a shame. What's next, do we need to upgrade our motherboards when Intel releases a 133 MHz base clock, then 166 and 200? Hmm, seems like a conspiracy to me. SB reminds me of when we used to have 33, 66, then 100 MHz FSB.
The base clock is nothing like the old FSB. :p It's a reference clock used for several clock domains.

You could always pay an $11 or $23 premium to get the unlocked K version of the chip. That's pretty reasonable compared to the $30-$70 premium AMD charges for the unlocked Black Edition CPUs. Otherwise, you can overclock the non-K versions by 4 speed bins plus turbo. For example, the "non-overclockable" i5-2500 with a 3.3GHz base clock and 3.4/3.5/3.6/3.7GHz turbo (for 4/3/2/1 active cores, respectively) can be overclocked :p to 3.8/3.9/4.0/4.1GHz turbo frequencies on P67 motherboards (at least one H67 board has similar functionality).
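The bin arithmetic above is easy to sanity-check in a few lines of Python (illustrative only; `overclocked_turbo` is a made-up helper, not any official tool):

```python
# Sketch of the Sandy Bridge "limited unlock" math described above:
# non-K chips can raise each per-core-count turbo frequency by up to
# 4 speed bins, where one bin equals the 100 MHz reference clock.

BCLK_MHZ = 100   # Sandy Bridge reference clock
EXTRA_BINS = 4   # limited-unlock headroom on non-K parts

def overclocked_turbo(stock_turbo_ghz, extra_bins=EXTRA_BINS):
    """Raise each per-core-count turbo frequency (GHz) by extra_bins bins."""
    return [round(f + extra_bins * BCLK_MHZ / 1000, 1) for f in stock_turbo_ghz]

# i5-2500 stock turbo for 4/3/2/1 active cores:
print(overclocked_turbo([3.4, 3.5, 3.6, 3.7]))  # [3.8, 3.9, 4.0, 4.1]
```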
 
.NET isn't a big deal. Back when it was first released it was kinda buggy, but these days it's perfectly fine.
IMO bugginess wasn't the problem; early versions just had too many limitations. The 1.x versions were patched to fix bugs, but it wasn't until 2.0 that the Framework libraries and the corresponding language enhancements started to become much more complete. With each .NET release the languages also improve; C# revisions closely mirror major .NET releases. VB.NET had a lot of catching up to do before 3.0, but it is now a first-class language like C#, as far as language features go.
 
Well, that article could be true, but people are looking at it the wrong way. It is comparing an 8-core Bulldozer against an i7 950 (4 cores, 8 threads). Think about it: BD has twice the cores, so on core count alone it should see a 100% jump in throughput. Factor in HT and that drops to around 50%, or may even even the field, and that's against Intel's last generation. So performance per core is NOT in AMD's favor from that point, because, like I said before, SB has already dropped, and even if BD has a 50% lead on a 950, it won't have that against SB. It just makes sense. If BD had been out before the holidays, I would have said the fight was back on. Not now, with the early jump SB got. (Yes, it's all speculation, but the core-vs-core logic still stands.)
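The core-count argument above can be put in rough numbers; note that the 25% Hyper-Threading gain below is an assumed, illustrative figure, not a measurement:

```python
# Back-of-the-envelope version of the argument above: compare the ideal
# aggregate throughput of 8 real cores against 4 cores + Hyper-Threading.

def throughput(cores, perf_per_core=1.0, smt_bonus=0.0):
    """Ideal aggregate throughput; smt_bonus is the fractional SMT gain."""
    return cores * perf_per_core * (1.0 + smt_bonus)

bd = throughput(cores=8)                  # 8 "real" Bulldozer cores
i7 = throughput(cores=4, smt_bonus=0.25)  # 4 cores, ~25% from HT

print(f"BD advantage: {bd / i7 - 1:.0%}")  # 60% under these assumptions
```

With a more generous SMT assumption (say 30%), the same formula shrinks BD's ideal advantage further, which is the poster's point about HT eating into the raw 100% core-count lead.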

An 8-core Bulldozer is not really 8 full cores; it's 4 float cores and 8 integer cores.

http://www.anandtech.com/show/3863/amd-discloses-bobcat-bulldozer-architectures-at-hot-chips-2010/4
 
They're close enough to real cores, and I believe the floating-point unit can operate as two 128-bit units or one big 256-bit unit shared between the cores; time will tell how well this works.

The problem with AMD is they seem to use throughput and speed interchangeably. If they are saying each core is 50% faster than an i7's, then I will be very impressed. However, if they are comparing the throughput of 8 Bulldozer cores against a 4-core i7 (or even a 6-core), then 50% is a step backwards.

I just wish they would release some proper performance numbers, or better yet, let the likes of Anand or Kyle play with one.
 
An 8-core Bulldozer is not really 8 full cores; it's 4 float cores and 8 integer cores.

http://www.anandtech.com/show/3863/amd-discloses-bobcat-bulldozer-architectures-at-hot-chips-2010/4

They are 8 full cores, and those 8 cores each have access to their own FPU. For 256-bit AVX code, the two 128-bit FPUs in a module can be merged to run 256-bit operations. Intel merges its 128-bit FPU with its 128-bit SSE paths to get to 256-bit. Does that make Intel's cores not real cores because they can't run SSE and AVX at the same time? No. Everyone is merging resources to get to 256-bit AVX.

They're close enough to real cores, and I believe the floating-point unit can operate as two 128-bit units or one big 256-bit unit shared between the cores; time will tell how well this works.

The problem with AMD is they seem to use throughput and speed interchangeably. If they are saying each core is 50% faster than an i7's, then I will be very impressed. However, if they are comparing the throughput of 8 Bulldozer cores against a 4-core i7 (or even a 6-core), then 50% is a step backwards.

I just wish they would release some proper performance numbers, or better yet, let the likes of Anand or Kyle play with one.

Benchmarks at launch, that is standard procedure for us.
 
Holy crap, I thought he quit making the Omega drivers; I used to love those back in the 9800 days. Glad I saw that. Going to try them out tonight.

edit: meh, he did quit, it seems. Everything is old as crap and there are no 64-bit Windows drivers at all.
 
Holy crap, I thought he quit making the Omega drivers; I used to love those back in the 9800 days. Glad I saw that. Going to try them out tonight.

edit: meh, he did quit, it seems. Everything is old as crap and there are no 64-bit Windows drivers at all.

To be fair, AMD's drivers aren't so shitty anymore that the Omega drivers are a necessity.
 
To be fair, AMD's drivers aren't so shitty anymore that the Omega drivers are a necessity.

The graphics drivers still suck in Linux, at least compared to NVIDIA. I just avoid them (or Intel GPUs) on any machine that I think will ever need Linux, although few companies care about Linux sales on the desktop.
 
I'm still wary about it.

8 cores vs 4 cores for 50% higher throughput? Clock-for-clock it's still probably slower than Intel.

From what I understand of Bulldozer, each 'core' is not comparable to a normal 'core' as we know it. Normally, each integer core has its own dedicated support hardware (like instruction fetch/decode and L2 cache), but in Bulldozer each integer core shares support hardware with another integer core (images stolen from HotHardware):

Bulldozer Module

8-Core Bulldozer

So each module in BD is two cores sharing a bunch of hardware, instead of each having its own dedicated hardware like in Core 2/Phenom/Sandy Bridge/whatever (images stolen from Techreport):

Traditional Core

Bulldozer Core

I think AMD stated somewhere that a Bulldozer core would only deliver something like 60-70% of the performance of a normal core, but I forget where I read that (I may have hallucinated it). In exchange, though, you get true hardware multi-threading that is completely invisible to the OS (no need to enable/disable anything).

I understood it to be kind of like the Radeons and their 5-way shaders. A Radeon with 1600 shaders does not actually have 1600 full shaders, but rather groups of 1 full shader + 4 small helper shaders. You have to divide the number by 5 to get the number of full shader units, which works out to 1600/5 = 320.
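The shader arithmetic in that comparison is simple enough to write down (a hypothetical helper that just restates the division above):

```python
# Sketch of the VLIW5 counting described above: Radeon's headline shader
# count tallies individual stream processors, which are grouped into
# 5-wide units (1 "fat" shader + 4 small helper shaders per unit).

VLIW_WIDTH = 5

def vliw5_units(stream_processors):
    """Number of independent 5-wide shader units behind a headline count."""
    return stream_processors // VLIW_WIDTH

print(vliw5_units(1600))  # 320
```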
 
From what I understand of Bulldozer, each 'core' is not comparable to a normal 'core' as we know it. Normally, each integer core has its own dedicated support hardware (like instruction fetch/decode and L2 cache), but in Bulldozer each integer core shares support hardware with another integer core (images stolen from HotHardware):

Bulldozer Module

8-Core Bulldozer

So each module in BD is two cores sharing a bunch of hardware, instead of each having its own dedicated hardware like in Core 2/Phenom/Sandy Bridge/whatever (images stolen from Techreport):

Traditional Core

Bulldozer Core

I think AMD stated somewhere that a Bulldozer core would only deliver something like 60-70% of the performance of a normal core, but I forget where I read that (I may have hallucinated it). In exchange, though, you get true hardware multi-threading that is completely invisible to the OS (no need to enable/disable anything).

I understood it to be kind of like the Radeons and their 5-way shaders. A Radeon with 1600 shaders does not actually have 1600 full shaders, but rather groups of 1 full shader + 4 small helper shaders. You have to divide the number by 5 to get the number of full shader units, which works out to 1600/5 = 320.

All I can say is, WOW. You got so little right.
 
Each module has 2 cores. While the cores do share some components, they are real cores in every sense of the word.

The moment the world stepped from single core to dual core, all of the expectations about what a "core" is changed.

Remember when each core had its own memory controller and didn't have to share?

Look at all of the discrete components inside a multicore CPU and you see a ton of them. Some people have claimed that these are not "real" cores because they don't have 100% discrete components, but no processor, save for a few single-core parts, still has that.

The argument becomes where the line is drawn, not if the line is drawn.
 
All I can say is, WOW. You got so little right.

Well, thanks for taking the time to set me straight on that. :rolleyes:

Each module has 2 cores. While the cores do share some components, they are real cores in every sense of the word.

The moment the world stepped from single core to dual core, all of the expectations about what a "core" is changed.

Remember when each core had its own memory controller and didn't have to share?

Look at all of the discrete components inside a multicore CPU and you see a ton of them. Some people have claimed that these are not "real" cores because they don't have 100% discrete components, but no processor, save for a few single-core parts, still has that.

The argument becomes where the line is drawn, not if the line is drawn.

Ah, I see. Thank you much for an actual, informative reply.
 
All this Bulldozer talk has gotten me as excited as in the early days, when CPUs went from the 486DX to the 75 MHz Socket 5 Pentium.

And the old 939 days. ;) This is so cool. :D
 
All this Bulldozer talk has gotten me as excited as in the early days, when CPUs went from the 486DX to the 75 MHz Socket 5 Pentium.

And the old 939 days. ;) This is so cool. :D

The gamers among us who are itching for the release of Bulldozer are probably asking, "Will it play Crysis?" The folders among us are going to be asking, "Will it run bigadv?" Bigadv units are the extra-large work units that can take days to complete. With Bulldozer having up to 8 physical cores, it should have no problem completing a bigadv unit on time, so long as the IPC and stock clock speeds are decent. For now, though, the single-CPU king for folding purposes is Intel's Gulftown.
 
The gamers among us who are itching for the release of Bulldozer are probably asking, "Will it play Crysis?" The folders among us are going to be asking, "Will it run bigadv?" Bigadv units are the extra-large work units that can take days to complete. With Bulldozer having up to 8 physical cores, it should have no problem completing a bigadv unit on time, so long as the IPC and stock clock speeds are decent. For now, though, the single-CPU king for folding purposes is Intel's Gulftown.
As a gamer, I must say that I'm more interested in what real games Bulldozer will be able to handle more efficiently, not a worthless benchmark like Crysis. Just had to get that out there.
 
So will AMD faboys considering they talk so much about pre-release numbers that when the chip finally comes out, it performs nothing like it was hyped up to be (see past releases)

People have little faith cause AMD hasn't been able to compete with Intel since Athlon 64 when they were kicking butt.
What pre-release numbers are you talking about? AMD hasn't released any. If you are referring to the highly suspect, potentially fake numbers that have always leaked out before new AMD CPUs/GPUs over the last few years, it's your own fault for considering them in the first place.
 
As a gamer, I must say that I'm more interested in what real games Bulldozer will be able to handle more efficiently, not a worthless benchmark like Crysis. Just had to get that out there.

Well, it was more or less a rhetorical question.
 
Let's just hope AMD pulls a rabbit out of their hat with Bulldozer. Their processors have sucked since Intel dropped the Conroe bomb on them.
 
Let's just hope AMD pulls a rabbit out of their hat with Bulldozer. Their processors have sucked since Intel dropped the Conroe bomb on them.

How long has it been since then? Five years since Intel first allowed Conroe benches out at IDF?

Time flies...:cool:
 
Let's just hope AMD pulls a rabbit out of their hat with Bulldozer. Their processors have sucked since Intel dropped the Conroe bomb on them.

It's all relative; their answer, the original Phenom, sucked, but the Phenom II has been bang-for-the-buck competitive for the last two years or so. It would still be nice to see something from AMD that could take the screws to a high-end i5 or i7 CPU.
 
I'd love to see the return of the days when I could seriously consider AMD for a top performing rig.
 
I'd love to see the return of the days when I could seriously consider AMD for a top performing rig.

They don't have to be.
Right now they offer a perfectly good CPU that does everything you need for less than the equivalent that Intel is giving us.
No, they're not competing at the $300 or $500 or $1000 price points, but I don't care. I'm not spending that on a CPU, and neither are most people buying computers.

I understand the desire for parity across the board, but AMD has made a business model of competing without it. The Athlon and Athlon 64 created an expectation of equals in the CPU market, but AMD survived for 15 years by providing what Intel didn't at a price that Intel wouldn't.

Xbit Labs: Bulldozer up to 50% faster than i7, Phenom II

Yea, and Jessica Alba has up to a 50% chance of accepting my marriage proposal but I'm not counting on that either.
 
Is G34 Bulldozer still dual die? Or will the big one be 8 Bulldozer modules on a single monster die?
 
Dual die.

It's interesting to think that if AMD is not competitive with Intel at the top bin, then we are somehow in trouble and not competitive at all.

Let's just say that there are 10 slots in the processor stack with that mythical $1000 part that nobody buys at the top and the $29 part that few buy at the bottom.

The 4th through 7th slots in that stack represent 80%+ of the market. So if AMD is competitive in those slots, they are competitive in the market.

Let's say, for instance, that we put all of our effort in beating that top slot. So we come out with a $1200 part that is 20% faster than intel's $1000 part. But in chasing that, we end up adding tons of cost so that everything else in the stack is overpriced.

Now we would have a great advantage in that 0.01% of the market, but where 80% of the market is, we would be overpriced. What happens then? We'd have the top spot, but would we be competitive?

Absolutely not.

The market is not, nor will it ever be, won by the highest-performing part. Take a look at the signatures of the people you see crowing about how important performance is. What parts are they running? Probably something in the 2nd or 3rd slot, definitely not the top.

What is important is beating Intel in the spots where people are actually spending money.
 
What is important is beating Intel in the spots where people are actually spending money.

YES People!!

Please get this through your thick skulls! Only that small percentage needs the thousand-dollar part that JF-AMD mentioned. Most of you gamers out there play at resolutions of 1080p or higher, so the graphics card takes care of the workload. I'm not sure there's a tech site that hasn't shown the overclocked X6 CPU lineup to be sufficient for those needs. Now, I must say, Sandy Bridge being out really puts pressure on AMD. However, if Bulldozer is priced and performs accordingly, AMD has no worries.
 
YES People!!

It's certainly true that most people don't care about or buy the very top-end chips, but to compete effectively you still need a chip that at least gets close to the competition if you're going to sell for a similar price in the mid or low range, which is what AMD needs to do in order to stay in business. Even though they got rid of their fabs, designing chips is still very costly.
 
Yea, and Jessica Alba has up to a 50% chance of accepting my marriage proposal but I'm not counting on that either.

Very bad analogy. We know that Jessica Alba has been happily married for a couple of years, but we still know little about BD.
 
50% higher than an i7 950, isn't that around an i5-2500? And we still have Sandy Bridge for socket 2011 coming. Don't get me wrong, I love these price wars because the consumer gets a better deal. I just want to say: don't be too disappointed if BD doesn't come out as good as many people think.
 
Dual die.
Let's say, for instance, that we put all of our effort in beating that top slot. So we come out with a $1200 part that is 20% faster than intel's $1000 part. But in chasing that, we end up adding tons of cost so that everything else in the stack is overpriced.
But you don't decide in 2008, "let's build a CPU that will be good enough for the bulk of the market in 2011." You build the best CPU you can within the limitations of power/heat, die size, and process technology, and hope for the best. If it's the fastest CPU on the market in 2011, then awesome, you've made a lot of money. If not, it's downhill from there. It's no surprise that historically AMD has made the most money when it held the absolute performance crown and lost the most money when it was least competitive.

Now we would have a great advantage in that 0.01% of the market, but where 80% of the market is, we would be overpriced. What happens then? We'd have the top spot, but would we be competitive?
The economics of x86 manufacturing in the past says this is irrelevant. The costs of making $1200 CPUs are not much different than making $200 CPUs.

The market is not, nor will it ever be, won by the highest-performing part. Take a look at the signatures of the people you see crowing about how important performance is. What parts are they running? Probably something in the 2nd or 3rd slot, definitely not the top.
But you also find most of them purchasing the 2nd or 3rd slot from the side that holds the top slot.

What is important is beating Intel in the spots where people are actually spending money.
Which is fundamentally tougher without top-performing parts, as the other side dictates which spots you can fight in and how much you can charge for your products.
 