Barcelona to have a short market life

spectrumbx

[H]ard|Gawd
Joined
Apr 2, 2003
Messages
1,647
It is clear from AMD's tactics that Barcelona will have a short market life.

This is an embarrassing chapter AMD is trying to put behind it.

The processor is already 6 months late, and it has yet to make it through the standard retail channels.
It is clear that AMD will have to rush out a new architecture, as Barcelona is a bust.
 
It is clear from AMD's tactics that Barcelona will have a short market life.
I don't think so. It might be disappointing on some levels, but AMD has no choice but to stick with it until their next core is ready.
 
What makes you believe it will have a short life? Even if a new architecture were introduced this year, Barcelona and its siblings will be around for a long while before they reach EOL status. Look at how long Socket A endured with all the processor varieties that AMD released for it. I think your judgment is highly premature and unfounded.
 
makes sense, since it will be replaced with the DDR3 version relatively soon.. ;)
 
I was under the impression that the current chips integrated memory controllers were designed for both ddr2 and ddr3.

Correct me if I am wrong.
 
I was under the impression that the current chips integrated memory controllers were designed for both ddr2 and ddr3.

Correct me if I am wrong.

I was under the impression that Barcelona does in fact have a DDR3 memory controller, but it will only be usable on AM3 socket boards. Please correct me if I'm wrong.
 
Really? Isn't it a bit early to call the Barcelona core a flop?

It is late, I'll give you that. I'm confused as to why everyone thinks it will be a bust. I think it just needs a little time to mature. I highly doubt that 2.0GHz will be the fastest chip around for too long.
 
It is clear from AMD's tactics that Barcelona will have a short market life.

This is an embarrassing chapter AMD is trying to put behind it.

The processor is already 6 months late, and it has yet to make it through the standard retail channels.
It is clear that AMD will have to rush out a new architecture, as Barcelona is a bust.

This post is complete and utter garbage.

Yea, it's such an embarrassing chapter, which is why AMD launched Barcelona with celebratory parties in 7 countries. :rolleyes:
 
This post is complete and utter garbage.

Yea, it's such an embarrassing chapter, which is why AMD launched Barcelona with celebratory parties in 7 countries. :rolleyes:

Agreed. Unless the OP has some insight or hands-on experience with a Barcelona core and can contribute something concrete to the forum, I suggest this post be closed before it erupts into something it should not . . .
 
AMD's quad core solution is Brad Pitt to Intel's Frankenstein. And it's not like Intel's 45nm is providing that much computing benefit over 65nm. Obviously it benefits shareholders tremendously. Although, I believe IT managers would be much happier with Opteron and Barcelona over Clovertown and Woodcrest. So I still say better late than never.
 
There's plenty of life in Barcelona. In 2008 there's the 45nm Ridgeback refresh (Shanghai). In 2009, a new core will be out. It was outlined during Analyst Day:

[Image: AMD roadmap slide from Analyst Day]
 
This post is complete and utter garbage.

Yea, it's such an embarrassing chapter, which is why AMD launched Barcelona with celebratory parties in 7 countries. :rolleyes:

You call that a release?! :rolleyes:

When the truth is this hard to pull out, it is clearly not good.

This title sums it up best: Analysis Maybe it's got great products. Maybe it doesn't. It's all maybes

Just face it, there is a fire at AMD right now, and a new architecture is being demanded for near-term release.

Most motherboard makers aren't even rushing to push out BIOS updates to support Barcelona. That's even more telling.
 
though, I believe IT managers would be much happier with Opteron and Barcelona over Clovertown and Woodcrest. So I still say better late than never


Why......
 
Why......

Why not? Power consumption mainly, performance per watt. And Quad-core Barcelona impressively uses a similar amount of power as the Dual-core Opteron. Split power planes will be especially nice.

I'm not trying to say things are rosy at AMD. But times have definitely been more bleak than this, and AMD survived. And they'll survive this too.
 
It's a bit early to be calling this architecture a failure; still, one can only rob Peter to pay Paul for so long...

I very highly doubt 45nm on AMD's end will make any real difference on shipping product until 2009 at the earliest. They don't have the money to ramp up their 45nm fab equipment at a reasonable pace, never mind a fast one.

That is hundreds of millions of dollars at the least, hundreds of millions they simply don't have. Roadmaps from AMD are as good as toilet paper if you can't meet the stated goals on them.
 
Why not? Power consumption mainly, performance per watt. And Quad-core Barcelona impressively uses a similar amount of power as the Dual-core Opteron. Split power planes will be especially nice.
Datacenters (and render/compute farms to a lesser extent) are sensitive to power consumption, but those are not the bulk of server sales. Most people buy performance, not performance per Watt (ask yourself how VIA is doing on CPUs :p).

The split power planes are nice, but it requires a Socket F+ motherboard which I don't think are even shipping yet. So it's back to buying all new hardware instead of a simple CPU upgrade in existing systems to get the full benefits.

And really, AMD is in a different situation than before. Some is the same as the old days (relegated to the low end on the bulk of their product), which was fine when they had lower expenses. They could break even or make a profit back then. AMD has much higher expenses and debt now than it ever had before. It costs AMD more to borrow money now too. Other than selling off remaining interest in Spansion (which is very likely), AMD has no more to hack off that won't seriously affect its business operations. With losses expected over the next several quarters, it's not hard to guess what will happen.
 
Datacenters (and render/compute farms to a lesser extent) are sensitive to power consumption, but those are not the bulk of server sales. Most people buy performance, not performance per Watt (ask yourself how VIA is doing on CPUs :p).

The split power planes are nice, but it requires a Socket F+ motherboard which I don't think are even shipping yet. So it's back to buying all new hardware instead of a simple CPU upgrade in existing systems to get the full benefits.

And really, AMD is in a different situation than before. Some is the same as the old days (relegated to the low end on the bulk of their product), which was fine when they had lower expenses. They could break even or make a profit back then. AMD has much higher expenses and debt now than it ever had before. It costs AMD more to borrow money now too. Other than selling off remaining interest in Spansion (which is very likely), AMD has no more to hack off that won't seriously affect its business operations. With losses expected over the next several quarters, it's not hard to guess what will happen.

You've clearly never even set foot in a server room. When the number of air conditioners the room has is your biggest concern, the next one being the power bill (as is the case in most server farms), then performance per watt matters most...

I run a reasonably small datacenter. I am fortunate enough to only have about 3 dozen machines to look after, and performance per watt is the primary concern I have. I need to keep temps down, and I need to maintain the same performance I have now. I also need to scale performance in the future to meet growing demands. I can't afford a bigger power bill. I don't have room for more air conditioners. But I do have a steadily increasing process load...

I can just imagine the needs of a larger datacenter.
 
You've clearly never even set foot in a server room.
LOL, I don't need to. I have a test server next to my desk and I can remote into the other servers. You bring up datacenters when I started out my reply explicitly mentioning that niche segment of server sales. Do you even read the post before you quote it?

Performance, brand and price are the 3 biggest factors for servers I have seen in over 18 years of working at various companies, including up to this day. To make it clear for you, I am talking about typical server buyers, not datacenters or compute farms. Believe it or not, there is much more to server sales than the keyhole you look through. Most IT managers never see a power bill, myself included when I did that years ago. And most regular server buyers don't care about marketing trying to recast a metric like "performance per watt" (that goes for both AMD and Intel making those claims), especially not making it the top priority for consideration.
 
Most datacenters don't have air conditioners *in* the room; they have a central A/C system. And aren't most datacenters billed a set amount of power a month rather than per kilowatt-hour like your home or small business might be, since most datacenters are based on DC power systems?

I could understand how, in your position, where it sounds like you have some servers running in a room with A/C units in the walls or something, power may be an issue because of a tight budget. But for "real" datacenters, performance per watt will not outweigh raw performance.
 
It is clear from AMD's tactics that Barcelona will have a short market life.

This is an embarrassing chapter AMD is trying to put behind it.

The processor is already 6 months late, and it has yet to make it through the standard retail channels.
It is clear that AMD will have to rush out a new architecture, as Barcelona is a bust.

So we get this ridiculous excuse for a post....

You call that a release?! :rolleyes:

When the truth is this hard to pull out, it is clearly not good.

This title sums it up best: Analysis Maybe it's got great products. Maybe it doesn't. It's all maybes

Just face it, there is a fire at AMD right now, and a new architecture is being demanded for near-term release.

Most motherboard makers aren't even rushing to push out BIOS updates to support Barcelona. That's even more telling.

...and he backs it up with an Inquirer article. Genius!
 
Most datacenters don't have air conditioners *in* the room; they have a central A/C system. And aren't most datacenters billed a set amount of power a month rather than per kilowatt-hour like your home or small business might be, since most datacenters are based on DC power systems?

I could understand how, in your position, where it sounds like you have some servers running in a room with A/C units in the walls or something, power may be an issue because of a tight budget. But for "real" datacenters, performance per watt will not outweigh raw performance.

That may be so, but most datacenters are just like mine. I wouldn't consider my back office any different than most. You go into any place where data needs to be kept safe and there will be a datacenter of similar size to the one that I run. Several dozen servers, in a small room, with several air conditioners...

I would say that setups like mine *far* outnumber the systems in large corporate datacenters. As such, performance per watt is more important for more people.
 
You've clearly never even set foot in a server room. When the number of air conditioners the room has is your biggest concern, the next one being the power bill (as is the case in most server farms), then performance per watt matters most...

I run a reasonably small datacenter. I am fortunate enough to only have about 3 dozen machines to look after, and performance per watt is the primary concern I have. I need to keep temps down, and I need to maintain the same performance I have now. I also need to scale performance in the future to meet growing demands. I can't afford a bigger power bill. I don't have room for more air conditioners. But I do have a steadily increasing process load...

I can just imagine the needs of a larger datacenter.

3 dozen machines (if they're 1U, that's not even a full rack) is a data center? Come now.

But anyways, for every data center that needs performance per watt (google being a prime example), there's also a data center/rendering farm that needs absolute performance. Prime example would be special effects studios. Movies are made on a strict schedule, and most don't give a flying hell about performance per watt (within reason, of course).

Oh and to answer your question from:

http://hardforum.com/showpost.php?p=1031430651&postcount=29

English isn't my first language, but in a multi-lingual world, it doesn't really matter. And yes, your post was the definition of irony.
 
It is clear from AMD's tactics that Barcelona will have a short market life.

This is an embarrassing chapter AMD is trying to put behind it.

The processor is already 6 months late, and it has yet to make it through the standard retail channels.
It is clear that AMD will have to rush out a new architecture, as Barcelona is a bust.

sounds like 939 to AM2 again
 
It's a bit early to be calling this architecture a failure; still, one can only rob Peter to pay Paul for so long...

I very highly doubt 45nm on AMD's end will make any real difference on shipping product until 2009 at the earliest. They don't have the money to ramp up their 45nm fab equipment at a reasonable pace, never mind a fast one.

That is hundreds of millions of dollars at the least, hundreds of millions they simply don't have. Roadmaps from AMD are as good as toilet paper if you can't meet the stated goals on them.
You have to realize that AMD is not developing the technology alone. Instead, AMD and IBM are heavily involved in joint development of the 45nm node and beyond. Of course, nothing is certain, but from what I've seen so far, there is no reason why 45nm shouldn't be online in 2008.
 
3 dozen machines (if they're 1U, that's not even a full rack) is a data center? Come now.

But anyways, for every data center that needs performance per watt (google being a prime example), there's also a data center/rendering farm that needs absolute performance. Prime example would be special effects studios. Movies are made on a strict schedule, and most don't give a flying hell about performance per watt (within reason, of course).

Oh and to answer your question from:

http://hardforum.com/showpost.php?p=1031430651&postcount=29

English isn't my first language, but in a multi-lingual world, it doesn't really matter. And yes, your post was the definition of irony.

And how many rendering nodes are there vs. DB servers? Or terminal servers? Or web servers, or file servers? Eh? Most of the time these machines sit idle. In these situations, which account for the vast majority, performance per watt is the most important. I never said my datacenter was big. However it is, I'm sure, representative of the vast majority of the servers in use today.

What's ironic is that you bring up the importance of a niche market to justify your narrow opinions. Overall performance is not the end-all, be-all. If it were, x86 would never have been popular and the 6800, and later the 68000, would have taken over the market. Or today we'd all be using a PowerPC chip. Price, performance, and power, in that order... Price per performance per power... That is how it has always been.

Besides, you're not going to fit 3 dozen 1U systems in a single 48U rack. Maybe two racks if you cram them in there. I've actually got 6 racks. Most of the systems are 2U, some are 3U, and a few are 4U. Each system is designed for a different purpose. Most of them are terminal servers; they are all 2U. The MySQL servers are all 3U. The web, FTP, and email servers are all 3U, and the backup servers are all 4U. The terminal servers take the most juice; they actually have the largest load. At any one time I might have 100 people working on a particular terminal server. Needless to say, it scales linearly with cores. I could use a low-power quad core for them.
 
That may be so, but most datacenters are just like mine. I wouldn't consider my back office any different than most. You go into any place where data needs to be kept safe and there will be a datacenter of similar size to the one that I run. Several dozen servers, in a small room, with several air conditioners...

I would say that setups like mine *far* outnumber the systems in large corporate datacenters. As such, performance per watt is more important for more people.

Performance per Watt? Intel wins! Read all benchmarks. :p
 
Performance per Watt? Intel wins! Read all benchmarks. :p

Except in all benchmarks where power is actually measured with a meter at the wall. Most software will tell you that Intel uses less power... No surprise there... However, measured readings disagree...
 
Except in all benchmarks where power is actually measured with a meter at the wall. Most software will tell you that Intel uses less power... No surprise there... However, measured readings disagree...
What I can't easily understand is how the benchmarks claim that Intel uses less power when AMD's architecture has an on-die NB and Intel motherboards employ FB-DIMMs, which, based on what I recently read, consume ~5W of extra power per module... :confused:

Are they referring to each processor or the whole platform?
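For context, that ~5W-per-module figure adds up quickly on a fully populated board. A rough back-of-envelope sketch (the 5W number is from the post above; the module counts are made-up examples):

```python
# Rough platform-power estimate of FB-DIMM overhead vs. plain registered DDR2.
# The ~5 W-per-module figure comes from the discussion above; the module
# counts below are illustrative assumptions, not measured values.

FBDIMM_EXTRA_W = 5  # extra watts per FB-DIMM module (buffer chip)

def fbdimm_overhead(num_modules):
    """Total extra platform watts attributable to FB-DIMM buffers."""
    return num_modules * FBDIMM_EXTRA_W

# A hypothetical 2S Xeon board populated with 8 modules:
print(fbdimm_overhead(8))  # -> 40 extra watts, before the CPUs are counted
```

So whether a review reports per-processor or whole-platform power makes a real difference to the comparison.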
 
What I can't easily understand is how the benchmarks claim that Intel uses less power when AMD's architecture has an on-die NB and Intel motherboards employ FB-DIMMs, which, based on what I recently read, consume ~5W of extra power per module... :confused:

Are they referring to each processor or the whole platform?

It is not that Intel uses less power; it is that Intel has more performance per watt.

As far as the best power usage for good performance goes, Intel's Core 2 Duo mobile platform wins.
Anyone truly concerned about power usage should check out the mobile platform.

My ThinkPad T60 paired up with an eSATA drive and 4GB of RAM screams!!!!!!!! :D
This beast runs on a 65 watt power brick, but performs better than many desktops and servers out there.
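The distinction is just a ratio, and the two claims aren't actually in conflict. A toy comparison (all numbers invented purely for illustration):

```python
# Toy performance-per-watt comparison. All figures are invented; the point
# is only that a chip can draw MORE power at the wall yet still win on the
# performance-per-watt ratio.

def perf_per_watt(score, watts):
    return score / watts

chip_a = perf_per_watt(100, 80)   # 1.25 units of work per watt
chip_b = perf_per_watt(150, 100)  # 1.50 units of work per watt

# chip_b draws 20 W more at the wall, but does more work per watt:
assert chip_b > chip_a
```

That's how "Intel uses more power at the wall" and "Intel has better performance per watt" can both be true at once.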
 
You'll have to excuse me, as subtlety doesn't come through over the net very well. Is that sarcasm?

Morfinx, didn't you know that pxc knows more about Barcelona and the inner workings of AMD than anyone else? Surely more than any AMD engineer. ;)

It is not that Intel uses less power; it is that Intel has more performance per watt.

As far as the best power usage for good performance goes, Intel's Core 2 Duo mobile platform wins.
Anyone truly concerned about power usage should check out the mobile platform.

My ThinkPad T60 paired up with an eSATA drive and 4GB of RAM screams!!!!!!!! :D
This beast runs on a 65 watt power brick, but performs better than many desktops and servers out there.

That post is essentially gibberish...
 
You'll have to excuse me, as subtlety doesn't come through over the net very well. Is that sarcasm?

AMD has always had lofty goals when it comes to process nodes.

For instance, 65nm was supposed to enter production by 2H'05 and products out by 1H'06. If AMD is saying 2H'08 for 45nm, there's a good chance that's not going to be true.
 
You have to realize that AMD is not developing the technology alone. Instead, AMD and IBM are heavily involved in joint development of the 45nm node and beyond. Of course, nothing is certain, but from what I've seen so far, there is no reason why 45nm shouldn't be online in 2008.



When wanting to know the future, one should look to the past. History quite usually repeats itself. :) I did realize that AMD and IBM are working together, but thanks for the friendly reminder!

What I believe is that it won't have an effect on the bottom line, other than creating more debt, until 2009 and beyond. AMD has historically been about one node behind for ages. I think that could turn into two nodes by the decade's end. Two years and a few odd months is not so long.
 
Except in all benchmarks where power is actually measured with a meter at the wall. Most software will tell you that Intel uses less power... No surprise there... However, measured readings disagree...

What I can't easily understand is how the benchmarks claim that Intel uses less power when AMD's architecture has an on-die NB and Intel motherboards employ FB-DIMMs, which, based on what I recently read, consume ~5W of extra power per module... :confused:

Are they referring to each processor or the whole platform?

It's the comparison between current QC Xeons vs DC Opterons. QC Xeons absolutely slaughter the DC Opterons in almost every performance/watt server benchmark:

http://anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3095&p=4

Opterons offer better performance per watt in the Dual-Core/2S market for most server applications against Dual-Core/2S Xeons, mainly due to FB-DIMMs and their associated traces on the motherboard. Drop in Quad-Core Xeons (which have been available for almost a year now) and the situation changes entirely. Some might call this comparison unfair, but this is the market situation as of now. Barcelona will probably shift the favor back to AMD. However, with lower-power FB-DIMMs and traces coming by the end of the year, it's anyone's guess.

What's ironic is that you bring up the importance of a niche market to justify your narrow opinions. Overall performance is not the end-all, be-all. If it were, x86 would never have been popular and the 6800, and later the 68000, would have taken over the market. Or today we'd all be using a PowerPC chip. Price, performance, and power, in that order... Price per performance per power... That is how it has always been.

Uh, no. Rendering farms are not a niche market. Traditionally, people didn't give a damn about power. It was always price/performance up until maybe 2-3 years ago, when it changed to performance/watt. But that's more or less marketing BS, because the old mantra of price/performance included TCO (Total Cost of Ownership), which includes your electrical bill over however long you planned on owning it.
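The TCO point is easy to see with a quick calculation (the prices, wattages, and electricity rate below are made-up example values, not real server figures):

```python
# Sketch of TCO = purchase price + electricity over the ownership period.
# All inputs are hypothetical example values for illustration only.

def tco(price_usd, avg_watts, years, rate_per_kwh=0.10):
    """Total cost of ownership: sticker price plus the power bill."""
    hours = years * 365 * 24
    kwh = avg_watts * hours / 1000
    return price_usd + kwh * rate_per_kwh

# A cheaper, hungrier server vs. a pricier, efficient one over 3 years:
cheap = tco(price_usd=2000, avg_watts=400, years=3)
efficient = tco(price_usd=2200, avg_watts=250, years=3)
print(round(cheap), round(efficient))  # -> 3051 2857
```

Under these made-up numbers the efficient box wins on TCO despite the higher sticker price, which is exactly why the electrical bill was always part of the old price/performance mantra.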
 
It's the comparison between current QC Xeons vs DC Opterons. QC Xeons absolutely slaughter the DC Opterons in almost every performance/watt server benchmark:

http://anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3095&p=4

Opterons offer better performance per watt in the Dual-Core/2S market for most server applications against Dual-Core/2S Xeons, mainly due to FB-DIMMs and their associated traces on the motherboard. Drop in Quad-Core Xeons (which have been available for almost a year now) and the situation changes entirely. Some might call this comparison unfair, but this is the market situation as of now. Barcelona will probably shift the favor back to AMD. However, with lower-power FB-DIMMs and traces coming by the end of the year, it's anyone's guess.

At best it's an incomplete comparison, at worst it's disingenuous and faulty. I wouldn't mind seeing Barcelona as a future addendum though.

Uh, no. Rendering farms are not a niche market. Traditionally, people didn't give a damn about power. It was always price/performance up until maybe 2-3 years ago, when it changed to performance/watt. But that's more or less marketing BS, because the old mantra of price/performance included TCO (Total Cost of Ownership), which includes your electrical bill over however long you planned on owning it.

Certainly AMD played a major part in this paradigm shift. And let's not even discuss the socio-political implications, environmental concerns, or strain on an already overburdened powergrid by such a wanton use of electricity and natural resources. I'm just glad things have started to change for the better, which benefits us all.
 
That may be so, but most datacenters are just like mine. I wouldn't consider my back office any different than most. You go into any place where data needs to be kept safe and there will be a datacenter of similar size to the one that I run. Several dozen servers, in a small room, with several air conditioners...

I would say that setups like mine *far* outnumber the systems in large corporate datacenters. As such, performance per watt is more important for more people.

Clovertown > K10 in performance per watt, genius. It loses only in absolute numbers, but those are meaningless.
 
Clovertown > K10 in performance per watt, genius. It loses only in absolute numbers, but those are meaningless.

Yeah, it's only the twisted and corrupted numbers that mean anything. Absolutes are worthless, eh? Dude, seriously, you need to reread what you just wrote. While you're at it, re-evaluate your thought process.
 