Barcelona and R600 are to be a double release.

Software platform: Windows, Linux, OS X

Business platform: Nvidia MB, chipset, vidcard, AMD or Intel CPU (probably leaning more toward Intel now), and they would prefer not to piss off Bill, so they recommend a Windows OS.

Hardware platform: Amd or Intel

You can break it down even further: Socket 939 or LGA 775, or even nForce4 or VIA for older chipsets.

But I never hear of people saying they are on a Front Side Bus platform or a Hypertransport platform.

For me, an AMD platform isn't just the DFI motherboard... it needs an AMD CPU to be an AMD platform. I'm not saying my view is right or wrong... it's my definition.

I think where things got lost in this whole mess is when "architecture" and "platform" somehow got thrown together. AMD's architecture is very different from Intel's... no one can argue that. But when you say "Intel platform" you have to include an Intel CPU, the same as saying "AMD platform" means you have to include an AMD CPU. Otherwise you might as well say "desktop computer platform" and throw everything together (which is what I think happened here) and be left arguing about which sub-components work better than others.

So......what else is new?

I can agree with you there... Maybe we should all start using the terminology... "the platform that xxx is using"
 
BigDH01
The hard drive is part of the platform, but not a necessary part. You can boot and run software from any I/O device. If you load and run software from a particular hard drive, then it is part of the platform and affects system stability and performance.

You seem to be so stuck on this idea that only controllers can be part of the platform. I have news for you, the CPU is the largest controller of all.


And soon you and every person who has a current INTEL motherboard and wants to get that brand spanking new CPU with IMC will have to go out and purchase a new one. Doesn't that just suck, BigDH?
 
I predicted this about two months ago. I guarantee you that they will spin this into something like a Live 2 platform now.
 
Actually, definitions can be correct and incorrect. We live in a society with certain norms, and many of these norms are well documented in the industrial and scientific community. If you start calling CPUs airplanes, no one will know what you are talking about, as airplane is already defined to mean something else. You can't just go through a dictionary and start randomly reassigning words (actually, you can, but people will consider you crazy). In this case, we have an industry-accepted definition that a hardware platform includes the CPU. People on this board can choose to define a hardware platform however they'd like, but it is wrong when considering the industry standard. The definition is set by the very people who build and create platforms. If they say that a platform includes the CPU, then the accepted industry definition of a platform includes the CPU. You can say otherwise, but people will either consider you foolish or wrong. Can you imagine me telling Levi that their jeans aren't really jeans but instead are oranges? I would probably get admitted.

Again, I disagree. It seems to me that you are making the claim that it is incorrect to use the variable 'x' or the symbol 'lambda' for more than one purpose. Historical precedent says otherwise. Lambda is typically used for Planck's constant, wavelength, and probably a whole host of other common uses I've never heard of. Words are not static, sorry. As long as a word, variable, or symbol is adequately defined for use within a certain context, there is no issue with other or previous definitions for the word.
 
BigDH01



And soon you and every person who has a current INTEL motherboard and wants to get that brand spanking new CPU with IMC will have to go out and purchase a new one. Doesn't that just suck BigDH.

I don't know how this is relevant. If you want to use DDR3 and HT3 in an AMD platform do you not also have to buy a new motherboard? I don't understand the point you were trying to make.
 
Again, I disagree. It seems to me that you are making the claim that it is incorrect to use the variable 'x' or the symbol 'lambda' for more than one purpose. Historical precedent says otherwise. Lambda is typically used for Planck's constant, wavelength, and probably a whole host of other common uses I've never heard of. Words are not static, sorry. As long as a word, variable, or symbol is adequately defined for use within a certain context, there is no issue with other or previous definitions for the word.

No, I'm saying that as long as you put the variable in context, it does have an accepted definition. If we are putting lambda in the context of quantum physics or mechanics, then lambda is defined and static (constant). In this case, we can say the word platform is the variable; it has many different definitions. When we put the word in context, hardware platform, then it is defined. It is defined by the people who design and create hardware platforms and helped coin the term. However, what Duby is doing is saying that the term hardware platform, even when used in proper context, means a set of components that doesn't include the CPU. Going by the current definition of the term hardware platform, he is incorrect. This is analogous to him saying that San Antonio isn't a part of Texas because he's redefining borders that are internationally accepted. Granted, San Antonio wasn't always a part of Texas and it might not always be a part, but it is not up to Duby to decide when those borders change.

I don't know how to make this any more clear. If I were to write a scientific paper with regards to quantum physics, I would clearly define in the introduction what lambda represents, correct? I cannot then change the definition of lambda without notice halfway through the paper. AMD and Intel design and create platforms. They have said that their platforms include the CPU. Until they change their definition, I'll simply have to take their word on it.

And you're right, I do prod Duby. I prod him because he has stated things in this thread that are clearly false (AMD has better prefetch logic, AMD's chipsets have more functional PCIe, AMD's ethernet controller is faster with less latency, etc.). If I didn't prod him, I'm assuming these ridiculous statements would go uncontested. I contested them by posting evidence to the contrary to prove him wrong. His rebuttal was that I should Google it, as apparently he couldn't formulate his own argument. Of course, you then quoted that and made the argument that you and he should feel pity for me for apparently not agreeing with him and having evidence for my stance. So I may be prodding him for the incorrect statements, but I don't know how that is worse than you defending him for them.

BTW, I thought Planck's constant was represented by h and the cosmological constant was lambda. It's been a while since I had particle physics, but my gf is taking quantum mechanics next semester so I should get re-acquainted.
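For reference, the standard physics conventions (textbook notation, not anything established in this thread) actually support both points at once:

```latex
% Planck's constant is h, as in the Planck relation:
E = h\nu
% Lowercase lambda is wavelength, e.g. the de Broglie relation:
\lambda = \frac{h}{p}
% Capital Lambda is the cosmological constant in Einstein's
% field equations:
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu}
    = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

Same symbol family, three different meanings, each unambiguous once the context is fixed, which is exactly the "defined in context" argument above.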
 
I'm not trying to redefine anything. I'm simply stating what a hardware platform is. You believe a hardware platform is the OS, I believe a hardware platform is the hardware that makes the other components work...

If you're willing to admit a difference of belief, then fine, we'll drop it right here, but instead you insist on personal attacks and name calling. I've offered to drop this several times, yet here you are, still attacking my integrity... Shame, really...

Yes, you are trying to redefine Intel's and AMD's definition of a hardware platform by excluding the CPU. This cannot be too difficult for you to understand.

I'm willing to admit that there is a difference of belief. My belief is supported by facts and evidence (an industry standard) and yours is simply fabricated. I accept that.
 
I can agree with you there... Maybe we should all start using the terminology... "the platform that xxx is using"

So you're fine with saying that an AMD platform includes the CPU? Ok, you've said previously that AMD has a superior platform. I said that AMD's current platform is inferior to Intel's because the CPU is holding the system back. Now that you agree that an AMD CPU is part of an AMD platform, can you admit that on the desktop Intel has a superior platform? After all, Intel has a far superior CPU at this point.
 
Maybe I should have quoted specifically what I was agreeing with...

and throw everything together (which is what I think happened here) and be left arguing about what sub-components work better than others.


This is what I agree with... If you include the CPU in the platform, then you have to include everything in the platform, and that simply doesn't work.
 
Maybe I should have quoted specifically what I was agreeing with...




This is what I agree with... If you include the CPU in the platform, then you have to include everything in the platform, and that simply doesn't work.

Slippery slope fallacy..
http://www.nizkor.org/features/fallacies/slippery-slope.html

We don't have to worry about such a fallacy because hardware platform has clearly been defined in this context by the people who have the power to do so. You can personally believe whatever you'd like but you certainly can't use your definition in a public debate. The powers that define the word computing or hardware platform have already decided that the CPU is part of it. I know you don't like it, but like or dislike does not dictate fact.

You know, if I tried this in any of my schooling, that is redefining a word that already had a meaning in academia, I would have been laughed at by my peers and professors. The industry has an accepted scope of platform and the CPU falls within that scope. If you don't choose to accept the standard then I cannot help you. However, you shouldn't state your opinion as fact nor does it have any bearing or merit in a true discussion of the advantages or disadvantages of competing platforms.
 
Slippery slope fallacy..
http://www.nizkor.org/features/fallacies/slippery-slope.html

We don't have to worry about such a fallacy because hardware platform has clearly been defined in this context by the people who have the power to do so. You can personally believe whatever you'd like but you certainly can't use your definition in a public debate. The powers that define the word computing or hardware platform have already decided that the CPU is part of it. I know you don't like it, but like or dislike does not dictate fact.

You know, if I tried this in any of my schooling, that is redefining a word that already had a meaning in academia, I would have been laughed at by my peers and professors. The industry has an accepted scope of platform and the CPU falls within that scope. If you don't choose to accept the standard then I cannot help you. However, you shouldn't state your opinion as fact nor does it have any bearing or merit in a true discussion of the advantages or disadvantages of competing platforms.

And then there is the group mentality fallacy... I'm no psychologist, but now it seems you're pulling nonsense out of the air because you believe it will be supported by the group...
 
And then there is the group mentality fallacy... I'm no psychologist, but now it seems you're pulling nonsense out of the air because you believe it will be supported by the group...

I don't need the support of the group. My position is aligned with that of the very people who coin the term and create the products. Is this too difficult for you to understand?

You're obviously not a psychologist. Argumentum ad populum is the logical fallacy of which you speak. It occurs when people hold a proposition to be true simply because a majority of populace believes it. I don't hold my definition to be true because people believe it, I hold it to be true because people who create computing platforms have determined that the CPU falls within that scope.

That is unless you are redefining the appeal to belief fallacy as well.

I don't need anyone's approval and support to know that you are wrong. I already know it. I simply post to refute you so that others reading this thread will know not to believe everything that you write. I would also encourage them to discover their own facts. You only bother me in as much as you try to pass your statements as fact when they are clearly not. If you had not made ridiculous claims then I would have not posted. My post count is not that high and I will usually only post if I see misinformation, such as yours.
 
I'll start this by mentioning that I'm an AMD fan. I don't bias or propagandize or slant my wording; I always try to look at things objectively. However, if given a choice between support for companies, I would rather support AMD. Just thought I'd get that out of the way before you read the rest of this :D

I agree to an extent with BigDH01 that the CPU is included as part of the platform, mainly because the technology offered differs per CPU, even among the A64 platforms. 939/754/AM2 all have extremely similar architectures, but differ most significantly with the inclusion of dual channel memory access on 939 and DDR2 on AM2, thus differentiating them. The platform changes with the CPU.

No one can argue against the fact that at similar clockspeeds (or differing, to a point), a C2D will outrun an A64. The HT system may be more efficient, and it may be a better idea from a technology standpoint, but that doesn't erase the fact that in the desktop realm so far the FSB isn't choking the C2D enough to matter. Similar idea with the memory access latencies; an integrated memory controller may be superior to the northbridge-contained memory controller, but that doesn't erase the fact that it doesn't affect the C2D enough to matter (massive cache helps this along).

I just hate it when people make things up based on whatever bias they have.

Anyway, let's wait and see what happens with the Barcelona #s and R600 #s. I'm surprised the Inq hasn't fabricated some numbers yet!
 
I'll start this by mentioning that I'm an AMD fan. I don't bias or propagandize or slant my wording; I always try to look at things objectively. However, if given a choice between support for companies, I would rather support AMD. Just thought I'd get that out of the way before you read the rest of this :D

I agree to an extent with BigDH01 that the CPU is included as part of the platform, mainly because the technology offered differs per CPU, even among the A64 platforms. 939/754/AM2 all have extremely similar architectures, but differ most significantly with the inclusion of dual channel memory access on 939 and DDR2 on AM2, thus differentiating them. The platform changes with the CPU.

No one can argue against the fact that at similar clockspeeds (or differing, to a point), a C2D will outrun an A64. The HT system may be more efficient, and it may be a better idea from a technology standpoint, but that doesn't erase the fact that in the desktop realm so far the FSB isn't choking the C2D enough to matter. Similar idea with the memory access latencies; an integrated memory controller may be superior to the northbridge-contained memory controller, but that doesn't erase the fact that it doesn't affect the C2D enough to matter (massive cache helps this along).

I just hate it when people make things up based on whatever bias they have.

Anyway, let's wait and see what happens with the Barcelona #s and R600 #s. I'm surprised the Inq hasn't fabricated some numbers yet!

I would guess (pure speculation) that Barcelona is faster than an equally clocked Conroe by 5-10% on average in most desktop apps. It will further AMD's lead in server type applications and give them the lead in the desktop. Hopefully, and I know most people don't like to hear this, it will drive up the ASPs on AMD's chips and bring them back to profitability. It's all for naught if AMD can't make money.

As far as the R600, I have no idea. I don't know if AMD delayed the launch because they didn't have the number of GPUs to have anything more than a paper launch or because their GPU wasn't competitive with the G80. This one could really end up going either way.

Your thoughts?
 
Guys, if this is true, AMD is going to make history: a new CPU lineup, a new top-to-bottom GPU lineup, not to mention a motherboard and chipset lineup. AMD is truly going for a quattro approach.
 
I would guess (pure speculation) that Barcelona is faster than an equally clocked Conroe by 5-10% on average in most desktop apps. It will further AMD's lead in server type applications and give them the lead in the desktop. Hopefully, and I know most people don't like to hear this, it will drive up the ASPs on AMD's chips and bring them back to profitability. It's all for naught if AMD can't make money.

As far as the R600, I have no idea. I don't know if AMD delayed the launch because they didn't have the number of GPUs to have anything more than a paper launch or because their GPU wasn't competitive with the G80. This one could really end up going either way.

Your thoughts?

I agree about the performance speculation in everyday apps, though I think it will vary more significantly depending on the application (be it FPU-intensive or whatever else). Hopefully with the revised floating point unit we'll see an increase in gaming performance/video editing, as these are areas very close to the enthusiast's heart.

You're right also about the price, but to be honest AMD knows that they'll have to keep relatively in line with Intel's price based on performance, so I don't think we'll see anything wildly out of whack.

As for R600...god knows at this point. I remember a lot of speculation went around during the first delay that it was due to AMD's management who told ATi no more paper launches, but I don't think it was ever confirmed. This second delay makes me wonder about yields as well, but what I don't understand is why they haven't shown off any benchmarks to the public as of yet. Obviously the hardware must be final at this point, and clearly they have some working prototypes, so why not show off the benches?

From what I've read about the hardware in R600, I don't see how it could perform any worse than the G80, so I'm thinking this may still be down to making sure that the drivers are 100% golden before the card is available. G80 has become a bit of a Vista joke at this point, as has nVidia's driver support overall.

I'm also interested to see how AMD's chipsets work out compared to nVidia's chipsets. Maybe with the in-house knowledge they'll have a chipset performance advantage?
 
As for R600...god knows at this point. I remember a lot of speculation went around during the first delay that it was due to AMD's management who told ATi no more paper launches, but I don't think it was ever confirmed. This second delay makes me wonder about yields as well, but what I don't understand is why they haven't shown off any benchmarks to the public as of yet. Obviously the hardware must be final at this point, and clearly they have some working prototypes, so why not show off the benches?

Is ATI still using TSMC for its manufacturing? I know they're huge, but they have a workload that would make a Mickey D's clerk shit himself. Not to mention they're churning out a ton of GPUs for the 360.

And if the R600 is already taped out, even a minor change means a huge delay (which they wouldn't make unless it was a critical error).

I dunno. I'm still sitting on the fence waiting to see how things fare with the new toys. And with the predicted release dates, I don't know that I can wait for a screaming fast new PC at a price that won't cost me my last nut.
 
Is ATI still using TSMC for its manufacturing? I know they're huge, but they have a workload that would make a Mickey D's clerk shit himself. Not to mention they're churning out a ton of GPUs for the 360.
Yes, I think you are correct that ATI is using TSMC. I doubt that will change any time soon. AMD has talked about doing some ATI production in their own fabs in the "distant" future... which may happen. I suppose the big problem would be redesigning the GPU for production on SOI instead of bulk Si. My guess is that the first ATI product to be made in AMD's fabs would be the first Fusion CPU+GPU combo.

I dunno. I'm still sitting on the fence waiting to see how things fare with the new toys. And with the predicted release dates, I don't know that I can wait for a screaming fast new PC at a price that won't cost me my last nut.
Good thing I have three ;)
 
As far as the R600, I have no idea. I don't know if AMD delayed the launch because they didn't have the number of GPUs to have anything more than a paper launch or because their GPU wasn't competitive with the G80. This one could really end up going either way.

Your thoughts?

I'm under the impression that the delay was really for marketing BS reasons. Though, they could just be masking another error/bug with that explanation. However, I don't remember where (maybe someone else does), but I remember reading about the performance of initial R600 cards. The sites I was reading were written as if they had a card and put it through some simple paces, though no pictures and no graphs were given. Still, the reviewers claimed that R600 would easily beat nVidia's latest by a good margin. How true this will be after giving nVidia more time (because of the delay) remains to be seen. Still, I'm hopeful.
 
I don't think anyone was disagreeing that C2D is faster than A64 on the desktop. I think the reference to the faster architecture was in servers with more than 4 CPU cores.

I'll start this by mentioning that I'm an AMD fan. I don't bias or propagandize or slant my wording; I always try to look at things objectively. However, if given a choice between support for companies, I would rather support AMD. Just thought I'd get that out of the way before you read the rest of this :D

I agree to an extent with BigDH01 that the CPU is included as part of the platform, mainly because the technology offered differs per CPU, even among the A64 platforms. 939/754/AM2 all have extremely similar architectures, but differ most significantly with the inclusion of dual channel memory access on 939 and DDR2 on AM2, thus differentiating them. The platform changes with the CPU.

No one can argue against the fact that at similar clockspeeds (or differing, to a point), a C2D will outrun an A64. The HT system may be more efficient, and it may be a better idea from a technology standpoint, but that doesn't erase the fact that in the desktop realm so far the FSB isn't choking the C2D enough to matter. Similar idea with the memory access latencies; an integrated memory controller may be superior to the northbridge-contained memory controller, but that doesn't erase the fact that it doesn't affect the C2D enough to matter (massive cache helps this along).

I just hate it when people make things up based on whatever bias they have.

Anyway, let's wait and see what happens with the Barcelona #s and R600 #s. I'm surprised the Inq hasn't fabricated some numbers yet!
 
I'm under the impression that the delay was really for marketing BS reasons. Though, they could just be masking another error/bug with that explanation. However, I don't remember where (maybe someone else does), but I remember reading about the performance of initial R600 cards. The sites I was reading were written as if they had a card and put it through some simple paces, though no pictures and no graphs were given. Still, the reviewers claimed that R600 would easily beat nVidia's latest by a good margin. How true this will be after giving nVidia more time (because of the delay) remains to be seen. Still, I'm hopeful.

I was hoping that we'd see leaked benchmarks by now, à la Intel letting reviewers release early benchmark information about Conroe. IMO, they should at least give us a taste in order to persuade people who are sitting on the fence to wait for the R600. It doesn't really matter, though; we enthusiasts are not exactly a huge part of the market. It'll be interesting to see if nVidia releases low to midrange DX10 parts before AMD can get them out the door.
 
I was hoping that we'd see leaked benchmarks by now, à la Intel letting reviewers release early benchmark information about Conroe. IMO, they should at least give us a taste in order to persuade people who are sitting on the fence to wait for the R600. It doesn't really matter, though; we enthusiasts are not exactly a huge part of the market.
Well, that's another question. Do you think that giving some leaked benchmarks before the actual product release hurts the company or helps them overall? While getting some hype and support for new products is a plus, it pulls some attention away from the current/older product line.

While I agree it would be great to see some numbers :) , I'm not sure Intel made the right decision with the Conroe hype... and I'm not sure if it would help or hurt AMD at this point in time.

It'll be interesting to see if nVidia releases low to midrange DX10 parts before AMD can get them out the door.
Unfortunately (as I have a wee bit of AMD stock), nVidia probably will. There's still a whole quarter+ to go, yes? It would be hard to imagine that they don't put out some new cards before R600 launches. What models of the G80 are out now, anyways?
 
I was hoping that we'd see leaked benchmarks by now, à la Intel letting reviewers release early benchmark information about Conroe. IMO, they should at least give us a taste in order to persuade people who are sitting on the fence to wait for the R600. It doesn't really matter, though; we enthusiasts are not exactly a huge part of the market. It'll be interesting to see if nVidia releases low to midrange DX10 parts before AMD can get them out the door.

Their point was made just prior to 4x4. If AMD had something to show that kicked ass, it would have been leaked. If they didn't, it wasn't leaked. Examples: the first K8, 939, X2, and FX (unlocked) were all leaked. AM2, 4x4, and Brisbane weren't ;) The very first K8 was shown running at only 800MHz.

At least two folks at XtremeSystems who are said to have hard info on Barcelona also said 10%. After AMD was claiming 40% faster, 10% makes them liars. I don't post here much anymore after being warned by the mods. I'd hate to be banned from the whole forum for arguing with Duby. He hasn't figured out how the FSB even works yet.

Also funny was that many of the same AMD cheerleaders took AMD's word for it while saying they didn't believe the folks testing Conroe at Xtreme.org, webmasters, etc. They even accused Intel of paying off Tech Report, LOL!

Sit on the fence? Why? By the time Barkie launches my system will be almost a year old. I'd not wait that long on an Intel or AMD based system.
 
The sites I was reading were written as if they had a card and put it through some simple paces, though no pictures and no graphs were given. Still, the reviewers claimed that R600 would easily beat nVidia's latest by a good margin.

Referring to that level505 (?) "preview" that was soundly outed all over as 100% fake, maybe? They supposedly did testing where it blew the G80 away, yet never provided any proof, let alone pics of the purported card(s?). Some surmised it was simply an attempt to gain ad dollars - but not from me - ABP & Firefox work great for iffy "sites" like those.

[edit] And I pretty much agree with Donnie (and many others) on AMD's silence - I really think there would have been far more informative leaks up to this point if there were such kick-ass products coming... if nothing more than to steal some sales & thunder from Conroe and G8x.

Believe me, I WANT AMD to come out with guns blazing - ALL my BYO setups (in 6+ years of doing so) have been AMD CPU based... split 50/50 between ATI/NV for graphics, though. With the little pre-release info given (in the case of 4x4 & R600, the power required), it doesn't look all that rosy, but I have to reserve final judgement (2nd time today :p) until actual release & reviews (as far as final performance/perf-to-price ratio goes, particularly). No matter how they end up, competition is a good thing for us consumers... Now if only the prices would stop creeping up and up, we'd be golden.
 
Referring to that level505 (?) "preview" that was soundly outed all over as 100% fake, maybe? They supposedly did testing where it blew the G80 away, yet never provided any proof, let alone pics of the purported card(s?). Some surmised it was simply an attempt to gain ad dollars - but not from me - ABP & Firefox work great for iffy "sites" like those.

[edit] And I pretty much agree with Donnie (and many others) on AMD's silence - I really think there would have been far more informative leaks up to this point if there were such kick-ass products coming... if nothing more than to steal some sales & thunder from Conroe and G8x.

Believe me, I WANT AMD to come out with guns blazing - ALL my BYO setups (in 6+ years of doing so) have been AMD CPU based... split 50/50 between ATI/NV for graphics, though. With the little pre-release info given (in the case of 4x4 & R600, the power required), it doesn't look all that rosy, but I have to reserve final judgement (2nd time today :p) until actual release & reviews (as far as final performance/perf-to-price ratio goes, particularly). No matter how they end up, competition is a good thing for us consumers... Now if only the prices would stop creeping up and up, we'd be golden.

Almost all consumer-first folks I know want Barkie to kick ass. I do too. Only the most hardcore Intel fans, employees, and/or large stockholders want differently. Yes, I agree with you too :)
 
I'd hate to be banned from the whole forum for arguing with Duby. He hasn't figured out how the FSB even works yet.

Remember what I said about inviting me Donnie ;)

Ask yourself: where is the memory controller on Intel's platform? If the CPU needs to access it... where does the data travel to get there?

You can continue to deny the truth, but it remains none the less....
 
While getting some hype and support for new products is a plus, it pulls some attention away from the current/older product line.

LOL! What attention? ;)

Seriously, there is going to have to be a serious performance delta at a very good price to keep me from just selling my 6400 and going drop-in quad. I don't see myself getting enough for my 8800GTX by the time R600 is almost available to bother with switching at this time.

Since AMD can't be bothered to at least give us a taste, I'm finding it hard to get interested in their wares.
 
Well, that's another question. Do you think that giving some leaked benchmarks before the actual product release hurts the company or helps them overall? While getting some hype and support for new products is a plus, it pulls some attention away from the current/older product line.

While I agree it would be great to see some numbers :) , I'm not sure Intel made the right decision with the Conroe hype... and I'm not sure if it would help or hurt AMD at this point in time.


Unfortunately (as I have a wee bit of AMD stock), nVidia probably will. There's still a whole quarter+ to go, yes? It would be hard to imagine that they don't put out some new cards before R600 launches. What models of the G80 are out now, anyways?

I think it can be a good idea when your current products don't match up well against the competition. For example, I think it was wise of Intel to release early benchmarks of Conroe because the P-Ds were clearly inferior to the X2s. Releasing early benchmarks basically made people seriously consider a third option when they were upgrading. They could 1) get the slower and more power-hungry P-D, 2) get the faster and less power-hungry X2, or 3) wait a few months and get the even faster C2D. I think you have to give people some idea of performance to get them to seriously consider option 3. People just aren't all that patient. When I chose between the P-D and X2, I knew the C2D was about six months away but didn't know how it would perform, so I got the X2. If I had held off another month or two and seen the early benchmarks, I might have just waited. Without option 3, option 2 was the obvious choice.

In the case of the R600, I think AMD needs to release something. They've already delayed the launch twice and they basically have no competitor to the G80. So an enthusiast basically has two options right now: 1) buy the G80, the undisputed king of performance, or 2) wait for the R600, which might or might not perform as well. Of course, this problem has been amplified by the delays. There are people who have chosen option two a couple of times now only to be disappointed by another delay. IMO, it would be wise to at least assure these people that the delay is worth the wait.

The only options based on the G80 are the 8800GTX and the 8800GTS. The GTS comes in two memory flavors, 640MB and 320MB. I own the 320 but am looking to do a step-up before my 90 days are up. I was hoping to see the 8900 in that timeframe, which would've been released to compete with the R600. I'll probably have to step up to an 8800GTX now.
 
Remember what I said about inviting me Donnie ;)

Ask yourself: where is the memory controller on Intel's platform? If the CPU needs to access it, where does the data travel to get there?

You can continue to deny the truth, but it remains nonetheless....

The memory controller is on the North Bridge on the Intel chipset, whoopie doo! It matters less because Intel's processor is far more advanced, and even noobs know that LOL! The only folks who don't quite know that are in denial. C2D doesn't depend on an integrated memory controller because it prefetches better, has Smart Cache and even smarter memory access, and on those rare occasions when it hits the FSB, it does so in a parallel fashion, NOT serial like HyperTransport. Just because you love AMD doesn't mean its older technology is better or superior.

What you call the truth is only part of your misinformed opinions. BigDH01 is right; I have given you link after link as well and you still don't get it. Please stick to calling folks names; that's about all you're good at.

Ask yourself some questions:
1. Which suffers more if higher-latency RAM is used?
2. Which suffers more from a slower system bus (not just the FSB, BTW)?
3. Which one takes a bigger hit when moving from DDR2-800 to DDR2-667?
4. South Bridge to North Bridge DMI bandwidth = _____________.
5. Define DMI in this case?
6. If Intel's North Bridge were a wired network device, would it be a hub or a switch?

AMD bought ATI so they could become a platform provider instead of a parts supplier. That's GOOD/POSITIVE for AMD if they can pull it off. :p
A.
1. AMD, since its older tech doesn't feature Smart Cache or smart memory access.
2. AMD, since 1600 MT/s sucks compared to 2000 MT/s LOL!
3. AMD, same answer as #1.
4. 2GB/s.
5. Direct Media Interface, which bypasses the processor's FSB.
6. Switch, or else DMI wouldn't work.

Now you're saying what about FSB? The real disadvantage Intel has is fewer PCI-E Lanes on their affordable motherboards.
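For what it's worth, the raw numbers behind questions 2 and 4 fall out of the published specs. Here's a quick back-of-envelope sketch, assuming a 64-bit 1066 MT/s FSB, 16-bit 2000 MT/s HyperTransport links (one per direction), and a 4-lane PCIe-style DMI at ~250 MB/s per lane per direction; all figures are theoretical peaks, not measured throughput:

```python
# Theoretical peak bandwidths for the interconnects discussed above.
# All figures are textbook peaks under the stated assumptions, not benchmarks.

def bandwidth_gbs(transfers_per_sec, width_bytes):
    """Peak bandwidth in GB/s for a link moving width_bytes per transfer."""
    return transfers_per_sec * width_bytes / 1e9

# Intel FSB: 64-bit (8-byte) wide, quad-pumped 266 MHz -> 1066 MT/s, shared both ways.
fsb_1066 = bandwidth_gbs(1066e6, 8)            # ~8.5 GB/s total, shared

# HyperTransport: 16-bit (2-byte) links at 2000 MT/s, one dedicated link per direction.
ht_2000_per_dir = bandwidth_gbs(2000e6, 2)     # ~4.0 GB/s each way
ht_2000_total = 2 * ht_2000_per_dir            # ~8.0 GB/s aggregate

# DMI (north bridge <-> south bridge): 4 PCIe 1.x lanes, ~0.25 GB/s per lane per direction.
dmi_total = 2 * 4 * 0.25                       # ~2.0 GB/s aggregate

print(f"FSB 1066 (shared):   {fsb_1066:.1f} GB/s")
print(f"HT 2000 (aggregate): {ht_2000_total:.1f} GB/s")
print(f"DMI (aggregate):     {dmi_total:.1f} GB/s")
```

Note the FSB number is a single shared total, while the HT number is the sum of two dedicated directions, which is part of why a straight comparison of the two is slippery.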
 
BigDH01 said:
I think it can be a good idea when your current products don't match up well against the competition. For example, I think it was wise of Intel to release early benchmarks of the Conroe because the P-Ds were clearly inferior to the X2s. Releasing early benchmarks basically made people seriously consider a third option when they were upgrading. They could 1) get the slower and more power-hungry P-D, 2) get the faster and less power-hungry X2, or 3) wait a few months and get the even faster C2D. I think you have to give people some idea of performance to get them to seriously consider option 3. People just aren't all that patient. When I chose between the P-D and X2, I knew the C2D was about six months away but didn't know how it would perform, so I got the X2. If I had waited another month or two and seen the early benchmarks, I might have just held out. Without option 3, option 2 was the obvious choice.

In the case of the R600, I think AMD needs to release something. They've already delayed the launch twice and they basically have no competitor to the G80. So an enthusiast basically has two options right now: 1) buy the G80, the undisputed king of performance, or 2) wait for the R600, which might or might not perform as well. Of course, this problem has been amplified by the delays. There are people who have chosen option two a couple of times now only to be disappointed by another delay. IMO, it would be wise to at least assure these people that the delay is worth the wait.

The only options based on the G80 are the 8800GTX and the 8800GTS. The GTS comes in two memory flavors, 640MB and 320MB. I own the 320 but am looking to do a step-up before my 90 days are up. I was hoping to see the 8900 in that timeframe, which would've been released to compete with the R600. I'll probably have to step up to an 8800GTX now.

Dead on!
 
Donnie, don't judge me. You're not better than me or anybody else, so get off your high horse.
 
<-- Has the feeling of pending thread lock. Unfortunately, this thread has degraded to debating the definition of "platform", and now to personal attacks. Let's try to get this back on topic please.
 

Thread doesn't need to be locked because I don't have to add anything else:)
 
Donnie27 said:
It's not about who or what I like, but about this thing not being the best option. I can't wait for K8L and the improved dual-core versions that will follow. When all is said and done, it's the dual-core versions that have my eye. If they have the price-to-performance I've gotten used to with AMD, I'd be on it like a fat man on a ham. After a shaky start my 3500+ has served me well, and I'm looking at a Socket 939 Opteron 165 or 170 right now to replace it. I can be tagged whatever you guys feel like tagging me, but my wallet knows better. I want to see and buy what's good for me, not AMD or Intel.

If you think I'm being unfair, hey, go ahead. I can't wait for Barcelona, just as any real geek looks forward to new or tweaked products with anticipation, eagerness, and just plain old curiosity! Geeks are passionate about hardware; only fans want their team to win at all costs, even to the detriment of their own wallet, the X2 for example. ;) If Intel continues to trash AMD, their 45nm new cores, not just the smaller C2Ds, will carry X2-like premium prices. Same thing goes for AMD if Barkie kicks mucho ass for a long period of time.

Here's something to save: if AMD falters on Barcelona, look for some multi-national group to buy them out. These folks will have ties to both the Germans and the EU. AMD is now under the "Too Big to Fail" umbrella. Oh, in or about the next 18 months. :)
 
1. AMD, since its older tech doesn't feature Smart Cache or smart memory access.
(emphasis mine) Oh, gosh. I forgot. AMD doesn't have smart cache or memory access, AMD must be using the "stupid" version. :rolleyes:

Anyways, sorry for the prod Donnie. I know why you claim the FSB is superior in some cases. IIRC, you like to point out the situation where some device needs to do lots of DMA. In an Intel system, this DMA traffic would not need to touch the bus (at first), while on an AMD platform the DMA transfer would necessarily cause HT traffic. That is a valid example that shows some of the advantages of having the memory controller attached to a north bridge.

However, that case is not really as important or as "common" (in the sense of % of all memory accesses) to overcome the disadvantages of having the memory controller separated from the CPU by the GTL+ bus. For the vast majority of tasks (even those that use DMA initially), the CPU will eventually touch the data, and usually many, many times. Even something like GbE traffic that may start out as a DMA transfer needs to have the packets decoded/parsed by the CPU, and later the CPU transfers copies of that data all over the place in memory as user apps request it. I think you'll find the pattern holds pretty well for most tasks. Even disk access requires the FS to be interpreted, which can take more time than you would expect, especially when dealing with lots of small files as opposed to a single large one. It is simply more important to keep memory close to the CPU than to keep memory close to the I/O. I think you will also find that this idea is backed up by most in the industry, and it shows in the design decisions of many companies, including Intel and their CSI plans.

You are one of the only people left who doesn't seem ready to agree that P2PB + IMC > shared FSB, and while you are entitled to your opinion, I have to admit I think it's a little out there. Considering you are in the minority with your FSB ideas here, I don't think Dubby quite deserves all the negative attention you give him.
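The "keep memory close to the CPU" argument above can be sketched with the classic average-memory-access-time (AMAT) formula. This is a toy model, not a measurement: the hit rate and all latencies below are illustrative round numbers I've picked, and the extra interconnect term stands in for the hop a cache miss pays when the memory controller sits on the north bridge rather than on-die:

```python
# Toy AMAT model for the on-die IMC vs. north-bridge memory controller
# argument. All numbers are illustrative placeholders, not measured values.

def avg_access_ns(hit_rate, cache_hit_ns, dram_ns, interconnect_ns):
    """Classic AMAT: hit time plus miss-rate-weighted miss penalty.

    interconnect_ns models the extra hop a cache miss pays to reach a
    memory controller that sits off-die (FSB + north bridge) vs. on-die (~0).
    """
    miss_penalty = dram_ns + interconnect_ns
    return hit_rate * cache_hit_ns + (1 - hit_rate) * miss_penalty

# Assumed, illustrative numbers:
hit_rate = 0.97          # a large L2 keeps most accesses on-die
cache_hit_ns = 5
dram_ns = 50

imc = avg_access_ns(hit_rate, cache_hit_ns, dram_ns, interconnect_ns=0)
fsb = avg_access_ns(hit_rate, cache_hit_ns, dram_ns, interconnect_ns=40)

print(f"on-die IMC : {imc:.2f} ns average")
print(f"FSB + NB   : {fsb:.2f} ns average")
```

The point of the sketch is only the shape of the trade-off: the off-die penalty is paid on every miss, on every task the CPU eventually touches, whereas the DMA advantage applies only to the first pass over the data.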

BigDH01 said:
I think it can be a good idea when your current products don't match up well against the competition. For example, I think it was wise of Intel to release early benchmarks of the Conroe because the P-Ds were clearly inferior to the X2s. Releasing early benchmarks basically made people seriously consider a third option when they were upgrading. They could 1) get the slower and more power-hungry P-D, 2) get the faster and less power-hungry X2, or 3) wait a few months and get the even faster C2D. I think you have to give people some idea of performance to get them to seriously consider option 3. People just aren't all that patient. When I chose between the P-D and X2, I knew the C2D was about six months away but didn't know how it would perform, so I got the X2. If I had waited another month or two and seen the early benchmarks, I might have just held out. Without option 3, option 2 was the obvious choice.
I see what you are saying here, and I don't strongly disagree. Still, I can't stop thinking about what happened to the P4/PD line. Before the leaked Core2 benchmarks, the PD may have been behind, but it wasn't behind to the point of being worthless. I guess I'm trying to say that I would have expected all the Core2 chips to sell anyways (remember that Core2 started out as a small % of manufactured chips), and that releasing the benchmarks for Core2 as early as they did (6 months) probably devalued the PD quite a bit. Don't you remember Intel's 1Q2006 and 2Q2006 financial results? http://www.intel.com/intel/finance/investorfacts/incstate.htm For the most part, I'm only looking at late 2005 and 2006 data. I'm not going to say that the leaked benches were the only factor here, but I bet they were a significant one. You'll notice that the two quarters that looked the roughest for Intel at first glance were precisely the quarters where Intel was competing with not only AMD's products, but Intel's own future products as well.

I could see that more public demonstrations with running boxen doing some real work could help grow confidence in the company as a whole, and I can see that some vague comments about performance from AMD reps might do the same, but to come out in a quantitative way with the claim that one's current products are worthless seems a little too far for me. If you were in charge of AMD's Barcelona and R600 press, how would you handle the issue?
 
(emphasis mine) Oh, gosh. I forgot. AMD doesn't have smart cache or memory access, AMD must be using the "stupid" version. :rolleyes:

Anyways, sorry for the prod Donnie. I know why you claim the FSB is superior in some cases. IIRC, you like to point out the situation where some device needs to do lots of DMA. In an Intel system, this DMA traffic would not need to touch the bus (at first), while on an AMD platform the DMA transfer would necessarily cause HT traffic. That is a valid example that shows some of the advantages of having the memory controller attached to a north bridge.

However, that case is not really as important or as "common" (in the sense of % of all memory accesses) to overcome the disadvantages of having the memory controller separated from the CPU by the GTL+ bus. For the vast majority of tasks (even those that use DMA initially), the CPU will eventually touch the data, and usually many, many times. Even something like GbE traffic that may start out as a DMA transfer needs to have the packets decoded/parsed by the CPU, and later the CPU transfers copies of that data all over the place in memory as user apps request it. I think you'll find the pattern holds pretty well for most tasks. Even disk access requires the FS to be interpreted, which can take more time than you would expect, especially when dealing with lots of small files as opposed to a single large one. It is simply more important to keep memory close to the CPU than to keep memory close to the I/O. I think you will also find that this idea is backed up by most in the industry, and it shows in the design decisions of many companies, including Intel and their CSI plans.

You are one of the only people left who doesn't seem ready to agree that P2PB + IMC > shared FSB, and while you are entitled to your opinion, I have to admit I think it's a little out there. Considering you are in the minority with your FSB ideas here, I don't think Dubby quite deserves all the negative attention you give him.

Please quote me saying the FSB is superior, just one line? Hey, you said the "stupid" version, not me LOL! I said C2D was newer and smarter. I said the Athlon64 wasn't as advanced as C2D. It has more to do with different architectures, NOT with which platform is superior.

The prod was no biggie:) At least you get it and we can disagree without the cracks and quips. Yours at least show some class. There's more to the system bus than the FSB.

Now, for FSB vs HT: parallel vs serial. Each has advantages and disadvantages. I agree with most of what you're saying. The FSB isn't holding the C2D back =P It might not be the best system for Xeon, though. AMD needs an IMC; Intel doesn't. That seems to be the point many AMD-leaning folks don't understand.

I know an IMC wouldn't do much for C2D on the desktop, because moving from super-fast RAM to slow RAM has minimal effect on its performance, just as going from an 800MHz to a 1066MHz FSB does. Even going from 2MB to 4MB of cache doesn't change the outcome in most cases.
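That RAM-speed insensitivity falls out of the same hit-rate arithmetic: at a high cache hit rate, only a small fraction of accesses ever see main memory, so even a substantial change in memory latency gets heavily damped. A rough sketch with illustrative numbers (the 0.97 hit rate and latencies are assumptions for the example, not measurements):

```python
# Why slower RAM barely moves the needle when the cache hit rate is high.
# Hit rate and latencies below are illustrative assumptions, not measurements.

def avg_ns(hit_rate, hit_ns, mem_ns):
    """Average access time: cache hits plus miss-rate-weighted memory trips."""
    return hit_rate * hit_ns + (1 - hit_rate) * mem_ns

hit_rate, hit_ns = 0.97, 5
fast = avg_ns(hit_rate, hit_ns, mem_ns=60)   # faster-RAM case
slow = avg_ns(hit_rate, hit_ns, mem_ns=75)   # 25% slower memory

slowdown_pct = 100 * (slow - fast) / fast
print(f"25% slower RAM -> only {slowdown_pct:.1f}% slower on average")
```

With these numbers, a 25% memory-latency penalty shrinks to a single-digit-percent hit on average access time, which matches the "minimal effect" observation above.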
 