AMD K10 SuperPi Results

Nothing is repeatable... this whole argument is bunk. Every benchmark you run is going to score slightly differently... 3DMark, for instance.

The PC will always be in a different state every time you test it. It could be .00000003 degrees warmer or colder between two tests less than an hour apart... ANYTHING could change.

With this argument, NOTHING is a benchmark.

Dyno'ing my car wouldn't be a benchmark, because it'll never dyno exactly the same... perhaps within 5 horsepower. But even if it dynos 400 twice, it still didn't dyno exactly 400.xx, or whatever the case may be.

In other words, no 2 results will ever be exactly the same, whether it be hundreds or thousands of decimal places back... there is still a margin of error. Super-Pi is relatively consistent.

Josh

You still have a margin of error, and when the margin of error is 300% between runs, you know the "value" of it as a benchmark is nil... There are quite a few legitimate benches out there. Superpi just isn't one of them. If it could actually calculate pi in a realistic time frame, and produce reliable, reproducible results consistently across testbeds, then it would be OK to call it a benchmark, but in its current form it can't do that... At best it is a poorly written pi calculator and nothing more.
 
If you account for scaling from dual to quad core, it seems it will easily best most of these scores in integer workloads.
Do you also want to account for a 37% decrease in clock speed in your calculations? :p

The highest 4S SPECint rate Opteron 8212 (2GHz, dual core) score is 77.4. A Barcelona that is only as fast as K8 with perfect scaling (not realistic because each core has less memory bandwidth than each core of the 8212) will get a 154.8 score. That would take a 2.8GHz Barcelona to match the Xeon MP that launched today.
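To make the arithmetic behind that explicit, here is a minimal sketch of the scaling estimate (the 77.4 score and the 2.8GHz figure come from the paragraph above; treating SPECint_rate as scaling linearly with core count and clock is my simplifying assumption):

```python
# Back-of-the-envelope SPECint_rate scaling, assuming perfect scaling
# with core count and clock (optimistic simplifications, not measured data).
k8_4s_score = 77.4     # 4S Opteron 8212, 2.0GHz dual core (from the post)
base_clock = 2.0       # GHz

barcelona_2ghz = k8_4s_score * 2            # dual -> quad core: 154.8
match_clock = 2.8                           # GHz said to be needed to match Xeon MP
implied_xeon_score = barcelona_2ghz * match_clock / base_clock

print(barcelona_2ghz)                       # 154.8
print(round(implied_xeon_score, 1))         # ~216.7, the Xeon MP level this implies
```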

5 days to go with no server benchmarks released. I hope that AMD chose to give press samples to those who are very good at showing even more slides saying "just wait for better performance." Mr. Just Wait was tripping over his beard 6 months ago. We waited and we're underwhelmed.
 
One is not "much simpler" than the other. The 7300 chipset has very low pin count FBDIMM traces, but has to route 64 data bus pins & AGTL+ signaling to each CPU. 4-way Opterons have to communicate through 3 HT links in the glueless configuration and route 128-bit memory traces to each DIMM bank. Point-to-point is superior, but without FBDIMMs the overall layout advantage decreases.

Try to visualize where the traces would be on an actual board. In an Intel system they all meet up at a central point: the chipset. On an AMD system the traces are spread out across the board. If you don't understand why that is a much simpler design then I'm sorry, but I can't help you. I guess that is just something you will have to keep denying.

The only problem with using HT (even right now) is that it would have been slower than the serial link chosen for FBDIMMs. It will all be moot anyways next year since Intel is going with an IMC on Nehalem.

According to this link FBDIMMs have 8GB/s of bandwidth per channel. And according to this link HTT3 has 41.6GB/s of bandwidth per controller. Let's also consider latency...
This document here goes into quite a bit of detail about the latency of the FB controller. It doesn't take into consideration the latency of the memory chips used... And here is some info on the latency of HTT. Without specific benchmarks and a functional implementation, it is impossible to say what the latency of a so-called HTDIMM would be. But based on the specs for HTT3, it will be pretty dang good, I'd think.

I'm using the standard version that *everyone* compares, SuperPI mod 1.5 from here: http://www.xtremesystems.com/pi/ "2006/02/14 XtremeSystems branded Pi released! No physical code changes other than heading at top of dialog box." It's the same old code from early 2005.

Geez duby, you're entertaining.

And that is the problem. Superpi results still get published and defended even today, even though the code is unmaintained and the binary has been compromised. Even though the results are not directly comparable on the same system, let alone between systems. And what is sad is that "everyone else is doing it."
 
Try to visualize where the traces would be on an actual board. In an Intel system they all meet up at a central point: the chipset.
Please look at the 7300 chipset before you make a comment like that. It's not a regular NB/SB configuration. There isn't just one point that all 4 CPUs meet. edit: I meant to add clarification to this before the board went offline. The duties of the NB/SB and other I/O chips are split up vs a typical NB/SB configuration. And while Xeon CPUs on 2S chipsets do connect to the same point, the topology on Caneland is different. Maybe we disagree on what is a "point."

According to this link FBDIMMs have 8GB/s of bandwidth per channel. And according to this link HTT3 has 41.6GB/s of bandwidth per controller. Let's also consider latency...
8GB/s is one channel at the older 533MHz (PC2-4200) spec. There are 667MHz and 800MHz FBDIMMs. See: http://cache-www.intel.com/cd/00/00/27/84/278436_278436.pdf It is possible to have several FBDIMM channels in the same pin count as dual channel DDR2. I don't think anyone, not even AMD, makes an HT3.0 buffer. A buffer in the hand is better than vapor on the web. :p If Intel is going to a faster bus, it will probably borrow from CSI anyway, which has higher bandwidth per pin and lower latency.
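To put rough numbers on both sides of this exchange, here is a quick sketch using only the figures quoted above (scaling the 8GB/s figure linearly with DIMM clock, and the 4-channel aggregate, are my assumptions for illustration, not spec quotes):

```python
# Rough bandwidth comparison from the numbers quoted in this thread.
fbdimm_at_533 = 8.0    # GB/s per channel at 533MHz (PC2-4200), per the post
ht3_quoted = 41.6      # GB/s per HT3 controller, per the earlier post

# Assuming per-channel bandwidth scales linearly with DIMM clock:
for mhz in (533, 667, 800):
    print(f"FBDIMM channel @ {mhz}MHz: ~{fbdimm_at_533 * mhz / 533:.1f} GB/s")

# A hypothetical MCH aggregating 4 channels of 800MHz FBDIMMs:
aggregate = 4 * fbdimm_at_533 * 800 / 533
print(f"4 channels @ 800MHz: ~{aggregate:.1f} GB/s vs {ht3_quoted} GB/s per HT3 link")
```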

And that is the problem. Superpi results still get published and defended even today, even though the code is unmaintained and the binary has been compromised. Even though the results are not directly comparable on the same system, let alone between systems. And what is sad is that "everyone else is doing it."
Come off it already. SuperPI mod 1.5 uses the SuperPI 1.1e binary. A checksum was added and the timer resolution was increased. The algorithm hasn't been patched. The results are comparable on the same system and between systems. :rolleyes: You can always run the 1.1e binary and get the same score with a resolution of 1s vs 1ms. I just ran SuperPI 1.1e and got 19s. How amazing, the same compiled binary runs at the same speed, but just displays a lower resolution result. :rolleyes:
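The resolution point is easy to demonstrate: quantizing one and the same elapsed time to 1s vs 1ms changes only the displayed precision, not the measured speed (a toy sketch with a hypothetical elapsed time, not the actual SuperPI code):

```python
# One run, two display resolutions (hypothetical elapsed time).
elapsed = 18.734  # seconds, hypothetical

print(f"1.1e-style (1s resolution): {round(elapsed)}s")        # 19s
print(f"mod 1.5-style (1ms resolution): {elapsed:.3f}s")       # 18.734s
# Both numbers describe the same run at the same speed;
# only the reported precision differs.
```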
 
Careful pxc, you're treading on Duby's sacred ground. If Barcelona turns out to be such a dud that it gets owned by VIAs and Cyrixes, he'd eventually be able to accept that, but he'd argue that the platform is superior until the bitter end.
 
Careful pxc, you're treading on Duby's sacred ground. If Barcelona turns out to be such a dud that it gets owned by VIAs and Cyrixes, he'd eventually be able to accept that, but he'd argue that the platform is superior until the bitter end.

Rather than argue on the merits of the technology, you choose to use innuendo in its worst form... A shame, really...
 
Please look at the 7300 chipset before you make a comment like that. It's not a regular NB/SB configuration. There isn't just one point that all 4 CPUs meet.
You're joking, right? Not only is it intuitively obvious that all CPUs need to connect to the single MCH, but Intel's tech docs say the same thing, and it is also confirmed by the massive number of FSB-related pins listed on the chipset pinout diagrams.

Duby is quite correct. Look at the 7000 series, for example. There are over 1000 pins on that package. Start packing in four 700+ pin CPUs and a massive MCH with another 1000+, and the board layout gets complicated. That parallel AGTL+ bus needs to have all the traces be almost the same length, and it is very susceptible to capacitance issues across traces as well.

If you want to defend performance, that is one thing (and another discussion). However, to argue that 4 massive AGTL+ busses being fed off a single MCH is trivial, low TDP, or cheap for board designers is complete nonsense.

Read the docs:
http://support.intel.com/support/chipsets/sb/CS-010496.htm
 
If you want to defend performance, that is one thing (and another discussion). However, to argue that 4 massive AGTL+ busses being fed off a single MCH is trivial, low TDP, or cheap for board designers is complete nonsense.
LOL, I never made any of those 3 claims. That's really becoming a pattern of your arguments: you invent a strawman and assign it to someone who never said it. :rolleyes: What I did say is that the FBDIMM routing (and I illustrated the differences with a very simple picture) is simpler than routing 128-bit memory. I thought that was clear enough, but I guess it wasn't clear enough for *everyone*. The "massive AGTL+ busses" you talk about use less than 2x the traces of Barcelona in a glueless configuration, and the traces don't have to be run across the board between each Xeon CPU like is required for HT.

You linked to the 7300 chipset drivers. The technical documentation is not on that page. If you can actually find the MCH technical documentation, you'll see the MCH is around a 2000-ball package (vs ~900 for AMD's 8151 and ~1200 for the single-FSB P965). To put the difficulty of routing a 2000-ball package in perspective, the R600 GPU has about the same number. :rolleyes: Count 'em if you like: http://www.techpowerup.com/img/06-11-25/r6004.jpg

That's going to be a brain twister for you today: how do you route four separate "massive AGTL+ (bus)" LGA771 chips to one 2000-pin package that also contains its own power, ground, I/O and SB communication pins? Must be magic!
 
You can attack the person instead of the argument all you want. However, what you said, and I'll quote it a second time for you, was:

There isn't just one point that all 4 CPUs meet.

Untrue. All the traces meet at the MCH. You going back to edit your posts to (slightly) correct your FUD is nice, but it would be better if you didn't lie in the first place, or at least properly fixed your comment with an edit.

[. . .] less than 2x the traces of Barcelona in a glueless configuration, and the traces don't have to be run across the board between each Xeon CPU like is required for HT.
So, now you seem to be arguing that spreading 1/2 the number of traces Intel needs over more board space is worse than cramming 2x that number into a smaller space?!? Further, HT doesn't have the same trace-length matching issues AGTL+ does.

Again, you are defending insanity with more of the same. If you want to argue for something other than ease of routing between HT vs AGTL+, go for it. However, you are not going to convince anyone here that more lanes in a smaller space is cheaper, easier, or better from a MB layout standpoint.
 
You can attack the person instead of the argument all you want. However, what you said, and I'll quote it a second time for you, was:

Untrue. All the traces meet at the MCH. You going back to edit your posts to (slightly) correct your FUD is nice, but it would be better if you didn't lie in the first place, or at least properly fixed your comment with an edit.
I made that post right before the board went down. I don't consider an independent bus to meet at "one point" unless it's shared like on 2S configurations. I didn't realize the post I made last night was so unclear until I went back and re-read it. Do you forgive me?

So, now you seem to be arguing that spreading 1/2 the number of traces Intel needs over more board space is worse than cramming 2x that number into a smaller space?!? Further, HT doesn't have the same trace-length matching issues AGTL+ does.
I didn't make any judgment of HT routing in the last post, I just pointed out that it (HT) needs to be spread across the board between CPUs. What I actually said in this thread is quite the opposite, strawman factory. And in fact I did point out that point-to-point was superior several posts up (which no one disputes; even Intel is moving to CSI), and I've never argued that quad FSB was simpler or cheaper. But it does seem to have a performance advantage for typical server workloads, and it will take AMD a while to catch up to what's available today. Got it?

Again, you are defending insanity with more of the same. If you want to argue for something other than ease of routing between HT vs AGTL+, go for it. However, you are not going to convince anyone here that more lanes in a smaller space is cheaper, easier, or better from a MB layout standpoint.
At least you're consistent... at making claims that I never wrote. I didn't say AGTL+ was easier. Bye. PLONK!
 
You still have a margin of error, and when the margin of error is 300% between runs, you know the "value" of it as a benchmark is nil... There are quite a few legitimate benches out there. Superpi just isn't one of them. If it could actually calculate pi in a realistic time frame, and produce reliable, reproducible results consistently across testbeds, then it would be OK to call it a benchmark, but in its current form it can't do that... At best it is a poorly written pi calculator and nothing more.

Does Superpi have a 300% margin of error? I'd like to see that. Superpi is a valid benchmark but many people (including myself) question the relevancy of it. That applies to 3dmark and nearly every other synthetic benchmark as well.
 
You people make me laugh... especially the claim that SuperPi can show radically different times...

The only time that is gonna happen is if you have a bad stick of RAM, a processor that has bad cache, a processor that is switching speeds on the fly, or stuff running in the background.

Although I don't usually run SuperPI... I have NEVER once seen it give radically different results on the same system running at the same speed.
 
You people make me laugh... especially the claim that SuperPi can show radically different times...

The only time that is gonna happen is if you have a bad stick of RAM, a processor that has bad cache, a processor that is switching speeds on the fly, or stuff running in the background.

Although I don't usually run SuperPI... I have NEVER once seen it give radically different results on the same system running at the same speed.

Or, or, or..... How many exceptions do you make before it becomes a problem? You've already given three exceptions, how many more does it take before relevancy is void?
 
Does Superpi have a 300% margin of error? I'd like to see that. Superpi is a valid benchmark but many people (including myself) question the relevancy of it. That applies to 3dmark and nearly every other synthetic benchmark as well.

In most cases, yes. All you have to do is go look at Tom's reviews, then look at Anand's. Then compare that with Legit's, and others, specifically taking into consideration the superpi results. They are literally all over the board. In some cases more than 300% error margins.
 
In most cases, yes. All you have to do is go look at Tom's reviews, then look at Anand's. Then compare that with Legit's, and others, specifically taking into consideration the superpi results. They are literally all over the board. In some cases more than 300% error margins.

You've made the claim, show me some evidence that sites using the same hardware and software show 300% variance in this benchmark.
 
You've made the claim, show me some evidence that sites using the same hardware and software show 300% variance in this benchmark.
He just makes up stuff. Sadly, he seems to believe his own lies.
 
I was dumb enough to actually go and look. I figured the Conroe launch would be a pretty big review for all three of the sites he mentioned, and they'd throw every benchmark in their arsenal at it. Guess what? Only Legit ran SuperPi as part of the review, Tom's and Anand didn't.

So please, Duby, show us this 300% variance. I followed your instruction - sadly enough, I didn't find it. Clearly it takes the eye of a master such as yourself to find such things. Please share it with us, so that we may all be humbled by your veracity.
 
Or, or, or..... How many exceptions do you make before it becomes a problem? You've already given three exceptions, how many more does it take before relevancy is void?

You've used faulty logic that actually weakens your argument. You state that SuperPi isn't relevant and use the three exceptions given above as reasons for its irrelevancy. However, those three "exceptions" are examples of situations where SuperPi results are more relevant than we can normally expect. If there is a hardware problem then SuperPi should show dramatically reduced performance indicative of a system-wide performance problem and consequent loss. Hence, SuperPi in this case (hardware faults) can be used as a barometer to diagnose system-wide hardware faults. Your exceptions actually showcase situations where SuperPi is more relevant, not less. I would argue that SuperPi is irrelevant in systems that are fully functioning as it tells you little about application performance. A large decrease in SuperPi score can only foretell a hardware problem that would decrease the performance of all applications (faulty RAM, faulty CPU, etc.) and actually has some utility (hence relevancy) when examining system-wide performance.
I'll be really interested to see how you justify 300% margin of error by the way.

I assume you know how that's calculated. I would like to see you show me how you figure 300% margin of error.
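For reference, here is a minimal sketch of how run-to-run margin of error is usually figured (the run times are hypothetical):

```python
# Run-to-run spread for repeated runs of one benchmark (hypothetical times).
runs = [18.7, 18.8, 18.6, 19.0, 18.7]  # seconds, hypothetical

spread_pct = (max(runs) - min(runs)) / min(runs) * 100
print(f"spread: {spread_pct:.1f}%")    # ~2.2% for these numbers

# A 300% margin of error would mean the slowest run took 4x as long as the
# fastest on the same hardware, e.g. 18.6s vs ~74.4s.
```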
 
I was dumb enough to actually go and look. I figured the Conroe launch would be a pretty big review for all three of the sites he mentioned, and they'd throw every benchmark in their arsenal at it. Guess what? Only Legit ran SuperPi as part of the review, Tom's and Anand didn't.

So please, Duby, show us this 300% variance. I followed your instruction - sadly enough, I didn't find it. Clearly it takes the eye of a master such as yourself to find such things. Please share it with us, so that we may all be humbled by your veracity.

Sorry, but it doesn't take a master to find superpi results... They are everywhere... Twist and wiggle... squirm and squeeze...

And as per usual, instead of arguing on the merits of the technology you choose innuendo... Still a shame, even after so many tries.
 
Sorry, but it doesn't take a master to find superpi results... They are everywhere... Twist and wiggle... squirm and squeeze...

And as per usual, instead of arguing on the merits of the technology you choose innuendo... Still a shame, even after so many tries.

Then they shouldn't be too difficult for you to find and post. You made the claim, not him. Now show us how you figured 300% margin of error.
 
You've used faulty logic that actually weakens your argument. You state that SuperPi isn't relevant and use the three exceptions given above as reasons for its irrelevancy. However, those three "exceptions" are examples of situations where SuperPi results are more relevant than we can normally expect. If there is a hardware problem then SuperPi should show dramatically reduced performance indicative of a system-wide performance problem and consequent loss. Hence, SuperPi in this case (hardware faults) can be used as a barometer to diagnose system-wide hardware faults. Your exceptions actually showcase situations where SuperPi is more relevant, not less. I would argue that SuperPi is irrelevant in systems that are fully functioning as it tells you little about application performance. A large decrease in SuperPi score can only foretell a hardware problem that would decrease the performance of all applications (faulty RAM, faulty CPU, etc.) and actually has some utility (hence relevancy) when examining system-wide performance.
I'll be really interested to see how you justify 300% margin of error by the way.

I assume you know how that's calculated. I would like to see you show me how you figure 300% margin of error.

More relevant how? You made the claim, show me how this application can be more relevant on broken hardware. How can it be more relevant on a loaded system? How? How can this application take these situations and show us how our hardware performs?

Instead of defending an application that has no real use, please try to help by contributing to other, more worthy projects. Superpi isn't the best pi calculator, and it isn't a benchmark. So what is the point? It can't show me how my hardware performs. It can't calculate pi in a timely fashion. The binary has already been compromised. It no longer gets updated. So what value does it have? Where is the relevance you speak of?

In pretty much every sense possible superpi has no value. None. At all. Period. It has no relevance.
 
More relevant how? You made the claim, show me how this application can be more relevant on broken hardware. How can it be more relevant on a loaded system? How? How can this application take these situations and show us how our hardware performs?

Instead of defending an application that has no real use, please try to help by contributing to other, more worthy projects. Superpi isn't the best pi calculator, and it isn't a benchmark. So what is the point? It can't show me how my hardware performs. It can't calculate pi in a timely fashion. The binary has already been compromised. It no longer gets updated. So what value does it have? Where is the relevance you speak of?

In pretty much every sense possible superpi has no value. None. At all. Period. It has no relevance.

It was claimed that large fluctuations in SuperPi results are caused by certain hardware issues. This situation (a large decrease in SuperPi performance) is relevant to the rest of system performance as it indicates a hardware failure that will decrease the performance of every application. It is quite easy to understand... a large fluctuation in SuperPi performance is a prelude to reduced performance in other benchmarks and applications. Faulty hardware will surely degrade the performance of the rest of the system. In this case, fluctuations in SuperPi results *are* relevant as they foretell degradation in other applications.

When the system is functioning normally then SuperPi results don't fluctuate and aren't really indicative of performance in other applications. At least when hardware is failing, SuperPi can indicate a trend and is therefore more relevant to system performance. This shouldn't really be too difficult to understand. Does that explain it better for you?

Now, I'm waiting on your 300% margin of error calculations and I'm assuming we'll be seeing those shortly? Yes, no?
 
It was claimed that large fluctuations in SuperPi results are caused by certain hardware issues. This situation (a large decrease in SuperPi performance) is relevant to the rest of system performance as it indicates a hardware failure that will decrease the performance of every application. It is quite easy to understand... a large fluctuation in SuperPi performance is a prelude to reduced performance in other benchmarks and applications. Faulty hardware will surely degrade the performance of the rest of the system. In this case, fluctuations in SuperPi results *are* relevant as they foretell degradation in other applications.

When the system is functioning normally then SuperPi results don't fluctuate and aren't really indicative of performance in other applications. At least when hardware is failing, SuperPi can indicate a trend and is therefore more relevant to system performance. This shouldn't really be too difficult to understand. Does that explain it better for you?
Wait, wait. You're telling me that SuperPi is the magic oracle, and when it starts to get poor results, I know that my hardware is about to break/degrade, and affect other applications? The only possible validation of such a statement is that SuperPi is CPU intensive, and therefore stresses the hardware. The same can be said of any other CPU intensive task, so I fail to see how this adds any significant value to an outdated fake benchmark. If you want to stress the CPU, run any real benchmark instead; they are all built for the purpose of stressing the CPU. Your argument is very thin.
 
Wait, wait. You're telling me that SuperPi is the magic oracle, and when it starts to get poor results, I know that my hardware is about to break/degrade, and affect other applications? The only possible validation of such a statement is that SuperPi is CPU intensive, and therefore stresses the hardware. The same can be said of any other CPU intensive task, so I fail to see how this adds any significant value to an outdated fake benchmark. If you want to stress the CPU, run any real benchmark instead; they are all built for the purpose of stressing the CPU. Your argument is very thin.

Nowhere did I call it a magic oracle, as I simply don't believe that is true. The term being discussed here is relevancy. Meaning, can SuperPi be used to determine diminishing performance from a system? Further definition here. Can SuperPi provide evidence of failing hardware (as well as other applications)? Why, yes, it can. SuperPi can indicate failing hardware (much like any other CPU-intensive app that you might also call relevant to the matter), therefore it has relevancy to the user and system as a barometer of overall system health. This is a case where SuperPi is relevant. I fail to see how SuperPi is any less "real" than any other benchmark. Are its results not repeatable? If you consider my argument thin then perhaps you need to question how you would define relevance, as SuperPi (when used as a trending indicator) can indicate improved or degraded performance in other applications (especially in our case when hardware failure is involved). Whether or not you find that relevance valuable will be a subjective judgment on your part. However, knowing that Duby opposes any relevant use of SuperPi, I am assuming that you would as well. That doesn't exclude it from having a degree of objective relevancy.

I postscript this by telling you that I really don't think SuperPi is all that valuable (or relevant) when it is not indicating trends. I would not run SuperPi to determine how fast my applications would be. However, if I ran SuperPi twice and noticed degraded performance, I would expect that my applications would also proportionally drop in performance, and I would start examining hardware. Hence its trending relevance in relation to the rest of the system. Whether or not you use SuperPi for that purpose (it's certainly not the only option) is up to you.
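As a concrete illustration of that trending use, here is a minimal sketch comparing a fresh run against a saved baseline (the numbers and the 10% threshold are arbitrary choices for illustration):

```python
# Trend check against a previously recorded baseline (hypothetical numbers).
baseline = 18.7   # seconds, measured earlier on known-good hardware (hypothetical)
current = 24.9    # seconds, today's run (hypothetical)
threshold = 0.10  # flag anything more than 10% slower (arbitrary cutoff)

slowdown = (current - baseline) / baseline
if slowdown > threshold:
    print(f"~{slowdown:.0%} slower than baseline, time to start checking hardware")
else:
    print("within normal run-to-run variation")
```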
 
Nowhere did I call it a magic oracle, as I simply don't believe that is true. The term being discussed here is relevancy. Meaning, can SuperPi be used to determine diminishing performance from a system? Further definition here. Can SuperPi provide evidence of failing hardware (as well as other applications)? Why, yes, it can. SuperPi can indicate failing hardware (much like any other CPU-intensive app that you might also call relevant to the matter), therefore it has relevancy to the user and system as a barometer of overall system health. This is a case where SuperPi is relevant. I fail to see how SuperPi is any less "real" than any other benchmark. Are its results not repeatable? If you consider my argument thin then perhaps you need to question how you would define relevance, as SuperPi (when used as a trending indicator) can indicate improved or degraded performance in other applications (especially in our case when hardware failure is involved). Whether or not you find that relevance valuable will be a subjective judgment on your part. However, knowing that Duby opposes any relevant use of SuperPi, I am assuming that you would as well. That doesn't exclude it from having a degree of objective relevancy.

I postscript this by telling you that I really don't think SuperPi is all that valuable (or relevant) when it is not indicating trends. I would not run SuperPi to determine how fast my applications would be. However, if I ran SuperPi twice and noticed degraded performance, I would expect that my applications would also proportionally drop in performance, and I would start examining hardware. Hence its trending relevance in relation to the rest of the system. Whether or not you use SuperPi for that purpose (it's certainly not the only option) is up to you.

The problem with this assertion is that you assume that Superpi will be the application that "enlightens" you. That by running Superpi, you can know that your hardware is failing. I simply disagree. Sure, it may indicate problems with the hardware, but any other applications you run will probably be having problems too. In that instance Superpi is just another application that is performing poorly. Again, in that role it has no value. None. It's just another application.
 
The problem with this assertion is that you assume that Superpi will be the application that "enlightens" you. That by running Superpi, you can know that your hardware is failing. I simply disagree. Sure, it may indicate problems with the hardware, but any other applications you run will probably be having problems too. In that instance Superpi is just another application that is performing poorly. Again, in that role it has no value. None. It's just another application.
For the most part I agree. Sure, SuperPI has some minuscule value in the fact that it can execute instructions, but that's not saying much...
 
For the most part I agree. Sure, SuperPI has some minuscule value in the fact that it can execute instructions, but that's not saying much...

And those instructions are only a very small handful of old integer instructions that don't get used much outside of productivity apps. And in those cases you'd be better served by a good productivity bench.
 
The problem with this assertion is that you assume that Superpi will be the application that "enlightens" you. That by running Superpi, you can know that your hardware is failing. I simply disagree. Sure, it may indicate problems with the hardware, but any other applications you run will probably be having problems too. In that instance Superpi is just another application that is performing poorly. Again, in that role it has no value. None. It's just another application.

Wrong. In that sense it is evidence. It provides evidence of the fact that there is a hardware problem. A hardware problem has a known causal relationship to system performance as a whole. It's a transitive relationship. Imagine this relationship:

B is a known, determinant event or state that has a causal relationship with effect C. We also have a variable A that can provide evidence for the existence of B. Knowing that B -> C, we can say, transitively, that A provides evidence for effect C. Example: if I have a piece of evidence that indicates there will be increased sunspot activity on the Sun and we know that sunspot activity disrupts satellite communications then we can also say that I have evidence of increased satellite communication disruption. I am not claiming that A is the determining factor for event B, only claiming that it provides evidence for a certain state (in this case increased sunspot activity). Using the standard definition of relevant, an evidentiary relationship is all that is required for a piece of information to reach the threshold of relevance.

In our case, it was claimed that SuperPi times only show strong fluctuation when there is a hardware problem which thereby degrades SuperPi performance. The question then becomes: does drastically reduced performance in SuperPi (as it would almost certainly be in this case) indicate reduced system performance? Meaning, does it provide evidence (and is it therefore relevant) in determining the relative speed of the system as a whole in this situation? So let's set up the events as described previously:

B is a state or event (hardware failure) that has a known and determinant causal relationship with C (reduced system performance). I don't think anyone would dispute that a cache failure or RAM failure would degrade system performance as a whole. SuperPi will be A in this case as its results are variable (albeit also caused by B). If SuperPi results suddenly decrease in performance then we have evidence (although not determinative) that state or event B exists. Here, that means the sudden drop in SuperPi performance is evidence of a hardware failure. Knowing that hardware failure, B, decreases system performance, we know, again transitively, that a sudden drop in SuperPi performance is evidence of decreased system performance, C. This evidentiary relationship is all that is required to call A relevant (as per the definition given above).

You called event or state B (hardware failure) the exception that proves that A (SuperPi) is irrelevant. In fact, this is a situation where SuperPi is probably more relevant than what we'd usually expect to find. A static SuperPi score is really only evidence for how the machine will perform in algorithms similar to the one employed by SuperPi. Since I do not know what these programs are (I don't have access to much source code) it would be very difficult to prove the relationship between B and C above (I haven't effectively shown that efficient computing of SuperPi, state B, affects the performance of application or system C). Your exceptions actually show an increased relevance of A to C, which is what I've been saying all along.

The fact that other applications might be used to provide evidence for B is rather inconsequential. In my opinion, Visaris and yourself have elected to take a rather egocentric view on the matter. Simply because you choose not to use said application to determine anything does not mean that it *can't* be used in some objective sense to provide evidence for degraded system performance. Now, I don't wake up every morning and run SuperPi to determine if my cache failed the previous night, but if I noticed a general system decrease in performance I would run SuperPi to determine if its speed had decreased from a previously measured baseline. In this way, I am using it as evidence to a larger system problem and it has become relevant. Whether it is the first application where a degradation is to be noticed or the last is rather irrelevant :p.
 
Wrong. In that sense it is evidence. It provides evidence of the fact that there is a hardware problem. A hardware problem has a known causal relationship to system performance as a whole. It's a transitive relationship. Imagine this relationship:

B is a known, determinant event or state that has a causal relationship with effect C. We also have a variable A that can provide evidence for the existence of B. Knowing that B -> C, we can say, transitively, that A provides evidence for effect C. Example: if I have a piece of evidence that indicates there will be increased sunspot activity on the Sun and we know that sunspot activity disrupts satellite communications then we can also say that I have evidence of increased satellite communication disruption. I am not claiming that A is the determining factor for event B, only claiming that it provides evidence for a certain state (in this case increased sunspot activity). Using the standard definition of relevant, an evidentiary relationship is all that is required for a piece of information to reach the threshold of relevance.

In our case, it was claimed that SuperPi times only show strong fluctuation when there is a hardware problem which thereby degrades SuperPi performance. The question then becomes: does drastically reduced performance in SuperPi (as it would almost certainly be in this case) indicate reduced system performance? Meaning, does it provide evidence (and is it therefore relevant) in determining the relative speed of the system as a whole in this situation? So let's set up the events as described previously:

B is a state or event (hardware failure) that has a known and determinant causal relationship with C (reduced system performance). I don't think anyone would dispute that a cache failure or RAM failure would degrade system performance as a whole. SuperPi will be A in this case as its results are variable (albeit also caused by B). If SuperPi results suddenly decrease in performance then we have evidence (although not determinative) that state or event B exists. Here, that means the sudden drop in SuperPi performance is evidence of a hardware failure. Knowing that hardware failure, B, decreases system performance, we know, again transitively, that a sudden drop in SuperPi performance is evidence of decreased system performance, C. This evidentiary relationship is all that is required to call A relevant (as per the definition given above).

You called event or state B (hardware failure) the exception that proves that A (SuperPi) is irrelevant. In fact, this is a situation where SuperPi is probably more relevant than what we'd usually expect to find. A static SuperPi score is really only evidence for how the machine will perform in algorithms similar to the one employed by SuperPi. Since I do not know what these programs are (I don't have access to much source code) it would be very difficult to prove the relationship between B and C above (I haven't effectively shown that efficient computing of SuperPi, state B, affects the performance of application or system C). Your exceptions actually show an increased relevance of A to C, which is what I've been saying all along.

The fact that other applications might be used to provide evidence for B is rather inconsequential. In my opinion, Visaris and yourself have elected to take a rather egocentric view on the matter. Simply because you choose not to use said application to determine anything does not mean that it *can't* be used in some objective sense to provide evidence for degraded system performance. Now, I don't wake up every morning and run SuperPi to determine if my cache failed the previous night, but if I noticed a general system decrease in performance I would run SuperPi to determine if its speed had decreased from a previously measured baseline. In this way, I am using it as evidence to a larger system problem and it has become relevant. Whether it is the first application where a degradation is to be noticed or the last is rather irrelevant :p.

I can perfectly understand your reasoning, but in this role Superpi has no value. You'd be far better served by running a productivity bench, or an encoding bench, or a HDD bench. You need to observe what is failing, and then confirm it with a bench that is capable of doing so. If it is memory that you believe to be failing, then run a memory bench. If you believe it is a HDD, then run a HDD bench. There are specific problems, with specific points of failure, and Superpi doesn't identify any point of failure other than a handful of very old integer instructions, and in that case you'd be better served with a productivity bench.
 
You can't win. In case you haven't noticed, he's quietly discarded the whole 300% variance argument, and is now arguing relevance with you, which is a very subjective thing. Easier to argue, since what's relevant to one person isn't necessarily relevant to another. We all know Kyle's gone on a crusade against the synthetic benchmark, 3DMark being first on the list. Yet some people still consider it somewhat useful, if not definitive. Case in point: the huge thread about the Barcelona 3DMark numbers.

It consistently gives a lower number on faster CPUs and a higher number on slower ones. Sounds like a benchmark to me.
 
You can't win. In case you haven't noticed, he's quietly discarded the whole 300% variance argument, and is now arguing relevance with you, which is a very subjective thing. Easier to argue, since what's relevant to one person isn't necessarily relevant to another. We all know Kyle's gone on a crusade against the synthetic benchmark, 3DMark being first on the list. Yet some people still consider it somewhat useful, if not definitive. Case in point: the huge thread about the Barcelona 3DMark numbers.

It consistently gives a lower number on faster CPUs and a higher number on slower ones. Sounds like a benchmark to me.

As was already clearly stated and explained, all you gotta do is look. Superpi benches are everywhere. Once *you've* done that and seen for yourself with your own eyes, and comprehended it with your own brain, then... then you'll understand...

Until then you continue to rely on innuendo as your only aid, disregarding the merits of a technical discussion, and still, even now, it's a shame.
 
I looked. I'm not going to waste my time proving your points for you. If this variance is so easy to find as you claim, then why not produce evidence? It'd be a great way to shut me up and make me look like a moron.

Making wild claims without producing a shred of proof is bad for your credibility. If you came in here, with links to three different sites, touting this huge variance of yours, what argument could there be, barring some sort of huge oversight on the part of one of the reviewers? You claim it's really easy to do, yet you don't do it yourself and you berate us for not doing it.

If you have the capability to end this argument effectively by producing this evidence, yet choose not to do so, one would infer that you have something to gain from continuing this argument.

Speaking of continuing this argument, I move that this thread be closed. We've left the original topic miles behind us.
 
Why would I want to shut you up? Or make you look like a moron? You're clearly a strong-willed, intelligent person. These are admirable traits that give you just as much right to be here as I have. I don't want to shut you up, or make you look like a moron. With the exception of your innuendo that I'm stupid, I think your contributions here have been awesome. As long as you're willing to participate in a technical discussion, there's no need to make anybody look like a moron, or to try and shut them up.

Though I do have to question your motive. A lot of people subconsciously expect others to behave the same way they do. If you are asking me why I don't try to make you look like a moron, and why I don't try to shut you up, the only thing I can think of is that you feel guilty for trying to do these things to me. It's OK, you don't need to feel guilty. Though I ain't gonna go anywhere.

Off topic? Isn't this thread about superpi results for Barcelona? And the current discussion about the usefulness of superpi? It doesn't seem like it's very far off topic to me...

If you want to compare superpi results you can certainly do so; if not, then you certainly don't have to. My suggestion was simply an exercise for those interested.
 
Man, I'm good: http://www.hardforum.com/showpost.php?p=1029830909&postcount=3 <- prediction & http://www.hardforum.com/showpost.php?p=1029831868&postcount=9 <- reasoning

(Note: This is back when everyone was calling Barcelona the K8L... K8L looks dumb now. :p)
me said:
So K8L is 10-13 months away, from historical AMD tape-outs to release.
me said:
I was being generous with the low end of my estimate.

Read who the wrongest person is in that thread. :D

Who else called it virtually on the button back in August 2006 (and also how false the Q1 rumors were)? I was only 5 days off. :p
 
Though I do have to question your motive. A lot of people subconsciously expect others to behave the same way they do. If you are asking me why I don't try to make you look like a moron, and why I don't try to shut you up, the only thing I can think of is that you feel guilty for trying to do these things to me. It's OK, you don't need to feel guilty. Though I ain't gonna go anywhere.
That's actually a very intelligent point. You're just looking at it backwards. I subconsciously and consciously can't understand why you wouldn't want to send a clear, concise message with all your ducks in a row. I've been known to do this myself when I want to make a point crystal clear. The part about the other guy looking like a moron is more of a byproduct than an actual aim...my aim was to get the truth out there and combat FUD, rather than destroy someone's credibility.

If you truly believe that what you're saying is the truth, then you should want to present your case in a succinct manner, with sources to back it up, rather than lowering your own credibility by ignoring repeated requests for proof. At least, that's how I think.
 
Yes... show us PROOF.

On a correctly functioning system that has no random crap running in the background eating up CPU or RAM resources... the variance in SuperPI is VERY LOW.

I can run it numerous times on my computer, and it shows almost exactly the same time every time.

Or are you talking about the difference between the 512K and 2M runs? That might be a 300% difference.
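That last point is worth spelling out: comparing runs of different digit counts as if they were the same test manufactures exactly that kind of "variance" (the times below are hypothetical):

```python
# Mixing different SuperPI workloads creates fake "variance" (hypothetical times).
run_512k = 11.0  # seconds for a 512K-digit run, hypothetical
run_2m = 55.0    # seconds for a 2M-digit run, hypothetical

apparent = (run_2m - run_512k) / run_512k * 100
print(f"apparent 'variance': {apparent:.0f}%")  # 400%
# These are different workloads; the number says nothing about the
# run-to-run repeatability of either test.
```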
 
Man, I'm good: http://www.hardforum.com/showpost.php?p=1029830909&postcount=3 <- prediction & http://www.hardforum.com/showpost.php?p=1029831868&postcount=9 <- reasoning

(Note: This is back when everyone was calling Barcelona the K8L... K8L looks dumb now. :p)



Read who the wrongest person is in that thread. :D

Who else called it virtually on the button back in August 2006 (and also how false the Q1 rumors were)? I was only 5 days off. :p

Yeah, I was a bit off, though 6-9 months still holds. I was assuming the tape-out at that time was going to be the release revision, and I was wrong. What was that? B0? Turns out that had some problems and couldn't be released. And two more tape-outs have occurred since then.
 
Yeah, I was a bit off, though 6-9 months still holds. I was assuming the tape-out at that time was going to be the release revision, and I was wrong. What was that? B0? Turns out that had some problems and couldn't be released. And two more tape-outs have occurred since then.
There was an unforeseen problem with B0. It's largely what caused the delay although some have previously speculated the ATI acquisition stalled things a bit, which I don't entirely buy. If it wasn't for the problem with the B0 stepping, we would have Barcelona on the shelves already.
 