GTX 680 & 690 CORE 17 PPD

MGMCCALLEY

[H]ard DCOTM SEP 16 / NOV 17
Joined: Jul 3, 2012 · Messages: 361
Just two days ago, I broke down and switched from GPU Tracker V2 to FAH 7.3.6 so I could get my GTX 690 cranking on CORE 17. I have to say that after two days I'm a bit disappointed in the results. With a mild, stable overclock of about 10%, temps well within range, and months on GPU Tracker, I was pulling in about 35-40K PPD PER GPU, for about 75K PPD with the card running 24/7. After 7.3.6, setting only client-type beta in the extra slot options, I'm down to a combined 55K PPD folding project 7662 units. Now, I've seen these posts:

http://folding.typepad.com/news/2013/03/introducing-foldinghome-core-17-gpu-zeta-core.html

http://folding.typepad.com/news/201...about-2x-increase-in-ppd-for-gpu-core-17.html

And I assumed at least SOME increased PPD. So my questions are:

1. What am I doing wrong?
2. I know I'm running core 17, so are there any other steps that need to be taken to reach a higher PPD?
3. What the heck is OpenMM 5.1, and how do I get my grubby hands on it? I'm assuming I'm running OpenMM 5.0 now?

They use the 680 as an example in the second article, so I'm assuming anything that applies to it will cross over nicely to the 690. I could really use some expert GPU folder advice.


 
Core 17 on Windows will be upgraded to OpenMM 5.1 soon™. The Linux version of core 17 already uses 5.1, but it is in very limited beta with no points paid out.

NVIDIA implemented OpenCL very badly, so you won't get as many points as from AMD cards. For example, on core 17 with OpenMM 5.1 a 7970 can get around 150K PPD, while a 680 would get around 60K PPD.
Note these are not final results; they're taken from internal beta results and may change.
This won't get better until NVIDIA releases cards with a better implementation of OpenCL.
 
NVIDIA implemented OpenCL very badly, so you won't get as many points as from AMD cards. For example, on core 17 with OpenMM 5.1 a 7970 can get around 150K PPD, while a 680 would get around 60K PPD.
Note these are not final results; they're taken from internal beta results and may change.
This won't get better until NVIDIA releases cards with a better implementation of OpenCL.

We actually have no idea what a Windows OpenMM 5.1 core 17 will give. It's very premature to even speculate at this point. The second part is true, NVIDIA has not implemented a very clean OpenCL driver yet, but that doesn't mean Kepler won't produce good PPD with future versions of core 17.
 
Is a 7970 making 150K PPD? Mine was just under 40K.

I parked my GPUs because I'm power limited. At 150K PPD, it becomes an active folding machine, and I retire a slower machine.
 
Is a 7970 making 150K PPD? Mine was just under 40K.

I parked my GPUs because I'm power limited. At 150K PPD, it becomes an active folding machine, and I retire a slower machine.


I too had read claims like this. Not happening atm, but very soon. That, coupled with being an AMD fanboy, is why I just sold my 670 and picked up a 7970.
 
So I'm reading from the following post:

http://forums.pcper.com/showthread.php?480534-F-H-v7-Client-with-GPU-QRB-(Core-17)-Setup-Guide

... that the Quick Return Bonus has been active for a while? At least the setup guide alludes to that. I thought only beta testers had access to beta units, but I've been folding under Core 17 for a while now.
So, going on the assumptions that QRB is already active but OpenMM 5.1 is what is really needed to make these fly, I've reverted back to GPU Tracker v2. I seem to be making about 15-20% higher PPD on my GTX 560 and GTX 690 on GPU Tracker v2 versus the newest F@H client and CORE 17.
Can anyone else verify these statistics and assumptions?
 
Anyone can get beta units by using the beta flag, but you won't get any official help with them if you're not part of the beta team, as only beta members have write access to the FF beta forum and any beta discussion at FF outside of that board is deleted/moved. GPU QRB has not had an official launch yet, as the core 17 projects are still in beta.
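For anyone wondering how QRB changes the math: the published F@H bonus formula scales credit by the square root of how quickly a WU is returned. A rough Python sketch; the base credit, k factor, and timeout values below are made-up illustrations, not real project 7662 constants:

```python
import math

def qrb_credit(base_credit, k, timeout_days, elapsed_days):
    """Quick Return Bonus estimate per the published F@H bonus formula:
    credit = base * max(1, sqrt(k * timeout / elapsed)).
    base_credit, k, and timeout_days are per-project constants."""
    return base_credit * max(1.0, math.sqrt(k * timeout_days / elapsed_days))

# Illustrative numbers only -- NOT real project constants.
fast = qrb_credit(800, 0.75, 3.0, 0.5)   # WU returned in half a day
slow = qrb_credit(800, 0.75, 3.0, 10.0)  # too slow for any bonus
print(round(fast), round(slow))  # the fast return earns roughly 2x base
```

The max(1, ...) floor is why a slow card still earns base credit; the square root is why faster cards pull away non-linearly once QRB is active.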

You didn't need to go back to GPU Tracker. Removing the client-type:beta slot option from v7 would have accomplished the same thing. From what I've read about Kepler performance a 15-20% performance drop on core 17 sounds about right.
 
I would suggest against using the beta flag if you are not willing to spend time trying to figure out issues and report errors.
The general rule is that PPD will always be worse on beta units, as they are more unstable and aren't benchmarked.

IIRC, proteneer isn't going to release a Windows core 17 with OpenMM 5.1 until the Chimp Challenge ends, as some people are worried the beta will be abused and the required testing won't get done.
The Linux core 17 uses OpenMM 5.1 but yields zero points at the moment and is in internal beta.
 
I knew I didn't have to revert to GPU Tracker, but I just like its layout better. I really like the client stats layout; it gives a good idea of overall performance over time.
I guess I went with beta because I thought CORE 17 would fold at a greater rate on the 690. But I'm guessing the only increase in PPD I'll get is when OpenMM 5.1 and GPU QRB finally do get released.
So I've set my calendar for the 24th, the day after CC, and I'll switch clients again to see if updates have been released. Thanks for the advice, guys. I'll try to remember to update stats on this thread after the switch.
 
To give you a heads up, a 670 on OpenMM 5.1 is currently getting a TPF of 1 min 53 secs on the internal core. That's 68K PPD. A 660 Ti is getting 2 min 11 secs, which gives 55K PPD. Note that people aren't getting any points yet, though, as internal cores don't give points.
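For reference, here's roughly how TPF figures like those convert to PPD. Core 17 WUs report 100 frames, so a back-of-envelope sketch in Python; the ~8,900 credit/WU figure is an assumption back-fitted to the quoted 68K, not a published value:

```python
def wus_per_day(tpf_seconds, frames_per_wu=100):
    """Core 17 WUs report 100 frames, so one WU takes TPF * 100 seconds."""
    return 86400.0 / (tpf_seconds * frames_per_wu)

def ppd(tpf_seconds, credit_per_wu, frames_per_wu=100):
    """Daily points: WUs completed per day times credit per WU (QRB included)."""
    return credit_per_wu * wus_per_day(tpf_seconds, frames_per_wu)

# The 670 above: 1 min 53 s TPF -> about 7.65 WUs/day. At an assumed
# ~8,900 credit/WU that lands near the quoted 68K PPD.
print(round(wus_per_day(113), 2), round(ppd(113, 8900)))
```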
 
That is impressive. For some reason, I just don't believe those numbers will be consistent after the release. If they are, my GTX 690 could in theory earn 150K PPD. That's competing with some of the lower-end 4P rigs, and all on a single card. That might upset some folks who invested serious cash in a 4P system plus a lot of electrical costs. Granted, it wouldn't be the first time a points restructure from F@H upset someone. I'm very curious now; I haven't been this impatient for a folding update since I started Bigadv years ago.
 
So I didn't want to post prematurely, but there was a definite improvement once the new units came out. I've been folding 8900s for a while now, at 4 minutes 26 seconds TPF. That's just under 90K per core per day, for an estimated total of just under 180K PPD. This is a vast improvement over previous attempts and previous units; the 7663s ran about 10K PPD under the 8900s in total.

The new software also allows me to do basic tasks, like browsing pages with Adobe Flash videos, while still using the cards for folding; previous versions would cause lag and stutter. I'm not sure if this is a function of the new software or of my video card still having some unused fraction that goes towards that. I did notice that two cores of my i7 970 are "pegged": even though it distributes the workload across other cores, it always shows FAHCore_17 at 17 percent (one full core, rounded up). This leads me to believe that my CPU might actually be holding back my GPU in PPD. I had never had this problem previously, but who knows what might come up with the new software. In EVGA Precision X I also noticed that one GPU typically runs at 95-98 percent usage, while the other frequently dips into the high 80s. And on occasion they both dip in GPU usage and power usage, then seem to recover. The overclock is low, temps are totally within operating range, and I've increased the fan profile to further compensate for heat production.

These issues really aren't complaints, but rather observations. I haven't come across any failing units, and my PPD is higher than when I had 4 hex-cores running BigAdv, but with only 540 Watt draw at the wall. I'm one happy camper. :D
 
A few comments; others might chime in:
1) 180K looks way better than your originally posted PPD in April :)
2) To compare: my Titans are currently at 175K at their "zero admin frequency" (200K with more OC); a GTX 780 is 150-155K for stable perf, 170K with OC.
3) Don't mind the CPU usage. It's the OpenCL driver from NVIDIA, busy doing nothing.
4) The dip in power usage is due to the checkpointing taking place.
5) The P8900 is more "efficient" than P7663 (another x17 core): more power, more heat

rgds,
Andy
 
Andy,

Thanks for the comments. I hadn't considered checkpointing as the cause; I'll have to find a tool that shows a time span on the dips. I believe I currently have checkpoints set to 10 minutes, so it should be easy to measure if I find the right tool.
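One rough way to timestamp those dips without a dedicated tool is to poll nvidia-smi and flag the low samples. A Python sketch, assuming nvidia-smi is on your PATH and actually reports utilization for your card (not every GeForce driver exposes it):

```python
import subprocess
import time

def sample_gpu_util():
    """One utilization sample per GPU via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"], text=True)
    return [int(v) for v in out.split()]

def find_dips(samples, threshold=90):
    """Indices of samples below threshold, to line up against the
    checkpoint interval (one index = one polling period)."""
    return [i for i, util in enumerate(samples) if util < threshold]

if __name__ == "__main__":
    log = []
    for _ in range(120):            # ~10 minutes at 5 s per sample
        log.append(sample_gpu_util()[0])
        time.sleep(5)
    print("dips at sample indices:", find_dips(log))
```

With a 5-second polling period, dips recurring every ~24 samples would match core 17's checkpoint-every-2% behavior rather than the client's global checkpoint setting.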

And it's interesting that it would soak a complete core doing "nothing". Is it somehow "reserving" that core for its own use, to keep it from being used by other applications? I hope they come out with a more optimized version if that's the case. I have plenty of CPU power to spare, but heat production for idle use seems wasteful.

And what do you mean by "zero admin frequency"? I haven't heard of that term in reference to folding. Is that a Titan-specific reference? I myself considered Titans for folding, but only had budget for a single $1k card, and actually purchased my 690 just a couple of months before the Titan release. I think I would have gone with the 690 anyway, given that I'm a gamer, and wanted to drive 3 1080p monitors. The 690 is still (for now) the fastest single-card GPU for gaming.

And thank you again for your extensive testing on the 780s and Titans. I look forward to your future posts.
 
Andy,

Thanks for the comments. I hadn't considered checkpointing as the cause; I'll have to find a tool that shows a time span on the dips. I believe I currently have checkpoints set to 10 minutes, so it should be easy to measure if I find the right tool.

And it's interesting that it would soak a complete core doing "nothing". Is it somehow "reserving" that core for its own use, to keep it from being used by other applications? I hope they come out with a more optimized version if that's the case. I have plenty of CPU power to spare, but heat production for idle use seems wasteful.

Core_17 is hard-coded to do a checkpoint every 2%; it's not adjustable (and is separate from the global checkpoint setting). This 'feature' may or may not make it out of beta (I suspect it will).

NVIDIA using a full CPU core is a driver issue. It won't change until NVIDIA changes the default behavior of their driver, which I doubt will happen. The behavior is adjustable in CUDA, but not in OpenCL.
 
And what do you mean by "zero admin frequency"? I haven't heard of that term in reference to folding. Is that a Titan-specific reference? I myself considered Titans for folding, but only had budget for a single $1k card, and actually purchased my 690 just a couple of months before the Titan release. I think I would have gone with the 690 anyway, given that I'm a gamer, and wanted to drive 3 1080p monitors. The 690 is still (for now) the fastest single-card GPU for gaming.

And thank you again for your extensive testing on the 780s and Titans. I look forward to your future posts.

You are welcome.

"Zero admin frequency" is a term I made up after I started folding a few weeks ago.

I learned that I basically have 4 frequencies available to set:
1) Stock frequency: slow and boring
2) Zero admin frequency: the frequency at which, over the course of weeks, you have no admin overhead with your folding rigs, even with multiple GPU cards -> zero admin
3) WU frequency: the top OC speed at which your card is still able to finish one WU
4) Peak OC frequency: a frequency that is stable just long enough to take a screenshot and show the incredible performance of your card to others ....

I guess the values you often see floating around in folding forums are of types 3 and 4. "Hey look, my 4-year-old card delivers xxx,000 PPD!"
If your goal is folding and creating tons of points over longer periods of time, type 2 is the one I like most.

:)
Andy
 
Quisarious, thank you for your input. It's a question that I definitely wanted an answer to.

Andy, I've been doing that exact same thing for years, just never had a name for it. I'd push CPUs and GPUs to their limits, test the heck out of them, find a frequency that was rock-solid, then back it off 5% or so to accommodate room temperature fluctuations and dust accumulation.

Considering I'm gone most times, most days, I too wanted something that would just keep chugging along with "zero admin" input. And I've been folding since '05, I believe, and since '07 for [H]ardOCP. Having four low-end Bigadv rigs was as serious as I ever got, and it didn't compare with our top 50 producers. Now I'm running only a single quad-core and my GTX 690, coming close to a 150K PPD average given that I frequently use them for other projects. It's nice, though, to just "set it and forget it" and know that it's going to do the work while I'm away.

I agree that too many users post their theoretical max PPD when overclocking and typically never see those numbers long-term. So I should clarify: although my theoretical PPD is currently 180K on my 690, my actual is around 130K PPD because I'm constantly interrupting it for gaming or other reasons. Although I do believe that at my solid 1175 GPU / 3200 MEM clocks, I could hit that 180K PPD target if I left it to fold 24/7 without interruption. Possibly more in a Linux setting, which I refuse to do for my main rig.
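The theoretical-vs-actual gap is basically a duty-cycle calculation; a trivial sketch (the 17.5 h/day figure is an assumption chosen to match the numbers in this post):

```python
def effective_ppd(theoretical_ppd, folding_hours_per_day):
    """Scale a 24/7 PPD figure by actual folding time. QRB makes even
    this optimistic: interruptions also shrink the bonus on the WUs
    that do complete, so real output usually lands a bit lower."""
    return theoretical_ppd * folding_hours_per_day / 24.0

# 180K theoretical at roughly 17.5 h/day of actual folding comes out
# near the ~130K observed above.
print(round(effective_ppd(180000, 17.5)))
```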

I have considered adding another 690, and I think it would be interesting to see how they scale, or whether I would get the full performance of another card. It's not like SLI, so I think I would be close to 350-360K if I ran them 24/7. The cards themselves are more power efficient than two 680s, partially because of the architecture and partially because the chips are binned that way. So reasonably, dual GTX 690s would yield higher performance per watt, and in turn higher performance per dollar long-term, even given the slightly higher initial cost of the cards. And given the proper case, CPU, and PSU, a quad 690 would be feasible. Max power consumption for the 690 is listed at 300 watts, so four of them would likely require two power supplies, or one heck of a hefty 1500-watt-or-so PSU. Maybe this one:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817256054

On this board:

http://www.asus.com/ROG_ROG/RAMPAGE_IV_EXTREME/

That might be the one mentioned previously for its 40 amp 5 volt rail. It specifies 8 PCI-E connections; I can't tell from the photo whether they're all 8-pin or not. But I'd LOVE to see something like that happen. I guess the question is: even if it comes in at 1000 watts, is that worth the expenditure for 720K PPD in one rig? 4P Mafia, care to comment?
 
My 2 cents:
I switched one of my stock GTX 680s over to core 17 on Linux (Ubuntu 12.04) and am observing about 80K PPD whilst using about 240 watts at the wall. Other system specs as follows:
E5506 (not utilized for folding)
Asus P6T Deluxe V2
4 GB G.Skill RAM (3rd stick died, so only in dual channel)
64 GB SSD
2 120mm case fans
Corsair GS 600 PSU

So far, I'm pretty happy with everything. Being able to use Linux is such a bonus imho.
 
Thanks for the info, TYoda. Linux is getting the cold shoulder from many folders because F@H doesn't run very well with AMD cards on Linux yet. I'm not currently running it because the rig I have mine on is my main server, main gaming rig, and main folding rig all at the same time.

I should say that my power usage isn't just from F@H. I also have a 3 TB x 8 drive RAID 6 array (about 16.5 TB formatted), one SSD, and one additional 3 TB storage drive. They're never really idle, and I run an older 28-inch LCD off the same UPS system. So the power usage I list is for the complete system, running actively. I'm sure I could drop about 60 watts turning off the monitor, and there would be quite a dip if I didn't have the RAID 6 attached. So I'd be interested to see a dedicated GTX 690 folding rig without all the extra hardware. If you're only pulling 240 for a single 680, my guess would be about 150 watts per GPU while folding? If that's the case, it would be very possible to run four 690s in a single rig, given that each GPU in a 690 uses less than a 680 because of binning and a slight underclock. Still, at the moment the PPD/watt doesn't look like it would compete with a 4P system. The real advantage would come when it's time to game. :D
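Interestingly, the two rigs quoted in this thread land at almost identical efficiency. A quick sketch using the wall-draw figures above (whole-system numbers, so only rough):

```python
def ppd_per_watt(ppd, wall_watts):
    """Whole-system folding efficiency from PPD and wall draw."""
    return ppd / wall_watts

# Figures quoted in this thread: TYoda's GTX 680 Linux rig vs. the
# 690 rig (which also powers a RAID 6 array and a monitor).
# Both come out to roughly 333 PPD per watt.
print(round(ppd_per_watt(80000, 240)), round(ppd_per_watt(180000, 540)))
```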
 