A2 core: can someone post some frame times?

OldGuru

I'm trying to get some data on the A2 core to compare against my systems. What kind of frame times are you getting, and what's your setup? Thanks
 
I've been meaning to ask this question but didn't want to start a new thread. How do we acquire the new core?

 
Add the -advmethods flag in the client config.

I saw frame times posted over at the FAH home forum. He said they were native Linux installs (I took that as running on all 4 cores of a quad), but his numbers looked like other posted 2-core VMware numbers. And one of my teammates is getting them, but he's afraid my Barcelonas are kicking Intel's arse, so he won't tell me. LMAO
 
Add the -advmethods flag in the client config.
Thanks, I'm going to be reinstalling all my SMP clients in the near future, so I'll remember to add that flag. Does the A2 core work better on two cores as well, or is it optimally designed for a greater number of cores?
 
Well, they made it to scale better on more cores, and it keeps 8 cores at ~95%, but I was getting ready to retire my old Opty 180 and A2 brought new life to it.
 
Normally, A2 should behave the same as the A1 core on dual-cores. However, on quad-cores and above, A2 should scale much better (doubling the PPD vs A1). I don't have lots of numbers myself since I always run double VM instances (no native Linux quad), but in the past, when I tested some 2662 units with A2, it gave me about 4500-5000 PPD on a 3.2 GHz quad.

 
So, like 7-minute frames per 2 cores? On A2 I got the same frame time I got last year on the Opty 180, but the units were worth 1920 points instead of 1760. Over the last few months the 2605s got longer by almost 3 minutes. VMware and dual cores are what's getting them (A2); I got 2 on my 8-core box. Thanks. I had decided to turn all my SMPs off. Since I posted over there that dual cores are getting them, they changed things. When i7 comes, A2 will be everywhere!
 
The only thing I can add is I've not seen that many A2 core WUs on my Q6600s, but they've been consistent for my E6600, or when I was running VMware on one of my quads (i.e. WU 2662, in either Ubuntu LinSMP or WinXP WinSMP; the VM was Ubuntu v8.04) :)


I'm still scratching my head on the less frequent "hit or miss" thing on the quads. I read on this forum that getting the A2 core was kind of a "luck of the draw" proposition ATM, but that the A2 core would be the default in the future. ;)

Just remember, in Windose and Linux, to activate the -advmethods flag: on Linux at the command line, and on Windows in the properties under the desktop icon (or the folder icon if you didn't shortcut it to the desktop; either way, use the -advmethods flag before initial configuration). I even add it as a "side bet" on the Windows WinSMP client during the -configonly phase, under extra flags or something like that :confused:
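
For the Linux side, a minimal sketch of both approaches (assuming you launch fah6 from whatever directory you installed the client into):

Code:
# set it once during configuration (answer "yes" to advanced methods)
./fah6 -smp -configonly

# or just pass the flag on every launch
./fah6 -smp -advmethods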

I'll be glad when all WUs utilize the A2 core, because it scales CPU usage much, much better than the A1 core on a quad-core CPU :p

FOLD ON!

 
However, on quad-cores and above, A2 should scale much better (doubling the PPD vs A1). ... when I tested some 2662 units with A2, it gave me about 4500-5000 PPD on a 3.2 GHz quad.
What should I expect from an 8-core system with the same WU?

The thing that makes me skeptical about this is the overall production on only one client. I guess I would have to try it to find out, but I think the best way to configure quads and higher cores is with VMs. Having a single client running on many cores would be bad if it crashed since you'd lose the work theretofore accomplished on all the cores. It's one of the reasons I never ran a single client across the maximum number of cores on a given system. If one VM crashes, you lose only the work completed on 2 cores. So, unless I see a major benefit to running one client on that many cores, I think I'll stick to a multiple client setup until it becomes unfeasible.

 
It does scale well for quad cores, but dunno how far with octo-cores, so as you guessed yourself, you should experiment ;)

However, you have a good point: many clients mean less of an impact when a WU fails, versus a fully idle box when a single A2 client is taking all the cores.

 
Normally, A2 should behave the same as the A1 core on dual-cores.


The E6700 dual core I have shows a great improvement with the A2 core vs. A1.
90% of the WUs are 2662; at 3.3 GHz, A2 does 3000-3100 PPD vs. 2300-2400 PPD on A1.
 
As I posted up there, on the 8 AMD Barcelona cores all 8 cores run from 95-100%, and the frame time is exactly half of 4 cores'. I tested it with taskset -c 0-3 ./fah6 -smp
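
If anyone wants to repeat that comparison, the two runs looked roughly like this (core numbers are just an example for an 8-core box):

Code:
# pin the SMP client to the first 4 cores
taskset -c 0-3 ./fah6 -smp

# then run across all 8 and compare the frame times
taskset -c 0-7 ./fah6 -smp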

I've seen it posted over there, over and over, that return time is the most important thing, but what is actually happening doesn't support that. They claim they want quad boxes, not you running VMware, but they give huge rewards for that kind of setup, and have done so for the past year.
Then I've seen FAH people complain because people are running VMs, but they could change that anytime they want, just like they got everyone to run out and buy NVIDIA. My conflict at the home site has always been over these issues: what you say and what you do don't match up! When I point that out they attack, and claim "we are doing exactly what we say"! Anyway, I just think it's funny. LMFAO

jws2346, thanks for confirming what I know!
 
I'd say the A2 core may be pretty good. I have no clue what I would be getting on this protein if it were running the A1 core. As far as points are worth, I would guess it would get around the same PPD as the dreaded 2665s.

This is running in a Linux VM on a [email protected], with (I believe) 512 MB RAM given to the VM, and obviously only running on 2 cores.

I normally get 2605s on this client (the last one on the list) and it usually does between 2200-2300 PPD, as you can see by the client above it, which is the other VM client running on that machine. If I managed to get these on all of my VMs, my PPD would probably go up another 2500-3000.

[screenshot: a2corerd8.png]


 
From the folding forums ...........
"A2 cores prior to version 1.95 don't auto-update. The current A2 versions are 2.01. They're substantially faster and contain some important bugfixes.

We've posted this before, but please take this opportunity to double-check your cores. Our servers may refuse to accept work unit returns from A2 cores prior to 1.95."

I noticed that one of my VMs had picked up a p2662 but was only doing 1,600 PPD.
Checked the core and it was still version 1.90 ............ :eek:
Stopping the work unit caused it to crash, so I deleted the out-of-date core and the work.
Restarted the client and redownloaded the p2662.
I'm now crunching it at 2,500 PPD.

So check your folders for any old FahCore_a2.exe files.
If you're not sure of their age and you're not using them to crunch with, I'd delete them just to be safe.
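
If you run a stack of clients, something like this beats digging through each folder by hand; a sketch, assuming they all live under ~/folding (adjust the path to your layout):

Code:
# list every A2 core on disk with its date, oldest first
find ~/folding -name 'FahCore_a2*' -printf '%TY-%Tm-%Td  %p\n' | sort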

Luck ............ :D
 
So check your folders for any old FahCore_a2.exe files.

Luckily I just started getting the A2 core yesterday, so I shouldn't have to worry about getting old ones. Even better is that I now have 4 out of 6 of my VMs running the 2662 with the A2 core. At the moment, I'm up about 2400-2500 PPD from where I was.

 
I got the first 2662 on one of my VMs, and for a box which usually averages 2100 PPD with a 2605 and 1750 PPD with a 2665, I'm getting close to 2800 PPD. It's awesome and I hope they ditch A1 units in favor of A2 soon.

 
I've just started getting them again today.
But I had the old FahCore_a2.exe file from when they were first released.
So if you have had any in the past, check your core version number.

From my log files.
Outdated version shows up as ...............
[19:02:14] Folding@Home Gromacs SMP Core
[19:02:14] Version 1.91 (2007)
New version shows up as ..................
[09:00:28] Folding@Home Gromacs SMP Core
[09:00:28] Version 2.01 (Wed Aug 13 13:11:25 PDT 2008)
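
Rather than scrolling the whole log, you can pull just those lines out; assuming the stock FAHlog.txt name:

Code:
# grab the line after each core banner, then keep only the Version lines
grep -A1 "Gromacs SMP Core" FAHlog.txt | grep "Version"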

Luck .............. :D
 
Yeah, I will check all the instances tonight, but I'm pretty sure there are no older A2 cores since I set them up with no -advmethods.

 
Now I'm a' gonna try and clarify what I tried to put into a post previously. ;) I have not got any A2 core type WU's on either one of my Q6600's runnin' either WinXP, w/SP3 or a "native" Linux 64 of any flavor. (if my "rememberin' " cells are intact and correct, I play "musical" OS's all the time, plus my boxen are "dual" booters, WinXP and some flavor of Linux 64 :))

I did get the A2 core on the Q6600 that was runnin' VMWare server, and in "native" Ubuntu v8.04 on my E6600 "dual" core CPU. I was runnin' Ubuntu v8.04 in the VMWare on the quad and, as far as I know, the Stanford servers "saw" only two (2) cores on the quad. I did notice the A2 core WU's (2662) scaled better than the A1 core on a "dual" core CPU. I don't know for a fact, and I'm curious to know, if the A2 core scales better on a four (4) or more core CPU :confused: It'd sure be nice if it did. From my own experience runnin' the A1 core on a Q6600 in "native" Linux (Ubuntu v8.04, Kubuntu v8.04, CentOS v5.2), it sucks big time in the scaling department. The Windose task manager always shows 100%, so I can't tell how the A2 core scales on an E6600 in Windose. :(

I sure hope this post is a little clearer, because my previous post was f*cked up and it even confused me :( (which ain't hard to do sometimes :D)

FOLD ON!

 
I believe Linux SMP clients are the only ones right now which can get the A2 cores. For the time being, I think Windows SMP users are SOL on that one.

 
I do remember a comment from kasson that 2662 and 2665 are 99% identical. The only difference is that one uses the A1 core and the other A2. With A1, it scaled horribly due to the way the WU is done, but A2 fixed that and scales much better. This might be what he meant when he told us to be patient with this.

 
Windows is always SOL on this kind of work! You don't see any supercomputers running Windows. LOL Also, the A2 core units are a newer version of Gromacs, according to the client display.

On the 2800 PPD, what's your clock on that box, X?
 
T5270 (Core 2 Duo) @ 1.4 GHz - PPD 1350.00
Code:
[03:23:40] 
[03:23:40] *------------------------------*
[03:23:40] Folding@Home Gromacs SMP Core
[03:23:40] Version 2.01 (Wed Aug 13 13:11:25 PDT 2008)
[03:23:40] 
[03:23:40] Preparing to commence simulation
[03:23:40] - Ensuring status. Please wait.
[03:23:49] - Assembly optimizations manually forced on.
[03:23:49] - Not checking prior termination.
[03:23:51] - Expanded 5000999 -> 24742709 (decompressed 494.7 percent)
[03:23:52] Called DecompressByteArray: compressed_data_size=5000999 data_size=24742709, decompressed_data_size=24742709 diff=0
[03:23:52] - Digital signature verified
[03:23:52] 
[03:23:52] Project: 2662 (Run 1, Clone 261, Gen 18)
[03:23:52] 
[03:23:52] Assembly optimizations on if available.
[03:23:52] Entering M.D.
[03:34:47] Completed 2509 out of 250000 steps  (1%)
[03:45:31] Completed 5009 out of 250000 steps  (2%)
[03:56:14] Completed 7509 out of 250000 steps  (3%)
[04:06:57] Completed 10009 out of 250000 steps  (4%)
[04:17:39] Completed 12509 out of 250000 steps  (5%)
[04:28:21] Completed 15009 out of 250000 steps  (6%)
[04:39:01] Completed 17509 out of 250000 steps  (7%)
[04:49:44] Completed 20009 out of 250000 steps  (8%)
[04:57:34] - Autosending finished units...
[04:57:34] Trying to send all finished work units
[04:57:34] + No unsent completed units remaining.
[04:57:34] - Autosend completed
[05:00:26] Completed 22509 out of 250000 steps  (9%)
[05:11:07] Completed 25009 out of 250000 steps  (10%)
[05:21:49] Completed 27509 out of 250000 steps  (11%)
[05:32:30] Completed 30009 out of 250000 steps  (12%)
[05:42:59] Completed 32509 out of 250000 steps  (13%)
[05:53:42] Completed 35009 out of 250000 steps  (14%)
[06:04:24] Completed 37509 out of 250000 steps  (15%)
[06:15:06] Completed 40009 out of 250000 steps  (16%)
[06:25:48] Completed 42509 out of 250000 steps  (17%)
[06:36:29] Completed 45009 out of 250000 steps  (18%)
[06:47:10] Completed 47509 out of 250000 steps  (19%)
[06:57:51] Completed 50009 out of 250000 steps  (20%)
[07:08:33] Completed 52509 out of 250000 steps  (21%)
[07:19:15] Completed 55009 out of 250000 steps  (22%)
[07:29:57] Completed 57509 out of 250000 steps  (23%)
[07:40:40] Completed 60009 out of 250000 steps  (24%)
[07:51:23] Completed 62509 out of 250000 steps  (25%)
[08:02:05] Completed 65009 out of 250000 steps  (26%)
[08:12:46] Completed 67509 out of 250000 steps  (27%)
[08:23:12] Completed 70009 out of 250000 steps  (28%)
[08:33:54] Completed 72509 out of 250000 steps  (29%)
[08:44:35] Completed 75009 out of 250000 steps  (30%)
[08:55:16] Completed 77509 out of 250000 steps  (31%)
[09:05:59] Completed 80009 out of 250000 steps  (32%)
[09:16:42] Completed 82509 out of 250000 steps  (33%)
[09:27:23] Completed 85009 out of 250000 steps  (34%)
[09:38:06] Completed 87509 out of 250000 steps  (35%)
[09:48:48] Completed 90009 out of 250000 steps  (36%)
[09:59:30] Completed 92509 out of 250000 steps  (37%)
[10:10:12] Completed 95009 out of 250000 steps  (38%)
[10:20:54] Completed 97509 out of 250000 steps  (39%)
[10:31:37] Completed 100009 out of 250000 steps  (40%)
[10:42:20] Completed 102509 out of 250000 steps  (41%)
[10:53:00] Completed 105009 out of 250000 steps  (42%)
[10:57:34] - Autosending finished units...
[10:57:34] Trying to send all finished work units
[10:57:34] + No unsent completed units remaining.
[10:57:34] - Autosend completed
[11:03:43] Completed 107509 out of 250000 steps  (43%)
[11:14:13] Completed 110009 out of 250000 steps  (44%)
[11:24:54] Completed 112509 out of 250000 steps  (45%)
[11:35:36] Completed 115009 out of 250000 steps  (46%)
[11:46:17] Completed 117509 out of 250000 steps  (47%)
[11:56:59] Completed 120009 out of 250000 steps  (48%)
[12:07:40] Completed 122509 out of 250000 steps  (49%)
[12:18:22] Completed 125009 out of 250000 steps  (50%)
[12:29:04] Completed 127509 out of 250000 steps  (51%)
[12:39:44] Completed 130009 out of 250000 steps  (52%)
[12:50:25] Completed 132509 out of 250000 steps  (53%)
[13:01:08] Completed 135009 out of 250000 steps  (54%)
[13:11:49] Completed 137509 out of 250000 steps  (55%)
[13:22:31] Completed 140009 out of 250000 steps  (56%)
[13:33:13] Completed 142509 out of 250000 steps  (57%)
[13:43:57] Completed 145009 out of 250000 steps  (58%)
[13:54:18] Completed 147509 out of 250000 steps  (59%)
[14:04:36] Completed 150009 out of 250000 steps  (60%)
[14:15:26] Completed 152509 out of 250000 steps  (61%)
[14:26:15] Completed 155009 out of 250000 steps  (62%)
[14:37:09] Completed 157509 out of 250000 steps  (63%)
[14:48:05] Completed 160009 out of 250000 steps  (64%)
[14:58:56] Completed 162509 out of 250000 steps  (65%)
[15:09:49] Completed 165009 out of 250000 steps  (66%)
[15:20:32] Completed 167509 out of 250000 steps  (67%)
[15:31:11] Completed 170009 out of 250000 steps  (68%)
[15:41:54] Completed 172509 out of 250000 steps  (69%)
[15:52:36] Completed 175009 out of 250000 steps  (70%)
[16:03:16] Completed 177509 out of 250000 steps  (71%)
[16:13:58] Completed 180009 out of 250000 steps  (72%)
[16:24:41] Completed 182509 out of 250000 steps  (73%)
[16:35:23] Completed 185009 out of 250000 steps  (74%)
[16:45:39] Completed 187509 out of 250000 steps  (75%)
[16:55:56] Completed 190009 out of 250000 steps  (76%)
[16:57:34] - Autosending finished units...
[16:57:34] Trying to send all finished work units
[16:57:34] + No unsent completed units remaining.
[16:57:34] - Autosend completed
[17:06:11] Completed 192509 out of 250000 steps  (77%)
[17:16:21] Completed 195009 out of 250000 steps  (78%)
[17:26:33] Completed 197509 out of 250000 steps  (79%)
[17:36:46] Completed 200009 out of 250000 steps  (80%)
[17:46:58] Completed 202509 out of 250000 steps  (81%)
[17:57:09] Completed 205009 out of 250000 steps  (82%)
[18:07:21] Completed 207509 out of 250000 steps  (83%)
[18:17:31] Completed 210009 out of 250000 steps  (84%)
[18:27:42] Completed 212509 out of 250000 steps  (85%)
[18:37:53] Completed 215009 out of 250000 steps  (86%)
[18:48:05] Completed 217509 out of 250000 steps  (87%)
[18:58:18] Completed 220009 out of 250000 steps  (88%)
[19:08:29] Completed 222509 out of 250000 steps  (89%)
[19:18:41] Completed 225009 out of 250000 steps  (90%)
[19:28:53] Completed 227509 out of 250000 steps  (91%)
[19:39:05] Completed 230009 out of 250000 steps  (92%)
[19:49:16] Completed 232509 out of 250000 steps  (93%)
[19:59:28] Completed 235009 out of 250000 steps  (94%)
[20:09:40] Completed 237509 out of 250000 steps  (95%)
[20:19:50] Completed 240009 out of 250000 steps  (96%)
[20:30:03] Completed 242509 out of 250000 steps  (97%)
[20:40:15] Completed 245009 out of 250000 steps  (98%)
[20:50:25] Completed 247509 out of 250000 steps  (99%)
[21:01:30] 
[21:01:30] Finished Work Unit:
[21:01:30] - Reading up to 21421872 from "work/wudata_03.trr": Read 21421872
[21:01:31] trr file hash check passed.
[21:01:31] - Reading up to 4922396 from "work/wudata_03.xtc": Read 4922396
[21:01:31] xtc file hash check passed.
[21:01:31] edr file hash check passed.
[21:01:31] logfile size: 177963
[21:01:31] Leaving Run
[21:01:34] - Writing 26728615 bytes of core data to disk...
[21:01:34]   ... Done.
[21:01:35] - Shutting down core
[21:01:35] 
[21:01:35] Folding@home Core Shutdown: FINISHED_UNIT
[21:04:56] CoreStatus = 64 (100)
[21:04:56] Unit 3 finished with 75 percent of time to deadline remaining.
[21:04:56] Updated performance fraction: 0.759325
[21:04:56] Sending work to server

If you put the A2 core on a quad core, this thing really flies.
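
Frame times can also be pulled straight out of a log like that instead of eyeballing the timestamps. A quick awk sketch (assumes the excerpt is saved as log.txt and the run doesn't cross midnight):

Code:
awk -F'[][:]' '/Completed/ {
    t = $2*3600 + $3*60 + $4           # timestamp in seconds
    if (prev) { sum += t - prev; n++ } # delta between 1% frames
    prev = t
} END { printf "average frame: %.1f min over %d frames\n", sum/(60*n), n }' log.txt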

 
Q6600 @ 3.105 GHz. This is the second of 2 VMs sharing cores 0-2. Core 3 is dedicated (XP32 SP2) to a GPU client. The other VM, running a 2605, is turning 15:18 frame times. This 2662 is turning 14:10 and pulling down an additional 400 PPD just with the A2 core. SWEEET!

Code:
[07:45:10] + Processing work unit
[07:45:10] Core required: FahCore_a2.exe
[07:45:10] Core found.
[07:45:10] Working on Unit 03 [August 26 07:45:10]
[07:45:10] + Working ...
[07:45:10] - Calling './mpiexec -np 4 -host 127.0.0.1 ./FahCore_a2.exe -dir work/ -suffix 03 -checkpoint 5 -forceasm -verbose -lifeline 459 -version 602'

[07:45:10] 
[07:45:10] *------------------------------*
[07:45:10] Folding@Home Gromacs SMP Core
[07:45:10] Version 2.01 (Wed Aug 13 13:11:25 PDT 2008)
[07:45:10] 
[07:45:10] Preparing to commence simulation
[07:45:10] - Ensuring status. Please wait.
[07:45:20] - Assembly optimizations manually forced on.
[07:45:20] - Not checking prior termination.
[07:45:23] - Expanded 4922945 -> 24360573 (decompressed 494.8 percent)
[07:45:24] Called DecompressByteArray: compressed_data_size=4922945 data_size=24360573, decompressed_data_size=24360573 diff=0
[07:45:24] - Digital signature verified
[07:45:24] 
[07:45:24] Project: 2662 (Run 2, Clone 124, Gen 20)
[07:45:24] 
[07:45:25] Assembly optimizations on if available.
[07:45:25] Entering M.D.
[07:59:35] Completed 2509 out of 250000 steps  (1%)
[08:13:59] Completed 5009 out of 250000 steps  (2%)
[08:28:05] Completed 7509 out of 250000 steps  (3%)
[08:42:18] Completed 10009 out of 250000 steps  (4%)
[08:56:25] Completed 12509 out of 250000 steps  (5%)
[09:10:31] Completed 15009 out of 250000 steps  (6%)
[09:24:50] Completed 17509 out of 250000 steps  (7%)
[09:38:51] Completed 20009 out of 250000 steps  (8%)
[09:53:00] Completed 22509 out of 250000 steps  (9%)
[10:07:07] Completed 25009 out of 250000 steps  (10%)


 
So, the method to grab the A2 core is simply to add the -advmethods flag and that's it?

 
So, the method to grab the A2 core is simply to add the -advmethods flag and that's it?


I thought that was supposed to be the case, but looking at the beginning of my log, I didn't set Notfreds up with that flag and got one anyway, so maybe it's coming regardless?

Code:
--- Opening Log file [August 12 22:18:37] 


# SMP Client ##################################################################
###############################################################################

                       Folding@Home Client Version 6.02

                          http://folding.stanford.edu

###############################################################################
###############################################################################

Launch directory: /etc/folding/1
Executable: ./fah6
Arguments: -local -forceasm -verbosity 9 -smp 

Warning:
 By using the -forceasm flag, you are overriding
 safeguards in the program. If you did not intend to
 do this, please restart the program without -forceasm.
 If work units are not completing fully (and particularly
 if your machine is overclocked), then please discontinue
 use of the flag.

[22:18:37] - Ask before connecting: No
[22:18:37] - User name: nomad8u (Team 33)
[22:18:37] - User ID not found locally
[22:18:37] + Requesting User ID from server
[22:18:37] - Getting ID from AS: 
[22:18:37] Connecting to http://assign.stanford.edu:8080/
[22:18:38] Posted data.

 
All my VM clients are running without the -advmethods flag and I'm getting them.
So you don't need the flag.

Luck .......... :D
 
Then what must be done to get the core? I don't have any SMP client running the A2 core, TMK. :confused:
 
If you're running the Linux client, then it's down to how lucky you are with Stanford's servers.
If you're not running the Linux client, then you're out of luck.

I'm running 18 VMs and I've only seen the odd 2 or 3.
But they are coming through slightly more often than p2665.

Luck ........... :D
 
It's in the config of the client, where it asks if you want to run experimental work; answering YES there adds the flag.
 
Ya, one box finally got double 2662s for an average of over 5600 PPD with just 2 SMP clients. The others have a combination of 2x2605, 1x2665 + 1x2605, or 1x2665 + 1x2662.

No -advmethods flag needed, like Tigerbiten said.

 
The -advmethods flag to get the A2 core was needed when it was still experimental/beta. Obviously, it has now come out of that stage, and anyone running the Linux client with at least two cores should be able to get it.

I have run into what may seem to be a problem with it, though. At one point I had 4 of my 6 VMs running the A2 core, and each and every client running them got stuck after the work unit finished and uploaded. For some reason, it did not grab more work and continue on to another work unit.

I'm wondering if this is because the SMP client does not clean up the work files like it's supposed to and leaves them behind. When running the A1 core and other work units this didn't seem to be a problem; it would just overwrite the old files which didn't get cleaned up. Because of that, I haven't been manually cleaning out the Work directory like I used to when the SMP client first came out. I've done two things with these VMs to get them running again:

1. Kill the client via CTRL-C, delete the A2 core (even though it shouldn't be the old one), delete the queue.dat file, and delete the whole Work directory, since that was easier than going in and deleting all the files. I then started the client back up and everything seems to be running normally, but by luck of the draw each one grabbed a 2605.

2. Kill the client via CTRL-C, delete the A2 core, and delete all the files in the Work directory but leave the directory intact. After starting the client, it picked up a 2605 as well. (Both recoveries boil down to the shell sketch below.)
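
In shell terms, from the client's directory after stopping it, that's roughly:

Code:
# option 1: drop the core, the queue, and the whole Work directory
rm -f FahCore_a2.exe queue.dat
rm -rf work/

# option 2: drop the core and empty the Work directory, keeping it intact
# rm -f FahCore_a2.exe work/*

# either way, restart; the client re-downloads the core and fetches a WU
./fah6 -smp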

I don't know if anyone else is having problems like this, but I thought I would pass on my experiences so others can keep an eye on things in case something similar happens. When I get another A2 core work unit to crunch, I will be keeping an eye on it and all the conditions surrounding it, whether the client stalls or goes on to another work unit. I have a feeling the problem may be that I didn't have the Work directory cleaned out. I believe something similar happened early on with the SMP client.

I'm just somewhat pissed because I woke up this morning with 4 clients hung doing nothing and at least a couple of them had been stuck like that for hours.

 
I'm getting A2 units consistently with the notfred disk using the -advmethods switch.

Dell Inspiron 530s, E2160, 1 GB RAM

Project 2662
Avg. Time / Frame : 16mn 32s - 1672.26 ppd

Project 2668
Avg. Time / Frame : 16mn 18s - 1696.20 ppd

Dell Dimension 521, A64x2 3800+, 1 GB RAM

Project 2662
Avg. Time / Frame : 20mn 50s - 1327.10 ppd
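
Those figures match the usual arithmetic, PPD = points * 1440 / (100 * minutes per frame), taking the 2662s at 1920 points as mentioned earlier (2668 appears to be valued the same). A throwaway checker for your own frames; arguments are frame minutes, frame seconds, and unit points:

Code:
# usage: sh ppd.sh 16 32 1920   ->   1672.26 PPD
awk -v m="$1" -v s="$2" -v pts="$3" 'BEGIN {
    frame_min = m + s/60                   # one frame = 1% of the WU
    printf "%.2f PPD\n", pts * 1440 / (100 * frame_min)
}'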

 
On the 2800 PPD, what's your clock on that box, X?

Sorry, missed that question... The box doing that is a Q6600 clocked at 3.4 GHz. Now, with both 2662s running, I average 2900 PPD each. Another box is at 3.2 GHz doing 2x 2662 right now with an average of 2100 PPD each, but keep in mind it's on 3 cores only, because the 4th is reserved for the 9800GT GPU client (Windows XP box). The HTPC got a 2662 as well, with a 3.2 GHz quad, but one of the cores is shared with the 8800GTS under Vista. It gives an average of 2500 PPD.

Those are great numbers, and with 4x 2662 in the mix, my PPD average is up by 2000 PPD :D

 
At one point I had 4 of my 6 VMs running the A2 core, and each and every client running them got stuck after the work unit finished and uploaded. For some reason, it did not grab more work and continue on to another work unit.


I had the exact same issue with my first 2662 yesterday/this morning. The WU completed and uploaded and was just sitting there, never having tried to get another WU. It sat that way for a little over 2 hours.

This was running on Notfreds VM setup in VMware Server. I killed/restarted the instance and it picked up another 2662. It's due to finish in about 3-4 hours, so we'll see what happens with this one. By killing the instance in VMware, I'm effectively doing the same as you did, wiping the core and the work folder/queue. I'm hoping this one continues after this WU, but I'll report it at the FCF if not.


 
Thanks for the info, X. Seems funny that it's not experimental but still has issues (sounds experimental to me, LMAO).

I had that same hang on the first WU I got after getting the 2.01 core! But none in days?

It's in the config file; it doesn't show when you start the client, it's invisible. I'm not saying they didn't open it up to everyone, I'm just saying it doesn't show when you start the client, and notfred's is set that way from the jump.
-advmethods flag always, requesting new advanced
scientific cores and/or work units if available (no/yes) [yes]?
All mine are set that way!
 
I had the exact same issue with my first 2662 yesterday/this morning. The WU completed and uploaded and was just sitting there, never having tried to get another WU. It sat that way for a little over 2 hours.

I had that same hang on the first WU I got after getting the 2.01 core! But none in days?
I'm getting these hangs on almost all the clients that have been updated to the new core. It's becoming a tedious chore going through each VM remotely every couple of hours to see if it's hung, especially since VMs are slow to access and even slower over the network. Anyone have news on what is causing this and what Stanford is planning to do about it? I'm beginning to regret my decision to add the -advmethods flag on all my Linux SMP clients. :rolleyes: :mad:
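
Until there's a fix, one way to cut down the babysitting is to check each VM's log age from a single script instead of logging into them all. A rough sketch, assuming SSH access into each VM and the client writing FAHlog.txt in its home directory; hostnames are placeholders:

Code:
#!/bin/sh
# flag any client whose FAHlog.txt hasn't been written to in 30 minutes
for host in vm1 vm2 vm3; do
    age=$(ssh "$host" 'echo $(( $(date +%s) - $(stat -c %Y FAHlog.txt) ))')
    [ "$age" -gt 1800 ] && echo "$host: log idle for ${age}s - possible hang"
done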

 
I just finished putting dual Linux VMs on three boxes. Quads at 3.2 GHz; 2 machines are XP-32 and one is Vista-64.

FahMon just doesn't cut it for me; it's way off.

My mean frame time is about 10 min, give or take a few seconds.

Yes, the quads seem to scale very well with double VMs. And face it, it sure beats the hell out of 2665s in Windows.

The downside is these clients still need a bit of work, and if you are bandwidth-limited, the upload is right close to 27 megs as per Smoke's measurements.

Plus, you need a backlog to really start showing points, as it takes a while for them to be credited to your numbers.

Welcome to the world's largest beta team: the entire folding community :rolleyes:

 