Message boards : News : Tests of new scheduler features
Joined: 15 Mar 05 · Posts: 1547 · Credit: 27,183,456 · RAC: 0

You're lucky. My home desktop hasn't gotten any work, or asked for any. Sometimes I wonder what BOINC 7 is thinking.
Joined: 15 Jun 05 · Posts: 970 · Credit: 1,495,169 · RAC: 0

Hello everyone. I have just successfully crunched my first SETI@home Application Version 7 workunit on my old, slow, WXP Intel box (circa 2004) over at Main. As Eric K. said: "Many thanks to all the folks over at KWSN / Lunatics for getting this done, and especially to Raistmer, Jason, Josef, Urs, Claggy, Mike, Richard, and too many beta testers to name." Well done everyone!

"When Johannes Kepler found his long-cherished belief did not agree with the most precise observation, he accepted the uncomfortable fact. He preferred the hard truth to his dearest illusions; that is the heart of science." ... Carl Sagan

Byron
Joined: 18 Jan 06 · Posts: 1038 · Credit: 18,734,730 · RAC: 0

> As a reminder, the first 9 CAL targets are ...

Hi Eric, did you forget to set the opencl_ati5 plan classes to capability 8+ on Main? See http://setiathome.berkeley.edu/forum_thread.php?id=71810&postid=1374253 and his host's result list, http://setiathome.berkeley.edu/results.php?hostid=5879391

_\|/_ U r s
Joined: 1 May 07 · Posts: 556 · Credit: 6,470,846 · RAC: 0

I have just suspended 40 opencl_ati5_sah tasks for my HD4600, downloaded this afternoon on Main. I do not want to abort them at the moment if I can get away with a project reset. Also received a batch of cuda50 on the other host.

Michael.
Joined: 15 Mar 05 · Posts: 1547 · Credit: 27,183,456 · RAC: 0

I thought the recommendation was CAL target 6+ for the ati5 versions. Did I miss a message somewhere? Must have missed it. I'll set it to 8+.
Joined: 18 Jan 06 · Posts: 1038 · Credit: 18,734,730 · RAC: 0

> I thought the recommendation was CAL target 6+ for the ati5 versions. Did I miss a message somewhere?

CAL target 6+ was meant only for use here at Beta, and only for temporary experimentation purposes, if I recall that discussion correctly.

_\|/_ U r s
Joined: 15 Mar 05 · Posts: 1547 · Credit: 27,183,456 · RAC: 0

All the ati5 versions at the main project are now CAL target 8+.
Joined: 18 Aug 05 · Posts: 2423 · Credit: 15,878,738 · RAC: 0

Fine! And how about NV AP for Linux here?
Joined: 15 Mar 05 · Posts: 1547 · Credit: 27,183,456 · RAC: 0

I hope to have it and the latest AMD version released today.
Joined: 3 Jan 07 · Posts: 1451 · Credit: 3,272,268 · RAC: 0

I made it! Cuda32 now has a higher APR than cuda42 for my Kepler, and was chosen above cuda50 for the latest work fetch.

Application details for host 63280
Joined: 11 Dec 08 · Posts: 198 · Credit: 658,573 · RAC: 0

> I made it! Cuda32 now has a higher APR than cuda42 for my Kepler, and was chosen above cuda50 for the latest work fetch.

I must have looked too late; Cuda5 shows the highest APR there at the moment :)
Joined: 3 Jan 07 · Posts: 1451 · Credit: 3,272,268 · RAC: 0

> I made it! Cuda32 now has a higher APR than cuda42 for my Kepler, and was chosen above cuda50 for the latest work fetch.

So what? I never said it wasn't. I only stated that cuda32 was higher than cuda42 - I didn't make any comparison with cuda50. But now you come to mention it, you looked too soon. Here's the full picture as at the time I started to type this post (13:20 UTC):

SETI@home v7 7.00 windows_intelx86 (cuda32)
Joined: 11 Dec 08 · Posts: 198 · Credit: 658,573 · RAC: 0

Yep, different numbers than when I looked (which is certainly a factor in trying to work out whether the mechanism is even remotely working). With nearly 1000 consecutive valid tasks on Cuda5, are you suggesting the average processing rate for that is more or less accurate than the ~100 and ~300 for the lower Cuda revisions? With a more or less random mix of tasks in a large enough population, which APR is correct? The measured one, or one concocted from a synthetic benchmark?

I'm not attempting to answer those questions myself, other than to suggest that perhaps 100-300 tasks for a given app version isn't enough AR spread to dial in. How many would make the averages relatively stable, and what would happen if you upgraded to 2 x Classified 780s, water cooled, or downgraded to an 8400GS? ... and should the system handle that without a reset of some sort?
Joined: 3 Jan 07 · Posts: 1451 · Credit: 3,272,268 · RAC: 0

It's all a bit complicated. Before we start, I suggest you try and skim the VLAR conversation I've been having with Eric, ever since his release announcement in message 46039.

Basically, the 'dialling-in' process relies - critically - on the fpops/time curve being reasonably accurate over all ARs. The original curve that Josef and I researched all those years ago was for CPU tasks only: it has been updated by Eric with an additional (non-linear) component to compensate for autocorrs (I think he added the fudge-factor, instead of multiplying by it). That means that the perceived speed of computation, as averaged into APR, is good for all CPU builds and tasks (E & OE), and good for CUDA builds at mid-AR through VHAR.

When I started this test, the APRs were (as recorded in message 46045):

cuda50: 163
cuda42: 149
cuda32: 96

which is pretty reasonable. Maybe 163 is a little high for cuda50 - it had been pushed that way by a recent shorty storm (VHAR), but cuda42 hadn't - but I've been watching, and I'd say it's working, once the initial boundary conditions have been left behind.

But as we found at the very beginning, APR can be skewed by outliers - the dreaded 30/30 AP storm, if you remember. VLARs on Kepler haven't been treated as outliers, but perhaps they should be - they certainly behave like outliers. With a runtime of the order of 5x the time predicted by the calibration curve, APR plummets - as I predicted to Eric.

Both cuda50 and cuda42 have run long, contiguous blocks of VLAR - cuda50 as the result of deliberate micro-management by me, cuda42 by the natural run of work allocation. That's what has driven both APRs southwards - cuda42 faster than cuda50, because (as you rightly note) it has fewer completed tasks incorporated in the average. A 'young' card or app_version will always be more dynamic than an 'older' one (like teenagers anywhere, I suspect) - 'mature' cards, with 10,000 or 100,000 tasks completed, will be much more stable.
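For anyone following the arithmetic, here is a minimal sketch, not the actual scheduler code, of the effect Richard describes: an exponentially weighted average of per-task processing rate, fed a contiguous block of tasks that run about 5x longer than the calibration curve predicts. The starting APR of 163 and the 5x factor come from the discussion above; the 0.01 per-task weight, the task counts and the fpops figure are illustrative assumptions.

```python
# Minimal sketch (not the BOINC server code): an exponentially weighted
# average of "processing rate" (estimated fpops / elapsed time), and what
# happens when a contiguous block of VLAR-like tasks runs ~5x longer than
# the calibration curve predicts.  Weight, fpops and counts are illustrative.

def update_apr(apr, est_fpops, elapsed, weight=0.01):
    """Fold one validated task into the running average processing rate."""
    rate = est_fpops / elapsed          # this task's apparent speed
    return (1.0 - weight) * apr + weight * rate

apr = 163.0                             # starting APR, as for cuda50 above
est_fpops = 30000.0                     # nominal per-task estimate (arbitrary units)

# 50 mid-AR tasks that match the calibration curve: APR barely moves.
for _ in range(50):
    apr = update_apr(apr, est_fpops, elapsed=est_fpops / 160.0)
print(f"after 50 well-predicted tasks: APR ~ {apr:.1f}")

# 50 VLAR-like tasks taking ~5x the predicted time: APR is dragged toward 160/5 = 32.
for _ in range(50):
    apr = update_apr(apr, est_fpops, elapsed=5.0 * est_fpops / 160.0)
print(f"after 50 VLAR-like tasks:      APR ~ {apr:.1f}")
```

Under these assumptions the average barely moves while predictions hold, then drops by tens of points over a single block of long-running tasks, which is the "plummet" described above.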
Joined: 11 Dec 08 · Posts: 198 · Credit: 658,573 · RAC: 0

> ...

See, my engineering perspective suggests this: "There are no runtime outliers" ... they are tasks, i.e. they take what they take. The fact is that these 'runtime outliers' break the estimate system (which we knew was already wonky, and which controls task scheduling and all manner of work-fetch issues). Remove the artificially imposed limits (at 10x, then kill the task) and award credit by an absolute figure, or alternatively on some scale depending on project total throughput.

Hard limits are meant as fail-safes. If they come into normal operation then they introduce instability into the system. An unstable system inherently will not stabilise. It's the 'control freak' factor again, which says if you hold your pet hamster too tight you will kill it.
Joined: 14 Oct 05 · Posts: 1137 · Credit: 1,848,733 · RAC: 0

Richard Haselgrove wrote:
> ...

The exponential average underlying APR has a fixed 0.01 factor (once the app version has 20 "completed"), and the host pfc average does the same. That is in effect a half-life of 69.4 non-outlier task validations. For high-end GPU work that makes for a fairly volatile APR unless the work is exceptionally well spread across angle ranges. For legacy CPU work it is extremely slow to adapt.

OTOH, the code for choosing the "best" app version is affected by larger counts, so once there are 10,000 or more completed, whichever app version has the higher APR at the time of a work request will have a very high probability of being chosen.

Joe
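Joe's half-life figure falls more or less straight out of the 0.01 factor: the old average retains a fraction (1 - 0.01)^n of its influence after n validations, and setting that to one half gives n of roughly 69. A quick check, with the weight as the only input:

```python
import math

# Half-life of an exponential average with a fixed per-sample weight w:
# the old value's influence after n samples is (1 - w)**n, so solve
# (1 - w)**n = 0.5 for n.
w = 0.01
half_life = math.log(0.5) / math.log(1.0 - w)
print(f"half-life ~ {half_life:.1f} validations")   # ~69, in the same ballpark as Joe's 69.4
```

Which is why a high-end GPU app version can swing noticeably within a day's worth of validations, while a CPU app version that validates only a handful of tasks per day barely moves.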
Joined: 29 May 06 · Posts: 1037 · Credit: 8,440,339 · RAC: 0

After only ever getting Cuda5 WUs for my GTX460, I aborted them all to drive my Max Tasks per Day low enough (down to 3) that I couldn't get any more Cuda5 WUs. When I tried again (it was a CPU & Nvidia & AMD request) I got AMD OpenCL tasks, since the AMD app version's APR is higher than the non-existent Cuda32 and Cuda42 app versions' APRs. Ahhhhhh. So I removed some of those WUs from my client_state.xml and set my preferences so BOINC could only make an Nvidia work request; now I get the Cuda32 app and WUs, and the next request gets me the Cuda42 app and WUs. After removing the remainder of those recently sent AMD WUs from my client_state.xml, I got some of them resent before the server expires the rest - that is annoying too.

All tasks for computer 5427475

Claggy
Joined: 3 Jan 07 · Posts: 1451 · Credit: 3,272,268 · RAC: 0

> ...

So, now that my cuda50 APR is below my cuda32 APR, do I need to pray for a block of cuda32 VLARs (ugh!) to let me pick cuda50 again?
Joined: 15 Mar 05 · Posts: 1547 · Credit: 27,183,456 · RAC: 0

The right way to do this (and I indicated this to David a long time ago) is to use an estimate of the median rather than weighted averages, as medians are not strongly affected by outliers. I could change the current code to make an estimate of the running median...
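As a rough illustration of why a median resists the outlier problem, here is a minimal sketch of one simple way to approximate a running median without storing history: nudge the estimate a small step toward whichever side each new sample falls on. The step size and the sample values are illustrative assumptions, not anything from the scheduler code; a real implementation would also need to choose or adapt the step size sensibly.

```python
# Minimal sketch of a history-free running-median estimate: step the estimate
# toward whichever side each new sample falls on.  Robust to occasional
# extreme runtimes, unlike a weighted mean.  Step size is an illustrative choice.

def update_running_median(median, sample, step):
    if sample > median:
        return median + step
    if sample < median:
        return median - step
    return median

# Example: mostly ~160 with a short block of 5x-slow outliers (rate ~32).
rates = [160.0] * 40 + [32.0] * 5 + [160.0] * 40
median_est = rates[0]
mean_est = rates[0]
for r in rates:
    median_est = update_running_median(median_est, r, step=2.0)
    mean_est = 0.99 * mean_est + 0.01 * r   # the current exponential average, for contrast
print(f"running-median estimate ~ {median_est:.1f}, exponential average ~ {mean_est:.1f}")
```

Under these assumptions the median estimate returns to ~160 almost immediately after the outlier block, while the exponential average is still recovering many tasks later.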
Joined: 29 May 06 · Posts: 1037 · Credit: 8,440,339 · RAC: 0

Have you heard there are odd Credit awards going on for Astropulse v6 now at the Main project, around 15 to 25 Credits per AP WU: http://setiathome.berkeley.edu/forum_thread.php?id=71827&postid=1374823

Valid AstroPulse v6 tasks for computer 6910524

I grabbed some stock OpenCL AP work, and got very low awarded Credit too:

All AstroPulse v6 tasks for computer 5427475

Claggy