Message boards :
Number crunching :
Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14656 Credit: 200,643,578 RAC: 874 |
Richard, could you provide a link or the title to that thread. I'd like to refresh my memory. Sorry, it's in this thread - but you can't see it here under - ahem - 'current circumstances'. You can see the UserID of the thread originator in the index page. Open the account page of one of your friends, or some other poster like me (but not your own account page), and replace the userid in the browser address bar with the one from the index. Then you can read from "Message boards 344 posts". |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
A few options come to mind.

1. The CPU limit could be applied based on processors and a determined value. Something along the lines of (host # of processors / 4) * task limit.

2. Perhaps modifying the JobLimits options to allow for limits specified by host-processor range. Something along the lines of:

<project>
  <cpu_limit>      <-- if set, limit is applied to all hosts unless another value applies to the host
    <jobs>N</jobs>
  </cpu_limit>
  <cpu_limit_16>   <-- if set, limit is applied to hosts with 16+ processors
    <jobs>N</jobs>
  </cpu_limit_16>
  <cpu_limit_32>   <-- if set, limit is applied to hosts with 32+ processors
    <jobs>N</jobs>
  </cpu_limit_32>
  <cpu_limit_64>   <-- if set, limit is applied to hosts with 64+ processors
    <jobs>N</jobs>
  </cpu_limit_64>
  <cpu_limit_128>  <-- if set, limit is applied to hosts with 128+ processors
    <jobs>N</jobs>
  </cpu_limit_128>
</project>

3. A graduated max CPU tasks in progress could be derived using "Number of tasks today" from the CPU apps. It might be necessary to take the Number of tasks today and then create an average daily Number of tasks to use. Then, using the specified value set by the project, a dynamic limit could be applied based on how productive the machine is rather than by the indicated number of processors. I think this might be the most complicated method & with each app version change the average would be reset.

SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
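The tiered scheme in option 2 above can be sketched in a few lines. This is a hypothetical illustration only: the tier thresholds and per-tier job counts below are made up for the example, and the <cpu_limit_N> tags are a proposal, not real BOINC server options.

```python
# Hypothetical sketch of the tiered per-processor limit proposal above.
# Tier boundaries and job counts are invented values, not real settings.

def tiered_cpu_limit(n_processors, tiers, default):
    """Return the task limit for a host, taking the highest tier
    whose processor threshold the host meets."""
    limit = default
    for threshold, jobs in sorted(tiers):
        if n_processors >= threshold:
            limit = jobs
    return limit

# Example tiers mirroring the <cpu_limit_16/32/64/128> layout:
tiers = [(16, 400), (32, 800), (64, 1600), (128, 3200)]
print(tiered_cpu_limit(8, tiers, 100))    # small host falls back to the default
print(tiered_cpu_limit(56, tiers, 100))   # 56 processors lands in the 32+ tier
```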
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13770 Credit: 208,696,464 RAC: 304 |
Any increase in allocation limits should come with greater cuts in allocations for systems that have high percentages of invalids/errors in relation to work in progress. No point giving these systems more work to mangle. Grant Darwin NT |
MarkJ Send message Joined: 17 Feb 08 Posts: 1139 Credit: 80,854,192 RAC: 5 |
Most of my machines are i7's so they get 100 / 8 threads = 12.5 WU per thread. If we expand on that by saying we get 12.5 x number of threads then it could work for the smaller (single thread) machines as well as the larger (56 thread) machines. I would suggest we make it something like 15 per thread. It's simplistic and achievable with the current infrastructure. A more long term approach might be to increase that number based upon the average turnaround if the host is considered reliable. It could also be applied the other way to reduce the number if the host is unreliable. BOINC blog |
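The per-thread arithmetic above is easy to check against hosts of different sizes. A quick sketch, noting that the 15-per-thread figure is just the suggestion from the post, not an actual server setting:

```python
# Compare the current flat 100-tasks-per-host limit with the proposed
# 15-tasks-per-thread rule, for the host sizes mentioned in the thread.

def per_thread_limit(threads, per_thread=15):
    return threads * per_thread

FLAT_LIMIT = 100
for threads in (1, 8, 56):
    print(f"{threads:>2} threads: flat {FLAT_LIMIT}, per-thread {per_thread_limit(threads)}")
```

A single-thread host would drop from 100 to 15 tasks, while a 56-thread host would rise from 100 to 840, which is the scaling the post is after.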
Al Send message Joined: 3 Apr 99 Posts: 1682 Credit: 477,343,364 RAC: 482 |
A more long term approach might be to increase that number based upon the average turnaround if the host is considered reliable. It could also be applied the other way to reduce the number if the host is unreliable. Honestly, I really think that this would be the best way. It doesn't care about cores, CPU or GPU speed, or anything else. It just sees that you are returning X results per minute/hour/day/whatever, and ramps it up (to a reasonable limit, of course) based upon reliability and productivity, which in the end is all that really matters. And as mentioned before, a corresponding decrease for those who are returning pretty much nothing but junk. |
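The reliability-based scaling described above could look something like the following sketch. Every threshold, multiplier, and cap here is invented for illustration; the BOINC scheduler does not expose a function like this.

```python
# Hypothetical dynamic task limit: scale a base allocation up for
# reliable, fast-turnaround hosts and down for hosts returning junk.
# All numeric thresholds below are illustrative assumptions.

def dynamic_limit(base_limit, valid, invalid, avg_turnaround_hours, cap=1000):
    total = valid + invalid
    if total == 0:
        return base_limit  # no history yet: start at the base allocation
    error_rate = invalid / total
    if error_rate > 0.10:
        # host is mangling a lot of work: cut its allocation sharply
        return max(base_limit // 4, 1)
    if error_rate < 0.01 and avg_turnaround_hours < 24:
        # reliable and fast: earn a larger buffer, up to a hard cap
        return min(base_limit * 4, cap)
    return base_limit

print(dynamic_limit(100, valid=5000, invalid=10, avg_turnaround_hours=6))   # reliable host
print(dynamic_limit(100, valid=200, invalid=100, avg_turnaround_hours=48))  # unreliable host
```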
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Most of my machines are i7's so they get 100 / 8 threads = 12.5 WU per thread. The last time per processor CPU limits were used the value was 50. So perhaps half of that, at 25 per processor, would be sufficient. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13770 Credit: 208,696,464 RAC: 304 |
The last time per processor CPU limits were used the value was 50. So perhaps half of that, at 25 per processor, would be sufficient. I don't recall ever having per processor or per core WU limits before. I do remember them making the GPU limit per GPU instead of for all GPUs. Grant Darwin NT |
BilBg Send message Joined: 27 May 07 Posts: 3720 Credit: 9,385,827 RAC: 0 |
There are at least 3 ('Advanced') methods I know to overcome the 100+100 tasks limits. Someone dare to list them? ;) - ALF - "Find out what you don't do well ..... then don't do it!" :) |
Al Send message Joined: 3 Apr 99 Posts: 1682 Credit: 477,343,364 RAC: 482 |
Ruh Roh! We don't want the Banhammer swinging around today now, do we? ;-) lol |
Dr Who Fan Send message Joined: 8 Jan 01 Posts: 3251 Credit: 715,342 RAC: 4 |
Ruh Roh! We don't want the Banhammer swinging around today now, do we? ;-) lol No, we do not want the whole thread vanishing into the ether at Berkeley. Let's just say a skilled person knowing how & what to look for on their favorite search tool should be able to find the magic ways. |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
The last time per processor CPU limits were used the value was 50. So perhaps half of that, at 25 per processor, would be sufficient. I believe it was around the end of 2011 or start of 2012. I seem to recall when they first set the task limits they had accidentally set 50 total per host. Then changed it to per processor for CPU. Then after some time the limits were removed, the db went splat again, & then the limits were implemented again. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
AMDave Send message Joined: 9 Mar 01 Posts: 234 Credit: 11,671,730 RAC: 0 |
Some links on the WU limit:
> Message 1785937 <-- posted by HAL9000
> Message 1791681 |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14656 Credit: 200,643,578 RAC: 874 |
I'd suggest adding message 1307567 to that list. |
Jeff Buck Send message Joined: 11 Feb 00 Posts: 1441 Credit: 148,764,870 RAC: 0 |
I think it would be useful to know what kind of hit the DB took when the change was made from 100 GPU tasks per host to 100 tasks per GPU. Whatever that increase was, and how well the DB handled it, might be informative in the current discussion. However, I don't think that was ever looked at or, if it was, I don't remember ever seeing it mentioned here. |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Some links on the WU limit: I was thinking of posts a bit older. 1185411 1197674 1229214 The posts I can find where the staff told us the values of the task limits are from 2010. Other posts are just notes like "task limits were raised". SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.