Tests of new scheduler features.

Eric J Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 15 Mar 05
Posts: 1547
Credit: 27,183,456
RAC: 0
United States
Message 45907 - Posted: 16 May 2013, 21:27:02 UTC - in response to Message 45905.  

I found a typo in the ati_opencl_100 plan class that was causing some ATI machines without OpenCL to get OpenCL work. I fixed it.
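
For context, such a plan class might look roughly like this in BOINC's plan_class_spec.xml (a hypothetical sketch; the field names and values here are illustrative, not the project's actual file):

<plan_classes>
    <plan_class>
        <name>ati_opencl_100</name>
        <gpu_type>ati</gpu_type>
        <opencl/>                                  <!-- require an OpenCL-capable GPU -->
        <min_opencl_version>100</min_opencl_version>
    </plan_class>
</plan_classes>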


ID: 45907
Eric J Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 15 Mar 05
Posts: 1547
Credit: 27,183,456
RAC: 0
United States
Message 45908 - Posted: 16 May 2013, 21:31:09 UTC - in response to Message 45905.  


It was: http://setiweb.ssl.berkeley.edu/beta/show_host_detail.php?hostid=18439 and I bet it was because of a full quota.


That was part of it. I can see why the reason isn't being transmitted. There's a separate reason for each app version that's considered. You didn't get work from some versions because of your driver revision. If BOINC printed out the reason for every app version, it would get confusing unless it were done very carefully.

I'll think about it.
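
One careful way to do it might be to collect the per-app-version reason and print a single summary instead of one line per version. A hypothetical sketch (all names below are made up, not the actual scheduler code):

// Hypothetical sketch: summarize per-app-version denial reasons
// into one user-facing line instead of one line per version.
#include <cstdio>
#include <map>
#include <string>

enum DenyReason { QUOTA_REACHED, DRIVER_TOO_OLD, NO_WORK_AVAILABLE };

int main() {
    // One entry per app version the scheduler considered (made-up data).
    std::map<std::string, DenyReason> denied = {
        {"setiathome_v7 (cuda50)", DRIVER_TOO_OLD},
        {"astropulse_v6 (opencl)", DRIVER_TOO_OLD},
        {"setiathome_v7 (cpu)",    QUOTA_REACHED},
    };

    bool any_quota = false, all_driver = true;
    for (const auto& d : denied) {
        if (d.second == QUOTA_REACHED)  any_quota  = true;
        if (d.second != DRIVER_TOO_OLD) all_driver = false;
    }

    // Distinguish "you hit the quota" from "a driver upgrade would help".
    if (any_quota)
        std::puts("No work sent: daily quota reached for the app versions usable on this host.");
    else if (all_driver)
        std::puts("No work sent: a newer GPU driver would enable more app versions.");
    return 0;
}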
ID: 45908
Eric J Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 15 Mar 05
Posts: 1547
Credit: 27,183,456
RAC: 0
United States
Message 45909 - Posted: 16 May 2013, 21:40:22 UTC

And yes, the Astropulse results from the last two tapes have been mostly outliers (98.6%, to be precise). I've put on a new tape and hope for better luck.
ID: 45909
Alex Storey
Volunteer tester
Joined: 10 Feb 12
Posts: 107
Credit: 305,151
RAC: 0
Greece
Message 45911 - Posted: 16 May 2013, 22:53:58 UTC

However did you manage to get my driver version to show!?

It's been "unknown" for over two and a half years! :)

Awesome...
ID: 45911
Richard Haselgrove
Volunteer tester
Joined: 3 Jan 07
Posts: 1451
Credit: 3,272,268
RAC: 0
United Kingdom
Message 45913 - Posted: 16 May 2013, 23:39:13 UTC - in response to Message 45908.  

It was: http://setiweb.ssl.berkeley.edu/beta/show_host_detail.php?hostid=18439 and I bet it was because of a full quota.

That was part of it. I can see why the reason isn't being transmitted. There's a separate reason for each app version that's considered. You didn't get work from some versions because of your driver revision. If BOINC printed out the reason for every app version, it would get confusing unless it were done very carefully.

I'll think about it.

For Beta at least, it would be hugely useful if you could enable click-through from the host listing page to the most recent server log for each host, as Einstein have done. I'm sure it's a big server load, but with the recent new hardware, I think it might be worth a try. Then the - relatively few - active testers here could look at the - version by version - decision-making process followed by each scheduler, and pick out anomalies without pestering you to extract and post each server event that comes under scrutiny.

I'm sure Bernd would lend a hand with implementation.
ID: 45913
Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,738
RAC: 0
Russia
Message 45916 - Posted: 17 May 2013, 6:27:19 UTC - in response to Message 45913.  
Last modified: 17 May 2013, 6:29:25 UTC

It was: http://setiweb.ssl.berkeley.edu/beta/show_host_detail.php?hostid=18439 and I bet it was because of a full quota.

That was part of it. I can see why the reason isn't being transmitted. There's a separate reason for each app version that's considered. You didn't get work from some versions because of your driver revision. If BOINC printed out the reason for every app version, it would get confusing unless it were done very carefully.

I'll think about it.

For Beta at least, it would be hugely useful if you could enable click-through from the host listing page to the most recent server log for each host, as Einstein have done. I'm sure it's a big server load, but with the recent new hardware, I think it might be worth a try. Then the - relatively few - active testers here could look at the - version by version - decision-making process followed by each scheduler, and pick out anomalies without pestering you to extract and post each server event that comes under scrutiny.

I'm sure Bernd would lend a hand with implementation.


Yeah, good idea! That way we'd also have a chance to learn more about the server side of the project :)
Of course it should be done ONLY on beta; it's not needed on the main project.

And regarding the reason for no work being given - yes, I had the same thought, that BOINC's answer was "need driver upgrade". And BOINC is right: if I upgraded the driver, more app versions would be available, and the host hasn't reached its quota for those... but from the user's point of view it would still be the wrong answer, because it doesn't distinguish between a fulfilled and a refused request - the host gets the driver upgrade recommendation on each and every request. We need something like a "quota reached for the app versions available for this particular host" answer.
Something like what BOINC says when there is work for MB and not for AP, for example: a hint about the current problem (no work for AP) and a hint about what the user can do (enable MB work or, in my case, upgrade drivers to receive CUDA50).
ID: 45916
Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,738
RAC: 0
Russia
Message 45921 - Posted: 17 May 2013, 9:48:22 UTC

Eric, after more consideration I think the temporary effect of a limited quota on app version allocation is a positive feature, not a negative one, and should not be "fixed" in any way. It binds even very fast hosts to a real-world time scale, and we can only spot and fix issues with misbehaving hosts on a real-world time scale, no matter how fast the host is. A very fast host left in a bad state (for example, a GPU downclock) for the same amount of real-world time will complete many tasks with distorted timings. So allocating tasks from all app versions (even slower ones) when the quota shrinks is good: it gives BOINC a chance to probe the host under the new conditions instead of getting stuck in a wrong state.
ID: 45921
Urs Echternacht
Volunteer tester
Joined: 18 Jan 06
Posts: 1038
Credit: 18,734,730
RAC: 0
Germany
Message 45922 - Posted: 17 May 2013, 9:50:12 UTC

Two of my hosts are getting too much work assigned now.
The local cache settings on both are min 0.1 days + max additional 0.5 days.

But host 51991 has already received three times that amount (450+ WUs),
and host 50380 got even more (1500+ WUs). I stopped work fetch on both hosts manually.

Shouldn't there be a limitation because of the cache settings?
_\|/_
U r s
ID: 45922
William
Volunteer tester
Joined: 14 Feb 13
Posts: 606
Credit: 588,843
RAC: 0
Message 45923 - Posted: 17 May 2013, 10:48:14 UTC - in response to Message 45922.  

Two of my hosts are getting too much work assigned now.
The local cache settings on both are min 0.1 days + max additional 0.5 days.

But host 51991 has already received three times that amount (450+ WUs),
and host 50380 got even more (1500+ WUs). I stopped work fetch on both hosts manually.

Shouldn't there be a limitation because of the cache settings?

Not if the flops values, and therefore the estimates, are hopelessly wrong.

If the estimates come out at a third of reality - e.g. your APR is 6 GFLOPS but you're receiving a flops value of 18e9 - you'll get three times the amount of work you actually need.
Compare the estimates on the host (taking DCF into account for BOINC 6) with known actual runtimes, or look up the flops value received in scheduler_reply or client_state.xml.
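
To make that arithmetic concrete, a minimal sketch with made-up numbers (rsc_fpops_est is the per-task FLOP estimate; the 6e9 vs 18e9 values are from the example above):

// Sketch of the overfetch effect: runtime estimate = rsc_fpops_est / flops,
// so a flops value 3x too high makes every task look 3x shorter.
#include <cstdio>

int main() {
    double rsc_fpops_est = 108e12; // FLOPs for one task (illustrative)
    double apr_flops     = 6e9;    // realistic host speed (APR), FLOP/s
    double sent_flops    = 18e9;   // inflated value from the server

    double real_runtime = rsc_fpops_est / apr_flops;  // 18000 s
    double est_runtime  = rsc_fpops_est / sent_flops; //  6000 s

    // A fixed-seconds buffer filled with 6000 s estimates holds 3x the
    // tasks that actually fit.
    std::printf("real %.0f s, estimated %.0f s, overfetch %.1fx\n",
                real_runtime, est_runtime, real_runtime / est_runtime);
    return 0;
}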

And of course, once APR-based estimates kick in, BOINC starts to panic :D
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 45923
Richard Haselgrove
Volunteer tester
Joined: 3 Jan 07
Posts: 1451
Credit: 3,272,268
RAC: 0
United Kingdom
Message 45924 - Posted: 17 May 2013, 11:21:22 UTC - in response to Message 45923.  

I find it helpful to enable the <sched_op_debug> logging flag, and to compare the number of seconds the client requested with the client's estimate of what the server's reply contained.
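
For anyone following along, a minimal cc_config.xml that turns the flag on (standard BOINC client config; the client picks it up at startup or when you re-read config files from the Manager):

<cc_config>
    <log_flags>
        <sched_op_debug>1</sched_op_debug>
    </log_flags>
</cc_config>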
ID: 45924
Eric J Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 15 Mar 05
Posts: 1547
Credit: 27,183,456
RAC: 0
United States
Message 45928 - Posted: 17 May 2013, 18:14:50 UTC - in response to Message 45922.  


Shouldn't there be a limitation because of the cache settings?


There should have been a limitation because of the app version's max results per day, unless you're asking for 7+ days' worth of work.
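
As a toy model of that limit (not BOINC's actual quota code; the numbers are made up):

// Toy daily-quota gate: never send more than the per-day cap allows,
// regardless of how much the client asked for.
#include <algorithm>
#include <cstdio>

int main() {
    int max_results_per_day = 100; // per host-app-version cap (illustrative)
    int sent_today          = 96;
    int requested           = 9;

    int granted = std::min(requested, max_results_per_day - sent_today);
    std::printf("granted %d of %d requested\n", granted, requested); // 4 of 9
    return 0;
}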
ID: 45928
Eric J Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 15 Mar 05
Posts: 1547
Credit: 27,183,456
RAC: 0
United States
Message 45930 - Posted: 17 May 2013, 19:11:09 UTC - in response to Message 45922.  

Two of my hosts are getting too much work assigned now.
The local cache settings on both are min 0.1 days + max additional 0.5 days.


The last time host 50380 got work, it asked for 0.6 days of GPU work. The time before that, it asked for 2 days of CPU work. Either it's not obeying your cache settings for some reason, or your computer thinks the work you already have will take zero time.

2013-05-16 22:00:23.2456 [PID=25843]    Sending reply to [HOST#50380]: 3 results, delay req 7.00
2013-05-16 22:01:29.3086 [PID=28539]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:02:35.7364 [PID=29746]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:03:38.0991 [PID=29875]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:04:57.1270 [PID=30032]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:06:33.9883 [PID=30862]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:07:52.8436 [PID=31073]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:09:09.4220 [PID=31263]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:10:26.5875 [PID=31427]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:11:58.2080 [PID=426  ]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:13:40.9137 [PID=635  ]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:14:58.2322 [PID=901  ]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:15:54.1178 [PID=1646 ]    Sending reply to [HOST#50380]: 5 results, delay req 7.00
2013-05-16 22:34:02.4328 [PID=9818 ]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 22:54:10.1618 [PID=19954]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-16 23:06:09.5097 [PID=25762]    Sending reply to [HOST#50380]: 11 results, delay req 7.00
2013-05-16 23:57:46.2671 [PID=17817]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-17 01:17:54.1977 [PID=23106]    Sending reply to [HOST#50380]: 9 results, delay req 7.00
2013-05-17 01:49:07.7491 [PID=5520 ]    Sending reply to [HOST#50380]: 11 results, delay req 7.00
2013-05-17 02:10:24.7012 [PID=17028]    Sending reply to [HOST#50380]: 9 results, delay req 7.00

ID: 45930
Urs Echternacht
Volunteer tester
Joined: 18 Jan 06
Posts: 1038
Credit: 18,734,730
RAC: 0
Germany
Message 45931 - Posted: 17 May 2013, 19:23:02 UTC
Last modified: 17 May 2013, 19:47:42 UTC

Lol, computers and thinking, lol!

I will try setting some flags in cc_config.xml and then try enabling work fetch on 50380 again.
_\|/_
U r s
ID: 45931
Raistmer
Volunteer tester
Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,738
RAC: 0
Russia
Message 45932 - Posted: 17 May 2013, 19:47:48 UTC - in response to Message 45931.  

Lol, with such frequency and CU numbers your computer will not only think, it will write books and teach us new theories ;D
ID: 45932
Urs Echternacht
Volunteer tester
Joined: 18 Jan 06
Posts: 1038
Credit: 18,734,730
RAC: 0
Germany
Message 45935 - Posted: 17 May 2013, 20:10:12 UTC

The question is: why such numbers? Another host on main has even higher numbers and is not getting too much work, using the same BOINC version, 7.0.65.

Here is the debug output of work_fetch:
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] work fetch start
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] choose_project() for ATI: buffer_low: no; sim_excluded_instances 0
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] choose_project() for CPU: buffer_low: no; sim_excluded_instances 0
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] ------- start work fetch state -------
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] target work buffer: 8640.00 + 43200.00 sec
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] --- project states ---
Fr 17 Mai 2013 22:00:49 CEST	SETI@home Beta Test	[work_fetch] REC 151466.771 prio -2.914135 can't req work: "no new tasks" requested via Manager
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] --- state for CPU ---
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] shortfall 119324.77 nidle 0.00 saturated 16471.96 busy 0.00
Fr 17 Mai 2013 22:00:49 CEST	SETI@home Beta Test	[work_fetch] fetch share 0.000
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] --- state for ATI ---
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] shortfall 0.00 nidle 0.00 saturated 1361018.69 busy 0.00
Fr 17 Mai 2013 22:00:49 CEST	SETI@home Beta Test	[work_fetch] fetch share 0.000
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] ------- end work fetch state -------
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] No project chosen for work fetch


_\|/_
U r s
ID: 45935
Eric J Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist
Joined: 15 Mar 05
Posts: 1547
Credit: 27,183,456
RAC: 0
United States
Message 45937 - Posted: 17 May 2013, 20:56:32 UTC - in response to Message 45935.  

I think I see what the issue is. The target work buffer is per core and per GPU. Since it's 4 processors and 2 GPUs, it wants a buffer of 207360 seconds of CPU work and 103680 seconds of GPU work.

That should still only be about 44 S@H results on your GPUs. So it looks like it was estimating that your GPUs could do a result in 20 seconds.

I'll back up further in the logs to see what the server-side estimates were.
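
Spelling out that arithmetic (assuming the buffer setting really is multiplied by the device count; the per-result times are illustrative):

// Target buffer scaled per device, and the task count it implies.
#include <cstdio>

int main() {
    double buffer_sec = 8640.0 + 43200.0;   // 0.1 + 0.5 days, per device
    int ncpus = 4, ngpus = 2;

    double cpu_target = buffer_sec * ncpus; // 207360 s of CPU work
    double gpu_target = buffer_sec * ngpus; // 103680 s of GPU work

    // ~2356 s per GPU result gives ~44 tasks; a bogus 20 s estimate
    // balloons the same buffer into thousands of tasks.
    std::printf("CPU target %.0f s, GPU target %.0f s\n", cpu_target, gpu_target);
    std::printf("GPU tasks at 2356 s each: %.0f\n", gpu_target / 2356.0);
    std::printf("GPU tasks at   20 s each: %.0f\n", gpu_target / 20.0);
    return 0;
}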

ID: 45937
Urs Echternacht
Volunteer tester
Joined: 18 Jan 06
Posts: 1038
Credit: 18,734,730
RAC: 0
Germany
Message 45938 - Posted: 17 May 2013, 23:00:50 UTC - in response to Message 45937.  

I think I see what the issue is. The target work buffer is per core and per GPU. Since it's 4 processors and 2 GPUs, it wants a buffer of 207360 seconds of CPU work and 103680 seconds of GPU work.

That should still only be about 44 S@H results on your GPUs. So it looks like it was estimating that your GPUs could do a result in 20 seconds.

I'll back up further in the logs to see what the server-side estimates were.


No idea why BOINC detects a single Opteron processor with two dual cores (4 siblings) as 4 processors.
That's something one might wonder about when looking at some stats sites, but one would never think it could have such a negative side effect.

20 seconds per task would be 120 times less than the real world shows (with the shorties that are around). Good that work fetch was stopped manually.

If I remember correctly, the first estimates from the server were 7:13 minutes per task.

_\|/_
U r s
ID: 45938
Richard Haselgrove
Volunteer tester
Joined: 3 Jan 07
Posts: 1451
Credit: 3,272,268
RAC: 0
United Kingdom
Message 45939 - Posted: 17 May 2013, 23:18:39 UTC - in response to Message 45935.  

The question is: why such numbers? Another host on main has even higher numbers and is not getting too much work, using the same BOINC version, 7.0.65.

Here is the debug output of work_fetch:

Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] work fetch start
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] choose_project() for ATI: buffer_low: no; sim_excluded_instances 0
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] choose_project() for CPU: buffer_low: no; sim_excluded_instances 0
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] ------- start work fetch state -------
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] target work buffer: 8640.00 + 43200.00 sec
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] --- project states ---
Fr 17 Mai 2013 22:00:49 CEST	SETI@home Beta Test	[work_fetch] REC 151466.771 prio -2.914135 can't req work: "no new tasks" requested via Manager
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] --- state for CPU ---
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] shortfall 119324.77 nidle 0.00 saturated 16471.96 busy 0.00
Fr 17 Mai 2013 22:00:49 CEST	SETI@home Beta Test	[work_fetch] fetch share 0.000
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] --- state for ATI ---
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] shortfall 0.00 nidle 0.00 saturated 1361018.69 busy 0.00
Fr 17 Mai 2013 22:00:49 CEST	SETI@home Beta Test	[work_fetch] fetch share 0.000
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] ------- end work fetch state -------
Fr 17 Mai 2013 22:00:49 CEST		[work_fetch] No project chosen for work fetch

Urs, could you try <sched_op_debug>, please?

18/05/2013 00:03:05 | SETI@home Beta Test | [sched_op] NVIDIA work request: 6576.98 seconds; 0.00 devices
18/05/2013 00:03:08 | SETI@home Beta Test | Scheduler request completed: got 17 new tasks
18/05/2013 00:03:08 | SETI@home Beta Test | [sched_op] estimated total NVIDIA task duration: 6662 seconds

is actually more helpful than <work_fetch_debug> for this sort of checking.

@ Eric,

It's quite subtle, and it needs to be checked carefully which values are 'wall time' and which are 'device time' - especially where multiple devices are in play.

Urs' "shortfall 119324.77" would have been a request for 1 day 9 hours of device-time work (if work fetch hadn't been disabled) even though the cache setting was for 2.4 + 12 hours of wall-time. I presume that host can crunch at least three CPU tasks in parallel, so there are three (or more) device-hours (CPU-core-hours) in every wall-hour.
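
A quick sanity check of that conversion (assuming the logged shortfall is summed across device instances):

// Device-seconds vs wall-hours for the logged CPU shortfall above.
#include <cstdio>

int main() {
    double shortfall_dev_sec = 119324.77;           // from the work_fetch log
    int ncpus = 4;

    double dev_hours  = shortfall_dev_sec / 3600.0; // ~33.1 device-hours
    double wall_hours = dev_hours / ncpus;          // ~8.3 wall-hours on 4 cores
    std::printf("%.1f device-hours = %.1f wall-hours on %d cores\n",
                dev_hours, wall_hours, ncpus);
    return 0;
}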
ID: 45939
Urs Echternacht
Volunteer tester
Joined: 18 Jan 06
Posts: 1038
Credit: 18,734,730
RAC: 0
Germany
Message 45942 - Posted: 18 May 2013, 1:34:51 UTC - in response to Message 45939.  

Urs, could you try <sched_op_debug>, please?

18/05/2013 00:03:05 | SETI@home Beta Test | [sched_op] NVIDIA work request: 6576.98 seconds; 0.00 devices
18/05/2013 00:03:08 | SETI@home Beta Test | Scheduler request completed: got 17 new tasks
18/05/2013 00:03:08 | SETI@home Beta Test | [sched_op] estimated total NVIDIA task duration: 6662 seconds

is actually more helpful than <work_fetch_debug> for this sort of checking.

@ Eric,

It's quite subtle, and it needs to be checked carefully which values are 'wall time' and which are 'device time' - especially where multiple devices are in play.

Urs' "shortfall 119324.77" would have been a request for 1 day 9 hours of device-time work (if work fetch hadn't been disabled) even though the cache setting was for 2.4 + 12 hours of wall-time. I presume that host can crunch at least three CPU tasks in parallel, so there are three (or more) device-hours (CPU-core-hours) in every wall-hour.

It was already active, but it didn't ask for more GPU work any more; I had to hit the update button (see below). Estimates for opencl_ati_sah are now near the real duration of the WUs, but there's no sign of "high priority" from BOINC.
Sa 18 Mai 2013 03:16:04 CEST SETI@home Beta Test update requested by user
Sa 18 Mai 2013 03:16:04 CEST [work_fetch] Request work fetch: project updated by user
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] work fetch start
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] choose_project() for ATI: buffer_low: no; sim_excluded_instances 0
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] choose_project() for CPU: buffer_low: yes; sim_excluded_instances 0
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test [work_fetch] set_request() for CPU: ninst 4 nused_total 94.090000 nidle_now 0.000000 fetch share 1.000000 req_inst 0.000000 req_secs 7716.433403
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] ------- start work fetch state -------
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] target work buffer: 8640.00 + 43200.00 sec
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] --- project states ---
Sa 18 Mai 2013 03:16:08 CEST SETI@home [work_fetch] REC 40991.853 prio 0.000000 can't req work: suspended via Manager
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test [work_fetch] REC 152672.826 prio -3.901548 can req work
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] --- state for CPU ---
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] shortfall 7716.43 nidle 0.00 saturated 46911.97 busy 0.00
Sa 18 Mai 2013 03:16:08 CEST SETI@home [work_fetch] fetch share 0.000
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test [work_fetch] fetch share 1.000
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] --- state for ATI ---
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] shortfall 0.00 nidle 0.00 saturated 1341798.63 busy 0.00
Sa 18 Mai 2013 03:16:08 CEST SETI@home [work_fetch] fetch share 0.000
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test [work_fetch] fetch share 1.000
Sa 18 Mai 2013 03:16:08 CEST [work_fetch] ------- end work fetch state -------
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test [sched_op] Starting scheduler request
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test [work_fetch] request: CPU (7716.43 sec, 0.00 inst) ATI (0.00 sec, 0.00 inst)
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test Sending scheduler request: Requested by user.
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test Reporting 8 completed tasks
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test Requesting new tasks for CPU
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test [sched_op] CPU work request: 7716.43 seconds; 0.00 devices
Sa 18 Mai 2013 03:16:08 CEST SETI@home Beta Test [sched_op] ATI work request: 0.00 seconds; 0.00 devices

Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test Scheduler request completed: got 1 new tasks
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] Server version 701
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test Project requested delay of 7 seconds
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] estimated total CPU task duration: 15294 seconds
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] estimated total ATI task duration: 0 seconds

Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] handle_scheduler_reply(): got ack for task 01mr13ab.26243.14386.16.16.160_0
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] handle_scheduler_reply(): got ack for task 01mr13ab.26243.14386.16.16.107_1
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] handle_scheduler_reply(): got ack for task 01mr13ab.26243.14386.16.16.116_1
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] handle_scheduler_reply(): got ack for task 01mr13ab.26243.14386.16.16.169_0
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] handle_scheduler_reply(): got ack for task 01mr13ab.26243.14386.16.16.142_1
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] handle_scheduler_reply(): got ack for task 01mr13ab.26243.14386.16.16.105_1
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] handle_scheduler_reply(): got ack for task 01mr13ab.26243.14386.16.16.134_1
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] handle_scheduler_reply(): got ack for task 01mr13ab.32491.16022.16.16.83_0
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] Deferring communication for 7 sec
Sa 18 Mai 2013 03:16:53 CEST SETI@home Beta Test [sched_op] Reason: requested by project
Sa 18 Mai 2013 03:16:53 CEST [work_fetch] Request work fetch: RPC complete
Sa 18 Mai 2013 03:16:55 CEST SETI@home Beta Test Started download of 28se11ac.15329.14107.3.16.50.vlar
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] Request work fetch: Backoff ended for SETI@home Beta Test
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] work fetch start
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] choose_project() for ATI: buffer_low: no; sim_excluded_instances 0
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] choose_project() for CPU: buffer_low: no; sim_excluded_instances 0
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] ------- start work fetch state -------
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] target work buffer: 8640.00 + 43200.00 sec
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] --- project states ---
Sa 18 Mai 2013 03:17:06 CEST SETI@home [work_fetch] REC 40989.851 prio 0.000000 can't req work: suspended via Manager
Sa 18 Mai 2013 03:17:06 CEST SETI@home Beta Test [work_fetch] REC 152676.625 prio -3.904249 can req work
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] --- state for CPU ---
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] shortfall 2748.39 nidle 0.00 saturated 49091.61 busy 0.00
Sa 18 Mai 2013 03:17:06 CEST SETI@home [work_fetch] fetch share 0.000
Sa 18 Mai 2013 03:17:06 CEST SETI@home Beta Test [work_fetch] fetch share 1.000
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] --- state for ATI ---
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] shortfall 0.00 nidle 0.00 saturated 1341679.50 busy 0.00
Sa 18 Mai 2013 03:17:06 CEST SETI@home [work_fetch] fetch share 0.000
Sa 18 Mai 2013 03:17:06 CEST SETI@home Beta Test [work_fetch] fetch share 1.000
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] ------- end work fetch state -------
Sa 18 Mai 2013 03:17:06 CEST [work_fetch] No project chosen for work fetch

_\|/_
U r s
ID: 45942
Mike
Volunteer tester
Joined: 16 Jun 05
Posts: 2531
Credit: 1,074,556
RAC: 0
Germany
Message 45946 - Posted: 18 May 2013, 10:51:06 UTC

I'm wondering why this host is getting AP 6.04 units.

http://setiweb.ssl.berkeley.edu/beta/results.php?hostid=60626

The drivers in use are 186.18.
If I'm not mistaken, the first NV OpenCL driver is 195.55.
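
For illustration, the kind of gate an NV OpenCL plan class would need (hypothetical code, not the project's actual sched_customize logic; a BOINC-style packed driver version is assumed, e.g. 195.55 -> 19555):

// Refuse OpenCL work to hosts whose display driver predates the
// first OpenCL-capable NVIDIA release.
#include <cstdio>

const int FIRST_NV_OPENCL_DRIVER = 19555; // 195.55, packed (assumed encoding)

bool nv_opencl_ok(int display_driver_version) {
    return display_driver_version >= FIRST_NV_OPENCL_DRIVER;
}

int main() {
    std::printf("driver 186.18 eligible: %s\n",
                nv_opencl_ok(18618) ? "yes" : "no"); // prints "no"
    return 0;
}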

With each crime and every kindness we birth our future.
ID: 45946