Message boards :
Number crunching :
SETI & MW on one NV GPU simultaneously?
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
I run 2 SETI (MB) WUs simultaneously on my NV GT730. In the setiathome.berkeley.edu project folder there is an app_info.xml file with CUDA/0.5. In the milkyway.cs.rpi.edu_milkyway project folder there is an app_config.xml file with:

<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
  <app>
    <name>milkyway_separation__modified_fit</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>

If I suspend all SETI CUDA WUs except one, so that just one SETI CUDA WU is running, BOINC doesn't ask Milkyway (resource share 0) for a new WU. Only when BOINC has no SETI CUDA WUs at all does it ask Milkyway for new WUs. When SETI has no new CUDA WUs ready for download and only one SETI CUDA WU is left in BOINC, is it possible for SETI and Milkyway to run simultaneously on my NV GT730? Thanks.

PS. If I add a 'setiathome_v7' <app> entry to the Milkyway app_config.xml file, BOINC says 'unknown app'. Do I need to make an app_config.xml file for SETI as well?
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
I think you just have to add the app entries in the file.
Claggy Send message Joined: 5 Jul 99 Posts: 4654 Credit: 47,537,079 RAC: 4 |
Sutaru Tsureku wrote:
PS. If I add a 'setiathome_v7' <app> entry to the Milkyway app_config.xml file, BOINC says 'unknown app'.

You have to make a separate app_config.xml file for each project.

Claggy
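Concretely, each app_config.xml lives inside its own project's directory under the BOINC data folder, so the layout would look something like this (paths shown for a default Windows install; your data directory may differ):

```
C:\ProgramData\BOINC\projects\setiathome.berkeley.edu\app_config.xml
C:\ProgramData\BOINC\projects\milkyway.cs.rpi.edu_milkyway\app_config.xml
```

Each file only names apps belonging to its own project, which is why a setiathome_v7 entry in the Milkyway file produces 'unknown app'.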
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14653 Credit: 200,643,578 RAC: 874 |
Sutaru Tsureku wrote:
PS. If I add a 'setiathome_v7' <app> entry to the Milkyway app_config.xml file, BOINC says 'unknown app'.

And put each one in the matching project folder. But they are entirely optional, and having one for one project shouldn't affect the behaviour of the other project. Work fetch issues are best handled by enabling the <work_fetch_debug> log flag, and working through the many and various possible reasons why work fetch is disabled, delayed, or simply not needed for a particular project.
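For reference, that flag goes in cc_config.xml in the BOINC data directory (a minimal sketch; if you already have a cc_config.xml, merge the flag into its existing <log_flags> section rather than replacing the file). Then use Options > Read config files, or restart BOINC:

```xml
<cc_config>
  <log_flags>
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>
```

The work-fetch reasoning then appears in the event log each time BOINC decides whether to request work from each project.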
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
Ahh, you're right Claggy, they are different directories, my bad.
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
Thanks. So do I understand it correctly: 1 SETI and 1 Milkyway WU should run simultaneously on my NV VGA card? Next time I'll try <work_fetch_debug>.

How should an app_config.xml file look for the Intel iGPU (1 WU for SETI or AP) and the NV GPU (2 WUs for SETI or 1 WU for AP)? Like this?

<app_config>
  <app>
    <name>setiathome_v7</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.04</cpu_usage>
    </gpu_versions>
  </app>
  <app_version>
    <app_name>setiathome_v7</app_name>
    <plan_class>opencl_intel_gpu_sah</plan_class>
    <avg_ncpus>0.04</avg_ncpus>
    <ngpus>1.0</ngpus>
    <cmdline></cmdline>
  </app_version>
  <app>
    <name>setiathome_v7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.04</cpu_usage>
    </gpu_versions>
  </app>
  <app_version>
    <app_name>setiathome_v7</app_name>
    <plan_class>cuda50</plan_class>
    <avg_ncpus>0.04</avg_ncpus>
    <ngpus>0.5</ngpus>
    <cmdline></cmdline>
  </app_version>
  <app>
    <name>astropulse_v7</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.04</cpu_usage>
    </gpu_versions>
  </app>
  <app_version>
    <app_name>astropulse_v7</app_name>
    <plan_class>opencl_intel_gpu_102</plan_class>
    <avg_ncpus>0.04</avg_ncpus>
    <ngpus>1.0</ngpus>
    <cmdline></cmdline>
  </app_version>
  <app>
    <name>astropulse_v7</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>0.04</cpu_usage>
    </gpu_versions>
  </app>
  <app_version>
    <app_name>astropulse_v7</app_name>
    <plan_class>opencl_nvidia_100</plan_class>
    <avg_ncpus>0.04</avg_ncpus>
    <ngpus>1.0</ngpus>
    <cmdline></cmdline>
  </app_version>
</app_config>

Or do the Intel iGPU and NV GPU not need entries if they run just 1 WU? Then just an app_config.xml file with entries for SETI on the NV GPU?

<app_config>
  <app>
    <name>setiathome_v7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.04</cpu_usage>
    </gpu_versions>
  </app>
  <app_version>
    <app_name>setiathome_v7</app_name>
    <plan_class>cuda50</plan_class>
    <avg_ncpus>0.04</avg_ncpus>
    <ngpus>0.5</ngpus>
    <cmdline></cmdline>
  </app_version>
</app_config>

(In both cases the app_info.xml entries are unchanged, still with all of...?

...
<avg_ncpus>****</avg_ncpus>
<max_ncpus>****</max_ncpus>
<plan_class>****</plan_class>
<coproc>
  <type>****</type>
  <count>****</count>
</coproc>
...

Or do I need to delete these entries in one or both of the two app_config.xml versions mentioned? Thanks.
Josef W. Segur Send message Joined: 30 Oct 99 Posts: 4504 Credit: 1,414,761 RAC: 0 |
An app_config.xml file provides a way of modifying the app_version information supplied by an app_info.xml for anonymous platform, or by the scheduler reply for stock. Within the file, an <app>...</app> section modifies all app_versions for that app. Multiple <app> sections with the same name don't make any sense; I assume parsing ends up using only the last one.

If your app_info.xml is basically unchanged from what the Lunatics installer supplied (all GPU <count> fields at 1), and you merely want the app_config.xml to control the CUDA NVIDIA GPU usage, you need no more than:

<app_config>
  <app_version>
    <app_name>setiathome_v7</app_name>
    <plan_class>cuda50</plan_class>
    <avg_ncpus>0.04</avg_ncpus>
    <ngpus>0.5</ngpus>
  </app_version>
</app_config>

OTOH, if you previously edited app_info.xml to give the CUDA plan classes a count of 0.5, the app_config.xml is not needed at all.

Getting the host to ask for a Milkyway task to fill the other half of the GPU when there's only one SaH CUDA task on the host is difficult. The share 0 setting for Milkyway won't ask for work until the GPU is idle (not even half used). Perhaps a very small share like 0.01 would work, I don't know.

Joe
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
What's the smallest value for resource share that will still work? ;-) If I enter 0.0001 in the Milkyway prefs, it's still shown after saving. 0.00001 is then shown as 1.0E-5. In both cases it's 0 in BOINC. Thanks.
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
IIRC the values are interpreted and used as double precision. On Windows, the smallest scheduler timeslice is on the order of 1-10 ms, so with processing overhead added, switches can happen at most somewhat less often than that. Note that such frequent switching has overheads, so if that's your game then optimisation [which involves math] becomes the challenge.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
Sutaru Tsureku Send message Joined: 6 Apr 07 Posts: 7105 Credit: 147,663,825 RAC: 5 |
I'm sorry... it looks like I didn't use the correct words... I meant the resource share of/between projects.

Example: at SETI I use 1000000, at Milkyway 0. The MW app's 'GPU Load' is just ~50%. If a SETI WU is available in BOINC, it runs either the SETI or the MW app/WU, not both simultaneously. Also, when no SETI WU is in BOINC, BOINC runs just 1 MW WU on the NV GT730 and doesn't ask for more WUs (2 WUs/GPU). When I set a 0.0001 project resource share for MW in BOINC, it immediately downloaded 12 CPU and 45 GPU WUs.

Is there maybe a way to keep project resource share 0 and have BOINC run 2 Milkyway WUs simultaneously (maybe via the cc_config.xml file or something)? Maybe the smallest resource share (which Milkyway will actually use) is 1.0E-100? ;-) Thanks.
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Sutaru Tsureku wrote:
I'm sorry... it looks like I didn't use the correct words...

Have you tried setting the resource share for both projects to 0? I'm at least 72% sure that will not work like you want either, but perhaps it may. I think you may need to use <max_concurrent>1</max_concurrent> with <plan_class>mt</plan_class> in your app_config.xml to get what you would like. However I'm not sure that those two options can be used together.

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
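As a sketch of the max_concurrent idea (values are illustrative; the <name> must match Milkyway's actual app name, and as noted above it's untested whether this combines cleanly with per-plan-class settings):

```xml
<app_config>
  <app>
    <name>milkyway</name>
    <!-- cap the number of Milkyway tasks running at once -->
    <max_concurrent>1</max_concurrent>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Note that <max_concurrent> limits how many tasks run simultaneously; it doesn't by itself make BOINC fetch extra work to fill the other half of the GPU.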
KLiK Send message Joined: 31 Mar 14 Posts: 1304 Credit: 22,994,597 RAC: 60 |
Usually I crunch SETi@home exclusively on my GPUs...it gives me enough freedom 4 work on PC & simple tasks - like writing this post now! :D When all work is done on SETi@home I switch to MW or A@h or maybe GPUgrid... ;) non-profit org. Play4Life in Zagreb, Croatia, EU |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.