Total running time for CUDA/GPU tasks

Message boards : SETI@home Enhanced : Total running time for CUDA/GPU tasks

Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1451
Credit: 3,272,268
RAC: 0
United Kingdom
Message 35879 - Posted: 15 Dec 2008, 18:53:59 UTC

Someone on the BOINC message boards has just pointed out that GPUGRID adds an extra line to the task's stderr output:

<stderr_txt>
...
# Approximate elapsed time for entire WU: 25733.974 s
called boinc_finish
</stderr_txt>

While we're waiting for BOINC to officially record the information and display it on the result pages here, this would be a helpful initiative for SETI to copy.
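For illustration only, a minimal sketch (not GPUGRID's actual source; the timing here is plain C library calls rather than any BOINC-specific timer): anything an application writes to stderr before calling boinc_finish() is captured into the result's <stderr_txt>, so reporting elapsed wall-clock time only takes a start timestamp and one fprintf.

    #include <cstdio>
    #include <ctime>
    #include "boinc_api.h"    // boinc_init(), boinc_finish()

    int main(int argc, char** argv) {
        boinc_init();
        time_t wu_start = time(NULL);

        // ... the actual work for the workunit runs here ...

        double elapsed = difftime(time(NULL), wu_start);
        fprintf(stderr, "# Approximate elapsed time for entire WU: %.3f s\n", elapsed);

        boinc_finish(0);      // stderr written above ends up in <stderr_txt>
        return 0;
    }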
ID: 35879
Phil Klassen
Volunteer tester

Joined: 16 Dec 08
Posts: 5
Credit: 5,088,939
RAC: 0
Canada
Message 35910 - Posted: 16 Dec 2008, 11:30:42 UTC - in response to Message 35879.  

I have seen this on my GPUGRID work units also. It sure helped me fine-tune the overclock settings on my cards.

ID: 35910
Thamir Ghaslan
Volunteer tester

Joined: 15 Dec 08
Posts: 3
Credit: 1,671
RAC: 0
Saudi Arabia
Message 35922 - Posted: 16 Dec 2008, 14:55:34 UTC - in response to Message 35879.  

Someone on the BOINC message boards has just pointed out that GPUGRID adds an extra line to the task's stderr output:

<stderr_txt>
...
# Approximate elapsed time for entire WU: 25733.974 s
called boinc_finish
</stderr_txt>

While we're waiting for BOINC to officially record the information and display it on the result pages here, this would be a helpful initiative for SETI to copy.



That was me. :)

The guys at GPUGRID only use CPU time to query the GPU for its status; all the actual computation happens on the GPU and barely anything runs on the CPU.

In the past the CPU and GPU times were identical at GPUGRID, but high credit was still granted based on a GPU FLOPS formula. That approach kept a CPU core at 100% utilization, which was basically wasteful.

Now the CPU time is lower, and so is the rate at which the CPU queries the GPU.
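Roughly, the difference looks like this (a sketch under my own assumptions, not GPUGRID's actual code): spinning on cudaStreamQuery() keeps a CPU core at 100% for the whole run, while sleeping briefly between polls drops the reported CPU time to a tiny fraction of the GPU run time.

    #include <cuda_runtime.h>
    #include <chrono>
    #include <thread>

    // Busy-wait: burns one full CPU core until the GPU stream finishes.
    void wait_spinning(cudaStream_t stream) {
        while (cudaStreamQuery(stream) == cudaErrorNotReady) {
            /* spin */
        }
    }

    // Polite wait: check the GPU only once per millisecond; CPU time stays near zero.
    void wait_sleeping(cudaStream_t stream) {
        while (cudaStreamQuery(stream) == cudaErrorNotReady) {
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }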

Like I said on the other board, it won't harm SETI to learn a trick or two from GPUGRID. And SETI could also work with NVIDIA, as GPUGRID has a good level of coordination and support from NVIDIA for code optimization and ironing out bugs.

This is a beta project, and as with all betas it is expected to be unstable and immature.
ID: 35922
Eric J Korpela
Volunteer moderator
Project administrator
Project developer
Project scientist

Joined: 15 Mar 05
Posts: 1547
Credit: 27,183,456
RAC: 0
United States
Message 35990 - Posted: 17 Dec 2008, 19:27:49 UTC - in response to Message 35922.  

I agree, we should implement this.

Even better would be for BOINC to add a concept of "coprocessor time" to the client/app communication.

Eric
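For context, a sketch of what that would touch (the extended call below is purely hypothetical, not an existing BOINC interface): today an application can only report CPU time and progress to the core client through boinc_report_app_status(), so a "coprocessor time" concept would mean carrying a GPU-time value through that same channel.

    #include "boinc_api.h"

    void report_progress(double cpu_time, double checkpoint_cpu_time,
                         double fraction_done, double gpu_time) {
        // Existing BOINC API call: CPU time and progress only.
        boinc_report_app_status(cpu_time, checkpoint_cpu_time, fraction_done);

        // Hypothetical extension (does NOT exist in BOINC) that would also
        // carry coprocessor time:
        // boinc_report_app_status_coproc(cpu_time, checkpoint_cpu_time,
        //                                fraction_done, gpu_time);
        (void)gpu_time;   // used only by the hypothetical call above
    }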
ID: 35990
Richard Haselgrove
Volunteer tester

Joined: 3 Jan 07
Posts: 1451
Credit: 3,272,268
RAC: 0
United Kingdom
Message 36151 - Posted: 20 Dec 2008, 23:49:08 UTC - in response to Message 35990.  

I agree, we should implement this.

Even better would be for BOINC to add a concept of "coprocessor time" to the client/app communication.

Eric

Eric,

While you're picking up ideas for (relatively quick and easy) improvements, could I draw your attention to Raistmer's 'increased thread priority' idea, released for testing at SETI Main and described in principle at Lunatics (pre-release area, login required)?

I'm afraid I don't have source-code references for the changes required, but it sounds minor and effective.
ID: 36151
Raistmer
Volunteer tester

Joined: 18 Aug 05
Posts: 2423
Credit: 15,878,738
RAC: 0
Russia
Message 36152 - Posted: 21 Dec 2008, 0:07:32 UTC - in response to Message 36151.  

It's effective indeed. My 9600GSO is doing each task in ~20 min while all four cores are busy with Einstein@home (no degradation in speed for Einstein, a net gain for both projects).
And the modification is really VERY easy to implement:

int seti_analyze(ANALYSIS_STATE& state) {
    sah_complex* DataIn = state.savedWUData;
    int NumDataPoints = state.npoints;
    sah_complex* ChirpedData = NULL;
    sah_complex* WorkData = NULL;
    float* PowerSpectrum = NULL;
    float* tPowerSpectrum;   // Transposed power spectra if used.

    // R: Added SetThreadPriority Win-API call (from <windows.h>) to allow
    // full usage of the GPU on non-idle systems.
#ifdef _WIN32
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL)) {
        DWORD error = GetLastError();
        LPSTR lpBuffer = NULL;
        // FORMAT_MESSAGE_ALLOCATE_BUFFER makes the API allocate the string;
        // it expects the address of the pointer, cast to LPSTR.
        FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_ALLOCATE_BUFFER,
                       NULL, error, 0, (LPSTR)&lpBuffer, 0, NULL);
        fprintf(stderr, "ERROR: can't set priority: %s\n", lpBuffer);
        LocalFree(lpBuffer);
    }
#endif

#ifdef USE_CUDA
.....

That's all! You don't even need the error handling. It could be as easy as:

#ifdef _WIN32
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);
#endif

News about SETI opt app releases: https://twitter.com/Raistmer
ID: 36152
