SETI@home v8 beta to begin on Tuesday
Joined: 3 Jan 07 Posts: 1451 Credit: 3,272,268 RAC: 0
According to the Applications page, the new intel_gpu app for OS X loaded 7 April is getting plenty of exercise: Mac OS X/64-bit Intel 8.10 (opencl_intel_gpu_sah) 7 Apr 2016, 1:01:54 UTC 30 GigaFLOPS. And according to Eric, posting at the main project: "The VLARs go out to ATI GPUs, but are held back from NVIDIA. If I could find a way to send VLAR to only NVIDIA OpenCL and not NVIDIA CUDA, I would..." No specific mention of Intel there, but the usual interpretation is that VLARs go to everything except NVIDIA. Some people are saying their ATIs aren't getting data either, though. Perhaps we need to re-check that theory matches practice on that one - I have no OS X hosts to test with.
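For readers following the scheduler discussion: the rule described above amounts to gating VLAR work on the reported GPU vendor (and, in Eric's wish, on the API as well). A minimal C++ sketch of that logic; all type and function names here are hypothetical illustrations, not taken from the real SETI@home server code:

```cpp
#include <iostream>
#include <string>

// Hypothetical view of one GPU slot as a scheduler might see it.
struct GpuInfo {
    std::string vendor;   // e.g. "NVIDIA", "ATI", "intel_gpu"
    bool        opencl;   // true if the plan class being filled is an OpenCL one
};

// Rule as described in the thread today: VLAR goes to everything except NVIDIA.
bool send_vlar_current(const GpuInfo& gpu) {
    return gpu.vendor != "NVIDIA";
}

// Rule Eric says he would like: NVIDIA OpenCL yes, NVIDIA CUDA no.
bool send_vlar_wished(const GpuInfo& gpu) {
    if (gpu.vendor != "NVIDIA") return true;  // ATI / Intel: always eligible
    return gpu.opencl;                        // NVIDIA: only the OpenCL plan class
}

int main() {
    GpuInfo ati{"ATI", true}, intel{"intel_gpu", true};
    GpuInfo nv_cuda{"NVIDIA", false}, nv_ocl{"NVIDIA", true};
    std::cout << send_vlar_current(ati) << send_vlar_current(intel)
              << send_vlar_current(nv_cuda) << send_vlar_current(nv_ocl) << '\n'; // 1100
    std::cout << send_vlar_wished(nv_cuda) << send_vlar_wished(nv_ocl) << '\n';   // 01
}
```

The point of the second variant is that one vendor can expose both a CUDA and an OpenCL plan class, and Eric's quote suggests only the OpenCL one would be a reasonable VLAR target.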
Joined: 18 May 06 Posts: 280 Credit: 26,477,429 RAC: 0
FWIW, I have six Mac minis with Intel HD graphics that have been crunching for a while now, but they are currently out of work. Not sure how much they account for the 30 GigaFLOPS that page claims. Also, my Macs with NVIDIA are still getting tasks just fine. Maybe that "all but NVIDIA" rule doesn't apply to OS X? Dublin, California Team: SETI.USA
Joined: 30 Dec 13 Posts: 258 Credit: 12,340,341 RAC: 0
Has something changed with regard to the work units? I've noticed an increase in the time required to complete them over the last 2 hours, and the increase came right after a run of very short work units.
Joined: 18 Aug 05 Posts: 2423 Credit: 15,878,738 RAC: 0
It seems iGPU OpenCL doesn't get GUPPI tasks. News about SETI opt app releases: https://twitter.com/Raistmer
Joined: 18 Jan 06 Posts: 1038 Credit: 18,734,730 RAC: 0
I also received errors running the app SETI@home v8 v8.10 (opencl_ati5_SoG_nocal) in Linux; SIGSEGV: segmentation violation Only one error (a SIGSEGV inside the fglrx driver), and only on that host. If it happens again, please try rerunning it offline to see whether the error is repeatable. I have just done an offline rerun of your blc3_2bit_guppi_57451_20612_HIP62472_0007.4654.831.18.21.242.vlar task. _\|/_ U r s
Joined: 12 Nov 10 Posts: 1149 Credit: 32,460,657 RAC: 1
It seems iGPU OpenCL doesn't get GUPPI tasks Not sure if this was referring to the query re OS X below, but my Windows host isn't getting any new work for its Intel GPU either. Are any GPUs running the current crop of GUPPIs?
Joined: 18 Aug 05 Posts: 2423 Credit: 15,878,738 RAC: 0
It seems iGPU OpenCL doesn't get GUPPI tasks Both ATI and NV OpenCL are running GBT data under Windows, but iGPU isn't. News about SETI opt app releases: https://twitter.com/Raistmer
Joined: 12 Nov 10 Posts: 1149 Credit: 32,460,657 RAC: 1
It seems iGPU OpenCL doesn't get GUPPI tasks Is there a reason why not, or is there a config file I can change somewhere to make them run?
Joined: 18 Aug 05 Posts: 2423 Credit: 15,878,738 RAC: 0
It seems iGPU OpenCL doesn't get GUPPI tasks Let's wait for Eric's answer. The iGPU can do GBT just as other GPUs can. News about SETI opt app releases: https://twitter.com/Raistmer
Joined: 15 Mar 05 Posts: 1547 Credit: 27,183,456 RAC: 0
I plan to make the mods to the scheduler to allow all GPUs to run GBT VLAR workunits this week. The mod will go to beta first. We may have to restrict by GPU model if there are problems. My Intel GPU has ceased receiving any workunits, either ALFA or GBT. Not sure why that would be.
Joined: 3 Jan 07 Posts: 1451 Credit: 3,272,268 RAC: 0
My Intel GPU has ceased receiving any workunits, either ALFA or GBT. Not sure why that would be. Just set up a test host.
18/04/2016 21:25:05 | SETI@home Beta Test | Requesting new tasks for Intel GPU
18/04/2016 21:25:05 | SETI@home Beta Test | [sched_op] Intel GPU work request: 12179.23 seconds; 1.00 devices
18/04/2016 21:25:07 | SETI@home Beta Test | Scheduler request completed: got 0 new tasks
18/04/2016 21:25:07 | SETI@home Beta Test | No tasks sent
18/04/2016 21:25:07 | SETI@home Beta Test | Tasks for CPU are available, but your preferences are set to not accept them
18/04/2016 21:25:07 | SETI@home Beta Test | Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
18/04/2016 21:25:07 | SETI@home Beta Test | Project requested delay of 7 seconds
That looks OK at this end. Host is #72559, if you need to check the server logs.
Joined: 18 Aug 05 Posts: 2423 Credit: 15,878,738 RAC: 0
Eric, please check your mailbox. There is an issue that requires your decision. News about SETI opt app releases: https://twitter.com/Raistmer
Joined: 30 Dec 13 Posts: 258 Credit: 12,340,341 RAC: 0
Raistmer, is there any difference between your dropbox version of SoG and the stock v8.12 here on Beta?
Joined: 18 Aug 05 Posts: 2423 Credit: 15,878,738 RAC: 0
Raistmer, Should be the same binaries, same revision. News about SETI opt app releases: https://twitter.com/Raistmer
Joined: 30 Dec 13 Posts: 258 Credit: 12,340,341 RAC: 0
I've noticed the stock version runs about 9 minutes faster than the dropbox version did. Not sure why. OK, thanks.
Joined: 18 Aug 05 Posts: 2423 Credit: 15,878,738 RAC: 0
I've noticed the stock version runs about 9 minutes faster than the dropbox version did. Just compare the CL file revision numbers in the file names - are they the same? Maybe the FFTW wisdom was regenerated when you transitioned between stock and anonymous platform? That can affect speed, especially if the regeneration happened at a bad time and non-optimal codelets were selected as defaults. [offtopic] BTW, it's not Dropbox, it's the Mail.ru cloud. And Mail.ru is the company that the famous Milner owns ;) So download from there, look at the ads there, generate revenue for Mail.ru - it seems it ultimately goes to the right cause :D News about SETI opt app releases: https://twitter.com/Raistmer
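A note on the FFTW wisdom point above: FFTW times candidate codelets when it plans a transform (with FFTW_MEASURE) and can cache the winners as "wisdom"; if that cache is rebuilt while the machine is busy, slower codelets can end up as the cached defaults. A minimal sketch of generating and reusing wisdom with the standard single-precision FFTW API - the transform size and file name are arbitrary examples, not values taken from the SETI apps:

```cpp
#include <fftw3.h>
#include <cstdio>

int main() {
    const int n = 131072;                          // example transform size only
    const char* wisdom_file = "fftwf_wisdom.dat";  // example path only

    // Reuse previously measured plans if a wisdom file already exists.
    if (!fftwf_import_wisdom_from_filename(wisdom_file))
        std::printf("no wisdom found, planning from scratch\n");

    float* in = (float*) fftwf_malloc(sizeof(float) * n);
    fftwf_complex* out = (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * (n / 2 + 1));

    // FFTW_MEASURE benchmarks candidate codelets and keeps the fastest; doing
    // this on a heavily loaded machine can pick suboptimal ones, which is the
    // speed effect described in the post above.
    fftwf_plan p = fftwf_plan_dft_r2c_1d(n, in, out, FFTW_MEASURE);

    // Cache the chosen plans so later runs skip the measurement step.
    fftwf_export_wisdom_to_filename(wisdom_file);

    fftwf_destroy_plan(p);
    fftwf_free(in);
    fftwf_free(out);
    return 0;
}
```

Deleting a stale wisdom file and letting it regenerate on an otherwise idle machine is the usual remedy when plans look slow.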
Joined: 7 Jun 09 Posts: 285 Credit: 2,822,466 RAC: 0
Eric J Korpela wrote: I plan to make the mods to the scheduler to allow all GPUs to run GBT VLAR workunits this week. The mod will go to beta first. We may have to restrict by GPU model if there are problems.

My PC has no problem crunching 24/7 with the current mix of guppi and normal tasks at Main. It has two Intel Xeon E5-2630v2 CPUs (6 cores / 12 threads each) and four AMD Radeon R9 Fury X VGA cards. Fully loaded (HT off for faster GPU app calculation, so 6 + 6 cores running at 2.9 GHz turbo): 3 CPU cores are reserved for the 4 GPU apps and 9 CPU cores run CPU tasks.

On one Fury X: a mid-AR task takes 6 minutes, so 100 tasks per process take 10 hours (if no new tasks were available, the card would be idle or on secondary projects after 10 hours). A VHAR task takes 3 minutes, so that point would come after 5 hours. With optimized command-line settings it is a bit faster.

So the current mix of guppi and normal tasks isn't a problem for the PC described above running 24/7 (an example with the fastest VGA card currently available and plenty of CPU cores). I guess a Fury X is roughly RAC-equal (tasks/day) to a GTX 980 Ti or Titan. So sending guppi and normal .vlar tasks only to the CPU isn't a problem. No need to repair a well-running system. ;-)

On CPU with the Lunatics optimized AVX x64 app:
blc0_2bit_guppi_57403_69832_HIP11048_0006.29397.831.21.44.252.vlar_1 / AR=0.012257 http://setiathome.berkeley.edu/result.php?resultid=4872040127 Run time = 55 min 13 sec, CPU time = 52 min 17 sec
24ap10ad.22909.211699.3.30.43.vlar_1 / AR=0.009909 http://setiathome.berkeley.edu/result.php?resultid=4871972661 Run time = 1 hr 34 min 50 sec, CPU time = 1 hr 29 min 46 sec
24ap10ad.18110.221515.4.31.31_0 / AR=0.391145 http://setiathome.berkeley.edu/result.php?resultid=4872350972 Run time = 1 hr 41 min 53 sec, CPU time = 1 hr 36 min 21 sec

On GPU with stock SETI@home v8 v8.12 (opencl_ati5_SoG_nocal):
blc3_2bit_guppi_57451_20612_HIP62472_0007.22580.831.17.20.60.vlar_0 / AR=0.008175 http://setiweb.ssl.berkeley.edu/beta/result.php?resultid=23610097 Run time = 17 min 22 sec, CPU time = 5 min 26 sec
24mr10ac.30923.18083.5.39.155.vlar_1 / AR=0.016670 http://setiweb.ssl.berkeley.edu/beta/result.php?resultid=23602696 Run time = 22 min 45 sec, CPU time = 7 min 5 sec
24mr10ac.30923.19310.5.39.173_1 / AR=0.429040 http://setiweb.ssl.berkeley.edu/beta/result.php?resultid=23605054 Run time = 5 min 59 sec, CPU time = 4 min 56 sec

SETI is my first (and primary) project and I participate with my heart's blood - that is why I built a €5,000 PC (the one described above, which is very much money for me) just for SETI. I would like to give SETI the maximum performance my machine can deliver.

Sending *_guppi_*.vlar tasks to GPUs would be counterproductive. In the time a (Fury X) VGA card calculates one *_guppi_*.vlar task, it could calculate 3 mid-AR tasks. Bad. (Or, in that time, it could calculate tasks for other projects.) Two *_guppi_*.vlar tasks on the CPU last about as long as one mid-AR task. Very good. So it would be very bad for the overall performance of a PC to send *_guppi_*.vlar tasks to the GPU.

If it is decided to send *_guppi_*.vlar tasks to GPUs as well, I worry that the tool which pushes tasks around will get a revival (I don't currently know whether it works with SETI v8; if not, maybe a new tool will appear, or a reworked version of the current one). I worry a lot of people will send the *_guppi_*.vlar tasks meant for their GPUs to their CPUs instead - and this will screw up CreditNew.

(If it is decided to send *_guppi_*.vlar tasks to GPUs, perhaps an option could be added to the project prefs - a checkbox for "send .vlar tasks to GPU"? So all members could decide on their own about their own PC performance.)

Just my humble opinion. Thanks.
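As a quick cross-check of the ratios claimed in the post above, using only the runtimes quoted there (figures specific to that host, so a sketch rather than a general rule):

```cpp
#include <cstdio>

int main() {
    // Runtimes quoted in the post above, converted to seconds.
    const double gpu_guppi_vlar = 17 * 60 + 22;   // Fury X, blc3_2bit_guppi ... .vlar
    const double gpu_mid_ar     =  5 * 60 + 59;   // Fury X, AR=0.429040
    const double cpu_guppi_vlar = 55 * 60 + 13;   // Lunatics AVX, blc0_2bit_guppi ... .vlar
    const double cpu_mid_ar     = 101 * 60 + 53;  // Lunatics AVX, AR=0.391145

    // On the GPU, one guppi VLAR costs roughly three mid-AR tasks ...
    std::printf("GPU: one guppi VLAR ~= %.1f mid-AR tasks\n", gpu_guppi_vlar / gpu_mid_ar);
    // ... while on the CPU, one mid-AR task costs roughly two guppi VLARs.
    std::printf("CPU: one mid-AR ~= %.1f guppi VLARs\n", cpu_mid_ar / cpu_guppi_vlar);
    return 0;
}
```

Both ratios come out close to the poster's summary: about 2.9 mid-AR tasks per guppi VLAR on the Fury X, and about 1.8 guppi VLARs per mid-AR task on the CPU.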
Joined: 9 Jan 16 Posts: 51 Credit: 1,038,205 RAC: 0
So all members could decide on their own about their own PC performance. I think that in a perfect world it would be possible for folks to make these decisions for themselves, within limits that protect data integrity. For myself, I'd love to be able to pull WUs down to a central client and then write rules that would distribute them to my various crunchers by WU type, based on such things as which CPU, GPU, etc. I am running in a particular place. Sort of a "Super-BOINCTasks", if you will. The problem is, as I understand it, that this is a BOINC issue, not a SETI issue, and when the concept was developed the kinds of things we want to do now were never anticipated and are not easy to do. I don't know where the balance lies between capabilities and the time/effort required to expand them. But I do know this is not an issue that will go away. With the investments folks make in rigs and operating costs, they want the ability to customize and maximize their investment. Just my .002 ... Jim If I can help out by testing something, please let me know. Available hardware and software is listed in my profile here.
Joined: 27 Aug 12 Posts: 56 Credit: 127,133 RAC: 0
In a perfect world the credit system wouldn't "punish" you for doing certain types of work units. It shouldn't matter whether I work through 3 shorties in 10 minutes or a VLAR in 10 minutes; an equal amount of GFLOPs should be worth the same amount of credit. Then people wouldn't fuss about what kind of WU went to which device... Moving from MB 6 to 7, credit granted dropped. Moving from 7 to 8, it dropped again. Adding Green Bank, it dropped again, and it hasn't even bottomed out yet. Then you have Astropulse, which quadruples the RAC of my AMD cards. I know the whys associated with it all, but that doesn't change the fact that it's lame, and that is what people are really peeved about, not what kind of WUs get sent to their machines. Chris
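For what it's worth, the "equal GFLOPs, equal credit" ideal is BOINC's own cobblestone definition applied literally: one credit is 1/200 of a day on a 1 GFLOPS reference host, so under a pure FLOP-counting scheme credit would depend only on the floating-point work done, not on whether it arrived as three shorties or one VLAR. A small illustrative calculation; the device speed and duration below are made-up example numbers:

```cpp
#include <cstdio>

// Cobblestone definition: 200 credits per GFLOPS-day,
// i.e. credit = flops * 200 / (86400 * 1e9).
double cobblestones(double flops) {
    return flops * 200.0 / (86400.0 * 1e9);
}

int main() {
    // Example only: a device sustaining 100 GFLOPS for 10 minutes does the
    // same floating-point work whether it spent them on shorties or a VLAR ...
    const double gflops = 100.0, seconds = 600.0;
    const double flops  = gflops * 1e9 * seconds;
    // ... and under pure FLOP counting it would earn the same credit.
    std::printf("credit for %.0f s at %.0f GFLOPS: %.1f\n",
                seconds, gflops, cobblestones(flops));  // ~138.9
    return 0;
}
```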
Joined: 10 Mar 12 Posts: 1700 Credit: 13,216,373 RAC: 0
I'm still running SoG Build 3401 on main. Any real reason to upgrade it to SoG Build "whatever the latest version is"?