Enabling Intel GPU with two Nvidia Titan XP installed

Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1949448 - Posted: 13 Aug 2018, 21:35:15 UTC - in response to Message 1948716.  

Tom, you bring up an interesting question: how do I force more parallel work unit tasks on a GPU? At the moment, my entire SaH experience has been one task per GPU; I've never seen more than one (as opposed to the CPU side, where you have one task per thread, and typically two threads per core on Intels).

Thanks!

Mark


Create an app_config.xml file in your hidden \ProgramData\BOINC\projects\Setiathome.... folder.

Include the following for three tasks per GPU.

<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <!-- Fraction of a GPU each task uses: 0.33 lets 3 tasks share one GPU. -->
      <gpu_usage>0.33</gpu_usage>
      <!-- CPUs reserved per GPU task. -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>


Discussion: the <gpu_usage> number is a bit counterintuitive: the smaller the fraction, the more tasks run at once. Virtually no one runs more than 3 tasks. If you want 2 tasks, put 0.50 in place of 0.33; for one task, put 1.0. The <cpu_usage> setting reserves one CPU per GPU task, so two cards running three tasks each will tie up six cores; if you don't have more than 6 CPU cores, your CPU-based production will go down.
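
If you want the two-task version, the same file becomes (only the <gpu_usage> value changes):

<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>0.50</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>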

Each task will also take more time (maybe 30% more?), but your overall production will increase, since you are running 3 tasks per card in place of 1.

Once the file is in place, you should restart the BOINC Manager, shutting down the running tasks as part of the restart.
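
(As an aside: if I remember right, recent clients can also pick up app_config.xml changes without a full restart, via Options -> Read config files in the BOINC Manager, but a clean restart is still the safest route.)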

In the same hidden directory you should also put command-line parameters in the MB*SOG.txt file for better performance; the MB*SOG documentation file is a good starting point. There will be an empty MB*SOG.txt file there if you haven't already modified it.
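
As a rough illustration only (the exact file name will match your installed SoG app, the parameter names here are from memory, and the values are starting guesses to tune rather than recommendations; see the SoG ReadMe for the real list), such a file holds a single line of switches, something like:

-sbs 256 -period_iterations_num 5 -tt 1500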

HTH,
Tom
A proud member of the OFA (Old Farts Association).
ID: 1949448
Jord
Volunteer tester

Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1949606 - Posted: 14 Aug 2018, 10:03:11 UTC - in response to Message 1949448.  

Include the following for three tasks per GPU.
The trouble here is that the built-in Intel GPU will also be asked to run three tasks at a time, which it cannot handle. app_config.xml is not yet flexible enough to let you specify which GPUs it applies to, and which to exclude or ignore.
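The exclusion has to be done at the client level instead, in cc_config.xml (in the hidden \ProgramData\BOINC folder). A minimal sketch, assuming the stock project URL; leaving out the optional <device_num> element excludes all GPUs of the given type:

<cc_config>
  <options>
    <!-- Keep SETI@home off the integrated Intel GPU. -->
    <exclude_gpu>
      <url>http://setiathome.berkeley.edu/</url>
      <type>intel_gpu</type>
    </exclude_gpu>
  </options>
</cc_config>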
ID: 1949606
Mark Seeger

Joined: 16 May 99
Posts: 47
Credit: 16,558,494
RAC: 116
United States
Message 1949684 - Posted: 14 Aug 2018, 21:30:45 UTC - in response to Message 1949606.  

Dear Jord,

Thank you greatly for this info! My friend and I will spend the next week or two experimenting with this and learning how to optimize across our various GPU systems (everything from NVIDIA Titan Xps on the high end to NVIDIA Tegras on the low end).

One thing we (he) learned was the following (and perhaps this belongs in a different thread, to open it up to a broader audience): when experimenting with making a BOINC project, it appears one can specify GPU resources (registers, cores, etc.). That seems to indicate that one can create a BOINC project specifically optimized for a specific GPU, or even a class of GPUs. He is still experimenting and learning.
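(If what he found is the BOINC server's plan_class_spec.xml mechanism, a plan class aimed at one class of GPU might look roughly like the sketch below; the element names are from the server docs as I remember them, and every name and value here is illustrative, not tested:)

<plan_classes>
  <plan_class>
    <!-- Hypothetical plan class for newer NVIDIA cards. -->
    <name>cuda_pascal</name>
    <gpu_type>nvidia</gpu_type>
    <cuda/>
    <!-- Require compute capability 5.0+ and CUDA 8.0+ (illustrative values). -->
    <min_nvidia_compcap>500</min_nvidia_compcap>
    <min_cuda_version>8000</min_cuda_version>
    <!-- Approximate GPU memory the app will use, in MB. -->
    <gpu_ram_used_mb>1024</gpu_ram_used_mb>
  </plan_class>
</plan_classes>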

We are building a mega cluster specifically for SETI (we share this passion), and are keen to optimize the hardware we choose as we scale.

To that end, do you know whom we should contact if we are keen to donate toward creating SETI@Home apps specifically optimized for certain GPUs? Whatever efforts we make, we'd like to share with everyone for the benefit of all. (I've done this before, when I hired someone at my own cost to enable GPU compute via OpenCL for S@H on the Mac OS X platform some years ago; I no longer recall who I worked with.)

Thanks!

Mark
ID: 1949684
Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5124
Credit: 276,046,078
RAC: 462
Message 1949807 - Posted: 15 Aug 2018, 4:59:40 UTC - in response to Message 1949684.  


We are building a mega cluster specifically for SETI (we share this passion), and are keen to optimize the hardware we choose as we scale.


If you look at "Computing" -> statistics -> top computers list on the Seti website you will notice that the top performers are heavy on the Gpu hardware and all over the place on the cpu's.

The only thing they have in common is a tendency to use the Lunatics beta distro under Linux, or the "Secret Sauce" implementation under Linux with NVIDIA GPUs.

If you look at the top participant, he doesn't have any especially fast machines, but he has a LOT of machines.

HTH,
Tom
A proud member of the OFA (Old Farts Association).
ID: 1949807
Mark Seeger

Joined: 16 May 99
Posts: 47
Credit: 16,558,494
RAC: 116
United States
Message 1949808 - Posted: 15 Aug 2018, 5:01:52 UTC - in response to Message 1949807.  

Tom, I've read a lot about "Secret Sauce" but I can't work out what it actually is, and it isn't even officially recognized by SaH. Can you explain?

Thank you,
Mark
ID: 1949808
rob smith
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22228
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1949831 - Posted: 15 Aug 2018, 6:26:55 UTC

"Secret Sauce", or "Special Sauce" is a CUDA application for processing Multi-Beam tasks.
It was initially developed by Petri33, and uses some rather clever tricks to achieve very a high performance, about double that of the best SoG application.
It does however suffer from producing a rather high level of "inconclusive" tasks, but Petri and others have been working on bringing this down to an acceptable level.
It is only available for (fairly recent) nVidia GPUs running Linux. The reason for only being available for Linux is that it uses features of the operating system that are not available in Windows to achieve very high data synchronisation rates.
It is not "for general use", as it is so limited in its application, has a high inconclusive rate, and is still being developed.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1949831
Nathan Leefer

Joined: 25 Jul 18
Posts: 1
Credit: 1,487,405
RAC: 0
United States
Message 1950066 - Posted: 16 Aug 2018, 6:20:43 UTC - in response to Message 1949831.  

Hi Rob and Tom. I'm the friend Mark refers to. Some more background to this discussion: around the time Mark and I started talking about SETI, I happened to have a developer kit of the credit-card-sized Jetson TX2 sitting around: https://developer.nvidia.com/embedded/buy/jetson-tx2.

It's a decent little ARM-based CPU, but it also has a remarkably powerful little NVIDIA GPU integrated into the SoC. Naturally, we were interested in how it would perform on SETI. There was no official SaH app for a GPU+ARM processor, but after a lot of digging around the SVN SETI repository I found what seemed to be the most up-to-date, CUDA-enabled client source in the sah_v7_opt/Xbranch folder. With a bit of fiddling with the makefiles I managed to compile an app that runs on this board under the anonymous platform mechanism. Is this the secret-sauce CUDA+Linux app you're referring to?

The absolute compute performance of the board is nothing remarkable; it will probably stabilize at between 3k and 4k RAC. The power consumption, on the other hand, is only 20 W with everything running.
ID: 1950066
rob smith
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22228
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1950068 - Posted: 16 Aug 2018, 6:43:44 UTC

Nathan, the source you have found is a retired version (the clue is in the version number, "v7..."). It will probably trash any task you try to run, but only after a good few hours of running :-(
You need to look for a v8.xxxx source with CUDA support. The trouble is that in recent years there has been nobody around doing any serious ARM development for SETI, so you will have to take the most recent code in that branch and bend it to suit.
Good luck!
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1950068