How does BOINC decide to identify video card name?

Keith Myers
Message 1878146 - Posted: 14 Jul 2017, 0:33:35 UTC

How does BOINC decide which name to use for a host's video cards? I just added a new GTX 1060 6GB shorty card to my dual GTX 1070s. I assumed that BOINC would identify my host 8030022 as having (3) GTX 1070s, since the 1070s have been in there for quite a while, or, failing that, go by the most powerful card type. However, it seems that BOINC is now identifying the host as having (3) GTX 1060 6GB cards. Does BOINC just look at the last card installed? Is there any way to get BOINC to identify the host as having (3) GTX 1070s? I looked at the gpudetect.txt file in the SETI directory, but it has a 2016 date, so it obviously hasn't been updated recently, and judging by its contents it has nothing to do with identifying the card type.
Brent Norman
Message 1878147 - Posted: 14 Jul 2017, 0:36:27 UTC - in response to Message 1878146.  

From what I have seen, the bottom card is usually 'dominant'
Keith Myers
Message 1878154 - Posted: 14 Jul 2017, 1:19:16 UTC - in response to Message 1878147.  

From what I have seen, the bottom card is usually 'dominant'

Brent, do you mean the "bottom" card as the card in the physically lowest slot on the motherboard? Or do you mean the lowest capability card?
Brent Norman
Message 1878155 - Posted: 14 Jul 2017, 1:21:15 UTC - in response to Message 1878154.  

On the board.
HAL9000
Message 1878156 - Posted: 14 Jul 2017, 1:23:53 UTC

I believe it is ordered by the PCIe bus ID, which should be listed in the coproc_info.xml file. It will be something like this:
<pci_info>
   <bus_id>1</bus_id>
   <device_id>0</device_id>
   <domain_id>0</domain_id>
</pci_info>

I seem to recall that someone found their motherboard had the primary PCIe x16 slot last in the bus ID order for some reason. Perhaps that is standard practice?
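If you want to double-check the order yourself, a rough Python sketch along these lines should work. This is just an illustration: it assumes the <pci_info> layout above and that each GPU entry in coproc_info.xml also carries a <name> element, and the exact tags can vary between BOINC versions:

# Rough sketch: list the GPUs from coproc_info.xml in PCIe bus ID order.
# Assumes each GPU entry has <name> and a <pci_info> block like the one above;
# the exact layout can vary between BOINC versions.
import re
import xml.etree.ElementTree as ET

with open("coproc_info.xml") as f:
    text = f.read()

# The file may be an XML fragment, so drop any declaration and wrap it in a root.
text = re.sub(r"<\?xml[^>]*\?>", "", text)
root = ET.fromstring("<root>" + text + "</root>")

gpus = []
for entry in root.iter():
    pci = entry.find("pci_info")
    name = entry.findtext("name")
    if pci is not None and name:
        key = (int(pci.findtext("domain_id", "0")),
               int(pci.findtext("bus_id", "0")),
               int(pci.findtext("device_id", "0")))
        gpus.append((key, name))

# Lowest bus ID first, which seems to be the order the cards get reported in.
for key, name in sorted(gpus):
    print(key, name)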

I'm not sure what a gpudetect.txt file is used for. I don't have one on any of my hosts.
Keith Myers
Message 1878157 - Posted: 14 Jul 2017, 1:30:31 UTC - in response to Message 1878155.  

On the board.

OK, thanks. Then that is what is happening. The 1060 is in the lowest PCIe x16 slot (PCIe x16_3) and is running at x4 speed. The 1070s are in the highest and middle x16 slots (PCIe x16_1 and PCIe x16_2) and are running at x8 speeds. The shorty card fits .... sort of ... I had to pull the front panel USB 3.0 cable from the motherboard until I get the right-angle adapter.
Keith Myers
Message 1878164 - Posted: 14 Jul 2017, 1:44:41 UTC
Last modified: 14 Jul 2017, 1:48:16 UTC

Thanks Hal, I guess that is the file I was looking for. The 1060 is identified with <bus_id>33</bus_id> and the 1070s with <bus_id>36</bus_id> and <bus_id>37</bus_id>.

So in effect the 1060 has the "lowest" bus ID enumerator. Now it all makes sense.

Now for the part that doesn't make sense so far. I have been looking at the stderr.txt outputs for completed and validated tasks, and it seems that BOINC is mixing up the number of compute units assigned to each card type in the output. The 1070s have 15 CUs and the 1060 has 10 CUs, but the clock frequencies don't match up with the card names and the CUs are swapped.

Name:						 GeForce GTX 1070
  Vendor:					 NVIDIA Corporation
  Driver version:				 378.92
  Version:					 OpenCL 1.2 CUDA
  Extensions:					 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_copy_opts
  Max compute units:				 10
  Max work group size:				 1024
  Max clock frequency:				 1835Mhz
  Max memory allocation:			 1610612736
  Cache type:					 Read/Write
  Cache line size:				 128
  Cache size:					 163840
  Global memory size:				 6442450944
  Constant buffer size:				 65536
  Max number of constant args:			 9
  Local memory type:				 Scratchpad
  Local memory size:				 49152
  Queue properties:				 
    Out-of-Order:				 Yes
  Name:						 GeForce GTX 1060 6GB
  Vendor:					 NVIDIA Corporation
  Driver version:				 378.92
  Version:					 OpenCL 1.2 CUDA
  Extensions:					 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_copy_opts
  Max compute units:				 15
  Max work group size:				 1024
  Max clock frequency:				 1683Mhz
  Max memory allocation:			 2147483648
  Cache type:					 Read/Write
  Cache line size:				 128
  Cache size:					 245760
  Global memory size:				 8589934592
  Constant buffer size:				 65536
  Max number of constant args:			 9
  Local memory type:				 Scratchpad
  Local memory size:				 49152
  Queue properties:				 
    Out-of-Order:				 Yes
  Name:						 GeForce GTX 1070
  Vendor:					 NVIDIA Corporation
  Driver version:				 378.92
  Version:					 OpenCL 1.2 CUDA
  Extensions:					 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_copy_opts

betreger
Message 1878167 - Posted: 14 Jul 2017, 1:56:55 UTC

IIRC (which is questionable), I thought it was the lowest compute capability.
Zalster
Message 1878170 - Posted: 14 Jul 2017, 2:22:16 UTC - in response to Message 1878157.  

On the board.

OK, thanks. Then that is what is happening. The 1060 is in the lowest PCIe X 16 slot (PCIeX16_3) and is running at X4 speeds. The 1070s are in the highest and middle X16 slots (PCIe X16_1 and PCIe X16_2) and are running at X8 speeds. The shorty card fits .... sort of ... I had to pull the front panel USB 3.0 cable from the motherboard until I get the right-angle adapter.


Which right angle adapter are you getting?
Brent Norman
Message 1878172 - Posted: 14 Jul 2017, 2:26:56 UTC - in response to Message 1878164.  

Mixing up the CUs may be a big problem. A driver reinstall may be in order.
Keith Myers
Message 1878173 - Posted: 14 Jul 2017, 2:27:05 UTC - in response to Message 1878170.  


Which right angle adapter are you getting?

I'm getting the Type B version of this:
modDIY 90 Degree Angled USB 3.0 19-Pin 20-Pin Internal Header Mini Connector
Tom M
Message 1878174 - Posted: 14 Jul 2017, 2:27:38 UTC

When I had a GTX 750 Ti installed in the second video card slot alongside a GTX 1060, the website page decided I had two 1060s. So I think that supports this result, if the first video card slot is what counts as the "bottom" slot.

I can't speak to the rest of the discussion, but since I was unsuccessful in telling BOINC about the different parameters that should apply to each card, I did notice the 750 got stuck with a very long-running GPU task that would have run nicely on the 1060. (It may have been a GPUGrid task; if it was, that takes a 1080 12+ hours, and my poor 750 took something like 36 hours... :)

I know we should not mix non-matching RAM modules. I am beginning to think the same applies to GPU cards.

Tom
Tom M
Message 1878175 - Posted: 14 Jul 2017, 2:33:23 UTC - in response to Message 1878164.  

Now for the part that doesn't make sense so far. I have been looking at the stderr.txt outputs for completed and validated tasks and it seems that BOINC is mixing up the number of compute units assigned to the card type in the output. The 1070s have 15 CU and the 1060 has 10 CU. The clock frequencies don't match up with the card names and the CU's are swapped.


That sounds like either a real bug or just a reporting bug. If the report is accurate, then there are coding issues affecting the crunching. If it is only a reporting bug, then the question is whether it impacts how the reported WU is used/analyzed, etc.

Sounds like a potential "Furball".

Tom
Keith Myers
Message 1878176 - Posted: 14 Jul 2017, 2:36:37 UTC - in response to Message 1878172.  

Mixing up the CUs maybe a big problem. A driver reinstall maybe in order.

I wondered about that. I was worried I was reading the output wrong, but it has been consistent. I didn't reinstall the drivers; I just plugged in the card and Win10 came right up. I had to restart the system for SIV to pick up the new card, and I had to redo the NvidiaInspector command files because the cards got enumerated differently than before and no longer matched up. The 1060 is GPU_1 in SIV, GPU-Z and NVI now, where the 1070s used to be GPU_0 and GPU_1. Now the 1070s are GPU_0 and GPU_2.
Keith Myers
Message 1878181 - Posted: 14 Jul 2017, 2:56:08 UTC

OK, I just did a clean install of the Nvidia graphics drivers and it didn't change the enumeration one bit. I will take another look at the stderr.txt outputs once some new tasks started after the system restart have validated, and see if anything changed.
Brent Norman
Message 1878195 - Posted: 14 Jul 2017, 6:44:16 UTC - in response to Message 1878181.  

I just looked at one of your pendings, and it looks ok now.
Keith Myers
Message 1878231 - Posted: 14 Jul 2017, 15:44:19 UTC - in response to Message 1878195.  

I just looked at one of your pendings, and it looks ok now.

OK, I must be reading the stderr.txt output wrong. I still see GTX 1070 names listed with 10 CUs and an 1835 MHz clock frequency. They should be listed with 15 CUs and a 1683 MHz clock frequency. The GTX 1060 is listed with 15 CUs and a 1683 MHz clock frequency. I've looked at a dozen so far that were started and finished AFTER I reinstalled the drivers and rebooted.
Juha
Message 1878247 - Posted: 14 Jul 2017, 16:41:15 UTC - in response to Message 1878231.  

You are reading it wrong. The listing for each of the cards starts with Max compute units and ends with Extensions.
HAL9000
Message 1878253 - Posted: 14 Jul 2017, 17:03:20 UTC - in response to Message 1878164.  
Last modified: 14 Jul 2017, 17:11:07 UTC

Thanks Hal, that is the file I was looking for I guess. The 1060 is identified in <bus_id>33</bus_id> and the 1070s are identified in <bus_id>36</bus_id> and <bus_id>37</bus_id>

So in effect the 1060 has the "lowest" bus ID enumerator. Now it all makes sense.

Now for the part that doesn't make sense so far. I have been looking at the stderr.txt outputs for completed and validated tasks and it seems that BOINC is mixing up the number of compute units assigned to the card type in the output. The 1070s have 15 CU and the 1060 has 10 CU. The clock frequencies don't match up with the card names and the CU's are swapped.

[CLinfo listing snipped - it is quoted in full in message 1878164 above]

That is the output from CLinfo and it is correct. However, it looks like you assumed that the Name: value is the start of the data for each card, when each card's block actually starts at Max compute units:
Note: the CLinfo output values mean what they say. Max clock frequency is the maximum clock frequency defined by the manufacturer, not the current clock frequency. I would expect the current clock frequency to appear in a value called something like Current clock frequency:

Here is the full output from one of your tasks:
 OpenCL Platform Name:					 NVIDIA CUDA
Number of devices:				 3

  Max compute units:				 15
  Max work group size:				 1024
  Max clock frequency:				 1683Mhz
  Max memory allocation:			 2147483648
  Cache type:					 Read/Write
  Cache line size:				 128
  Cache size:					 245760
  Global memory size:				 8589934592
  Constant buffer size:				 65536
  Max number of constant args:			 9
  Local memory type:				 Scratchpad
  Local memory size:				 49152
  Queue properties:				 
    Out-of-Order:				 Yes
  Name:						 GeForce GTX 1070
  Vendor:					 NVIDIA Corporation
  Driver version:				 378.92
  Version:					 OpenCL 1.2 CUDA
  Extensions:					 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_copy_opts


  Max compute units:				 10
  Max work group size:				 1024
  Max clock frequency:				 1835Mhz
  Max memory allocation:			 1610612736
  Cache type:					 Read/Write
  Cache line size:				 128
  Cache size:					 163840
  Global memory size:				 6442450944
  Constant buffer size:				 65536
  Max number of constant args:			 9
  Local memory type:				 Scratchpad
  Local memory size:				 49152
  Queue properties:				 
    Out-of-Order:				 Yes
  Name:						 GeForce GTX 1060 6GB
  Vendor:					 NVIDIA Corporation
  Driver version:				 378.92
  Version:					 OpenCL 1.2 CUDA
  Extensions:					 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_copy_opts

  Max compute units:				 15
  Max work group size:				 1024
  Max clock frequency:				 1683Mhz
  Max memory allocation:			 2147483648
  Cache type:					 Read/Write
  Cache line size:				 128
  Cache size:					 245760
  Global memory size:				 8589934592
  Constant buffer size:				 65536
  Max number of constant args:			 9
  Local memory type:				 Scratchpad
  Local memory size:				 49152
  Queue properties:				 
    Out-of-Order:				 Yes
  Name:						 GeForce GTX 1070
  Vendor:					 NVIDIA Corporation
  Driver version:				 378.92
  Version:					 OpenCL 1.2 CUDA
  Extensions:					 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_copy_opts
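For anyone who wants to double-check their own stderr.txt the same way, a quick pairing script along these lines should do it. Just a sketch, assuming the "Key: value" layout shown above:

# Pair each CLinfo device block with its name: a block runs from
# "Max compute units" through "Extensions", so the "Name:" line belongs
# to the properties printed just above it.
with open("stderr.txt") as f:
    props = {}
    for line in f:
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        props[key.strip()] = value.strip()
        if key.strip() == "Extensions":  # last field of a device block
            print(props.get("Name"),
                  "- CUs:", props.get("Max compute units"),
                  "- Max clock:", props.get("Max clock frequency"))
            props = {}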

Keith Myers
Message 1878256 - Posted: 14 Jul 2017, 17:54:34 UTC

Thanks, Hal and Juha, for straightening me out. Yes, I had assumed each block started with Name. Now that I understand that each card's block of information starts with Max compute units, the information all aligns. The Max clock frequency is the manufacturer's base clock frequency, not the current clock frequency or the max Boost 3.0 clock frequency. The new 1060 is boosting on its own to over 2 GHz.

It is also running about 10° C lower in temperature compared to the 1070s. The 1070s are all reference blower designs and the 1060 is using the ACX 2.0 fan/heatsink design. The 1060 is the first custom fan design on a card I have bought since my GTX 460/560 Tis. I will see whether that fan/heatsink design lasts longer than the fans on the 460s, which is what put them out of their misery. At least the shorty 1060 card doesn't cover up the blower inlet on the 1070 it is nestled up next to on the motherboard.