Monster GPU Cruncher Build

Profile Sutaru Tsureku
Volunteer tester

Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1664177 - Posted: 11 Apr 2015, 18:11:32 UTC
Last modified: 11 Apr 2015, 18:21:21 UTC

Maybe you remember my old thread '4x HD7990 + 2x E5-2630v2'?
Now it's time to continue and finish this build (if it's technically possible).

So, the 24-pin ATX power connector on the mobo can handle 250W.

The ASUS Z9PE-D8 WS mobo has a Molex 4-pin power connector (Molex 8981). (mobo pic) In the picture it sits at the bottom right; with the board mounted, that ends up at the top right.
I thought it was for supplying power from the mobo to other hardware (HDDs, fans ...).
No, it's an additional power input from the PSU to the mobo to feed the PCIe slots.

I searched the web and it looks like (from a forum):
Typical Molex 4-pin power connectors (Molex 8981) are rated at up to 11 Amperes per pin. That comes to 55W on the 5V pin and 132W on the 12V pin.

As for the PSU and the wires from the PSU to the connectors, that is more difficult. You should start by looking up the specs on your power supply to see how much 5V and 12V current it can supply. If it is not single-rail, then see how much it can supply to each of your modular cables.

For the cables themselves, if it uses the same wire as a typical Molex connector, that would be 18 AWG, which is typically used up to 16 Amperes for chassis wiring. So that is 80W on the 5V wire and 192W on the 12V wire. If you are lucky, they might have used 14 AWG wire on the cables, since they have two Molex connectors on each cable, and 14 AWG wire is typically used up to 32 Amperes. That would give you 160W on the 5V wire and 384W on the 12V wire.


AFAIK, all the PSUs I have seen have two Molex connectors on one cable.

So it looks like the mobo can take in up to 384W of additional power for the PCIe slots through this Molex connector. At 75W per slot, that is enough for 5.12 slots.
If I use 4 PCIe slots, that works out to 96W per slot.
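To make these numbers easy to re-check, here is a minimal sketch of the arithmetic, assuming the ratings quoted from that forum (11A per Molex pin, 16A for 18 AWG and 32A for 14 AWG chassis wiring) and the 75W-per-slot PCIe budget; nothing here is measured, it just reproduces the figures above.

```python
# Back-of-the-envelope check of the Molex/AWG wattage figures quoted above.
# The current ratings are the forum's assumptions, not measurements.
PCIE_SLOT_W = 75  # W a PCIe x16 slot may draw from the motherboard

def watts(volts: float, amps: float) -> float:
    """P = V * I for one pin or wire at the given current rating."""
    return volts * amps

ratings_amps = {
    "Molex pin (11 A)": 11,
    "18 AWG wire (16 A)": 16,
    "14 AWG wire (32 A)": 32,
}

for name, amps in ratings_amps.items():
    print(f"{name}: {watts(5, amps):.0f} W on 5 V, {watts(12, amps):.0f} W on 12 V")

# Best case from the quote: 14 AWG on the 12 V line -> 384 W extra for the slots.
extra_w = watts(12, 32)
print(f"{extra_w:.0f} W feeds {extra_w / PCIE_SLOT_W:.2f} slots at 75 W each,")
print(f"or {extra_w / 4:.0f} W per slot when shared across 4 slots.")
```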

So the 250W through the 24-pin mobo connector is not for the PCIe slots; it's for the mobo itself, the system RAM and so on.

Is this all correct - would the power delivery work with 2 Xeons and 4 HD7990s installed?


Other problem ...
If, for example, 2 HD7990s are installed in the first 2 PCIe slots, they sit too close together - less than 1mm of space. The GPU cards have a metal backplate mounted, I guess for better cooling, so the 3 fans per card can't pull in enough fresh air.
So at least 2 of the cards must go on PCIe slot extenders (slots #1 & #3).
But I'll use extenders for all cards, for better cooling.

I searched the web; there are PCIe slot extenders and risers.
What's the difference?
These are very thin cables. Can 75W go through them, or will the plastic melt immediately?

Has anyone, for example, connected a GPU card with such an extender and had it work (drawing the full 75W through the PCIe slot, with no additional PCIe 6/8-pin power connector from the PSU)?

I found 15cm cables, for example. If I used them, would they reduce the performance of the hardware when crunching OpenCL on the cards?


Other problem ...
I searched the web and saw someone write something like this:
A GPU card only works correctly if all its power comes from one PSU (PCIe slot and PCIe 6/8-pin).
If you have two PSUs and the GPU card gets its slot power from PSU#1 and its PCIe 6/8-pin power from PSU#2, the card won't work properly or can even be destroyed - even if you use two identical PSUs.

Is this correct?


I was thinking of using at least two PSUs, maybe even three:
2 HD7990s per PSU (1,200W each) and a separate PSU (1,000W) for the mobo.
I like to run PSUs at ~50% of their maximum (best efficiency, and I guess also the longest lifetime).

If the above is correct then this can't work.


I searched the web and found the 2,000 Watt Super Flower Leadex 80 Plus Platinum 8Pack Edt. PSU - AFAIK available since January '15.
AFAIK Super Flower is a UK company, but unknown to me.
Is it a good brand?

So far I haven't found a manufacturer website. Have you?

I found a website with news about it, and they said it doesn't have over-temperature protection. Is this correct? If so, is that protection really needed?

I calculate with 300W per HD7990, plus 250W for the mobo/system RAM/CPUs (2x 80W Xeons) and so on (while crunching OpenCL) ...
That's 1,450W total.

This would mean the 2kW PSU runs at 72.5% of its maximum.
Would that go well 24/7?

Worst case, the HD7990s draw their maximum of 375W under OpenCL, which adds another 300W.
Total 1,750W.
This would mean the 2kW PSU runs at 87.5% of its maximum.
Would that go well 24/7?
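For anyone who wants to re-run these two load scenarios with their own estimates, here is a small sketch using the figures from this post (300W typical and 375W worst case per HD7990, 250W for the rest of the system, a 2,000W PSU); these are the thread's estimates, not measured draws.

```python
# PSU load estimate for the planned build, using the estimates from this post.
PSU_CAPACITY_W = 2000   # Super Flower Leadex 2 kW
SYSTEM_BASE_W = 250     # mobo + RAM + 2x 80 W Xeons etc. (rough estimate)
GPU_COUNT = 4           # 4x HD7990 dual-GPU cards

def psu_load(watts_per_card: float) -> tuple[float, float]:
    """Return (total draw in W, load as a fraction of PSU capacity)."""
    total = SYSTEM_BASE_W + GPU_COUNT * watts_per_card
    return total, total / PSU_CAPACITY_W

for label, per_card in (("typical, 300 W/card", 300), ("worst case, 375 W/card", 375)):
    total, fraction = psu_load(per_card)
    print(f"{label}: {total:.0f} W total -> {fraction:.1%} of the 2 kW PSU")
```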

AFAIK, this PSU has a 5-year warranty.
But I don't want to send the PSU back every 12 months ... if it periodically goes up in flames.
... and what if it then damages other hardware as well?
And I guess after the 2nd warranty case the seller won't play along any more.


Enough for now from me. :-)

Thanks.
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1664214 - Posted: 11 Apr 2015, 19:47:07 UTC - in response to Message 1664177.  

Lot of information there....

2 PSUs... It can be done; others have done it.

There's the question of protection from surges. You would need to tie their grounds together, if I remember correctly. That would require splicing one Molex lead from each PSU together so they share a common ground.

Or go with a 1600 Watt PSU and not worry about it. (I'm sure I'm overlooking something as far as rails and such, but this is just a quick answer.)

You can use a PCIe riser. I used to do it.

But there is the question of whether the card needs more than 75W.

Better to go with a PCIe riser that takes extra power through a Molex connector and uses solid-state capacitors.

I ran some of those. Still have a few in the closet. Got them off eBay.

It's fine for lower-end GPUs but not really good for higher-end cards that need more bandwidth. (Example: I tried a riser with a GTX 780Ti. The 780Ti didn't like the riser and refused to work. A 750Ti (lower-end GPU) had no problem with the riser and worked fine. I did test the 780Ti by connecting it directly to the board, and it runs fine.)

my 2 cents

Zalster
Profile GTP

Joined: 5 Jul 99
Posts: 67
Credit: 137,504,906
RAC: 0
United States
Message 1664247 - Posted: 11 Apr 2015, 20:41:11 UTC

I am running 2 780s with riser cards. I would just make sure they are POWERED risers. I tried the standard non-powered version and, just like your finding, only my 750Ti would work.

All the best,
Aaron Lephart

TechVelocity.com
Profile Sutaru Tsureku
Volunteer tester

Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1669488 - Posted: 24 Apr 2015, 19:50:10 UTC
Last modified: 24 Apr 2015, 20:04:47 UTC

Thanks, I guess this will do the 'trick'.
I searched the internet for 'pcie x16 riser molex' and found, among others, this one (as an example). [EDIT: I'm now wondering why the first 11 contacts (on the right in the picture) are connected with wires - will the mobo also deliver power through them?]
It looks like it has 'solid state capacitors', right?


PSU, PSUs, PSU, PSUs?

I'm confused.

I wrote to a few PSU manufacturers - to name a few: Corsair, Seasonic, Enermax & LEPA, Thermaltake, be quiet!, and so on ...

I'm none the wiser now! *more confused*

Because I got answers like the following - or no answer at all:
It's possible to run a PSU without connecting it to a motherboard. No problem. Bridge the two pins ('paperclip trick') and you are fine.

The protection features (OPP, OVP, UVP, OCP, OTP, SCP) will still work, but their trigger points might be affected if something does go wrong.
The PSU doesn't push power to the components; it only allows power to be drawn from it.

One PSU for one PC.

A PSU only works properly if it's connected to a motherboard.


And so on, and so on ...


What does all this mean?

From the internet I know the 'paperclip trick' works, so the PSU will run.

BUT - is it a safe thing, or will the protection features react too late and the PC burn up (e.g. if the motherboard, a GPU card's PCB or chip, a MOSFET (voltage converter), a solid-state capacitor, and so on ... develops a fault such as a short circuit)?

What do you think? Is it safe to run 2+ PSUs in one PC? Would you build it?
Don't forget it's for SETI@home and the PC (workstation) will run 24/7 at full load - I guess at least 1,500W (1.5kW), or in the worst case up to 1,750W+ (1.75kW+)!

Thanks.
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65740
Credit: 55,293,173
RAC: 49
United States
Message 1669526 - Posted: 24 Apr 2015, 21:33:00 UTC - in response to Message 1664214.  

A board called 'Add2psu' can be used to add more power to a case; what helps here is having a place to mount a 2nd or even, as has been demonstrated, a 3rd PSU, incredible as that seems. I don't know of any case that will take 3 PSUs, but 2 PSUs?

Yeah, those do exist.

Sometimes the 2nd PSU is a 450W or 650W video-card-only PSU. I have the 650W PSU in question; it's mounted in a 5.25" drive bay in a slightly modded HAF-X, which is where this Thermaltake-designed 650W PSU is meant to reside. Each of its two rails can supply 30A of continuous power, and over-current protection starts at 35A and goes up to 45A, at least according to Thermaltake.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
Darth Beaver - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Australia
Message 1669584 - Posted: 24 Apr 2015, 23:17:05 UTC

You can run 2 PSUs with one not connected to the computer. All you do is short out pins 15 and 16 with a paper clip, but it works better if you use a piece of wire.
I have run my machine with 2 PSUs, one powering just the GPU, with no problems.

If I were you, I'd just get a 1,500-watt PSU with two 8-pin connectors, a 24-pin connector and 8 PCIe plugs, or use 2 PSUs with one running just the 4 GPUs - but then you will need at least a 750-watt unit with a single 12-volt rail for both PSUs.

Don't use PSUs that don't have a single 12-volt rail; they're just not up to the task.

Running your PSUs at 50% is not good. PSUs are meant to run at around 70%, otherwise they won't last as long. You can have too much power, and when it comes to degrading the PSU that works the same as not having enough power.

Try this link, as it will explain things better:

http://www.tomshardware.com/reviews/power-supply-psu-review,2916.html
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65740
Credit: 55,293,173
RAC: 49
United States
Message 1669618 - Posted: 25 Apr 2015, 0:17:48 UTC

In the case of my HAF-X I have to run two PSUs. One can be no more than 170mm long - ideally that's a 1000-1050W PSU, which could be 160mm long - so the second one is needed, a specialist PSU made for the purpose. I just don't feel like buying another case; I've got enough to get now and my list is not a small one either, but then I've got lots of time.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
Profile Sutaru Tsureku
Volunteer tester

Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1669685 - Posted: 25 Apr 2015, 5:40:31 UTC
Last modified: 25 Apr 2015, 6:18:10 UTC

Thanks to all.


Now it gets serious. Final sprint. *nervous* :-)


Already have:
2x Intel Xeon E5-2630v2 (CPU 80W TDP)
1x ASUS Z9PE-D8 WS (Motherboard)
4x AMD Radeon HD7990 (dual GPU card, out of stock)

Will order Monday:
1x 2000 Watt Super Flower Leadex 80 Plus Platinum 8Pack Edt. PSU - SF-2000F14HP(BK)
2x Corsair Vengeance Low Profile 16GB Kit DDR3 PC3-12800 CL8 - CML16GX3M4X1600C8 (4x4GB Kit. So 16GB/CPU, 32GB/system.)
2x Intel Thermal Solution - BXTS13A (up to 140W TDP)
1x Thermaltake Core X9 - CA-1D8-00F1WN-00 (PC Case)

HDD? No idea yet.

I guess I'll try it first with just the 2kW Super Flower PSU, measure at the wall plug and decide whether that's OK: under 1.5kW is OK; 1.8kW or more is not OK, and then I'll buy a 2nd PSU.

I guess I'll leave the CPUs mostly idle, just feeding the GPU apps - also so that 'turbo' can kick in, and to reduce the overall power consumption and extend the PSU's lifespan.
I'd also like to cool the motherboard's heatsinks (chipset & voltage converters) with the airflow from the CPU heatsinks.
So I guess the Intel BXTS13A should be OK (each CPU is up to 80W TDP, the heatsink is rated for up to 140W TDP).

I guess with this Thermaltake Core X9 case I can use the PCIe riser cables and still close the door.

Hints and tips are very welcome.

Thanks. :-)

[URLs for pictures and specs]
Profile Woodgie
Joined: 6 Dec 99
Posts: 134
Credit: 89,630,417
RAC: 55
United Kingdom
Message 1669792 - Posted: 25 Apr 2015, 14:46:28 UTC - in response to Message 1669685.  

Dirk, this is awesome and something I was looking at doing one day (need to save a boatload of money first...), with a couple of changes.

I was thinking of a 4U rack-mounted case (mainly because I have a rack to mount it in) and using Galaxy's single-slot 750Ti to fill the 7 slots (the price/performance seems right), which I can then upgrade to 4x whatever later when I save more money for beefier cards.

What OS are you going to run? I was thinking Linux... But then I'm a sucker for spending time trying to make things work...

Dreams, they're good to have.
~W

Profile Sutaru Tsureku
Volunteer tester

Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1669827 - Posted: 25 Apr 2015, 15:58:59 UTC
Last modified: 25 Apr 2015, 16:09:23 UTC

Thanks.
It's a dream which is coming true now. :-)
That's a lot of money for me, and I saved up for a long time.

AFAIK, at least 'Win Pro' is needed for multi-CPU systems. I will go with Windows; it's 'easier' (for me ;-) than Linux. The PC will only do SETI 24/7 anyway (if the project servers can feed it ;-), no other work on it.


Already have:
2x Intel Xeon E5-2630v2 (CPU 80W TDP)
1x ASUS Z9PE-D8 WS (Motherboard)
4x AMD Radeon HD7990 (dual GPU card, out of stock)

Will order Monday:
1x 2000 Watt Super Flower Leadex 80 Plus Platinum 8Pack Edt. PSU - SF-2000F14HP(BK)
2x Corsair Vengeance Low Profile 16GB Kit DDR3 PC3-12800 CL8 - CML16GX3M4X1600C8 (4x4GB Kit. So 16GB/CPU, 32GB/system.)
2x Intel Thermal Solution - BXTS13A (up to 140W TDP)
1x Thermaltake Core X9 - CA-1D8-00F1WN-00 (PC Case)
Update #01:
256GB Samsung 850 PRO 2.5" SATA 6Gb/s MLC Toggle - MZ-7KE256BW (SSD) (Would active cooling increase its lifespan? I've read that they warm up.)
Microsoft Windows 8.1 Pro 64 Bit German OEM - FQC-06942 (OS)

After reading more in the forum, I guess I'll go with a solid-state drive.
This 256GB Samsung 850 PRO SSD comes with a 10-year limited manufacturer warranty (before that's up, I guess I'll have built a bigger monster ;-).

Hints and tips are very welcome.

Thanks. :-)
Bruce
Volunteer tester

Joined: 15 Mar 02
Posts: 123
Credit: 124,955,234
RAC: 11
United States
Message 1669878 - Posted: 25 Apr 2015, 18:04:14 UTC

Dirk

I am thinking of doing a build very similar to yours.

How did you determine that you could use 4x HD7990 cards? According to AMD you cannot do that! CrossFire will only support 4 physical GPUs, and according to them, if you use more than a single card, CrossFire is mandatory. (BS, I think.)
Has someone else already used four of these cards?
I do not see why it would not work without Crossfire, but I have never used ATI/AMD cards before, and this will be my first time.

If you could let me and others know how you decided this, I'm sure we would all appreciate it.

Thanks.

Bruce
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65740
Credit: 55,293,173
RAC: 49
United States
Message 1670012 - Posted: 25 Apr 2015, 21:40:10 UTC
Last modified: 25 Apr 2015, 22:12:24 UTC

There are monster builds and there are builds built by Dr Frankenstein. ASAP there is a motherboard, two cases and a PSU coming this way: an Azza GT1, an NZXT Switch 810 in Matte Black, an EVGA P55 FTW motherboard and a Corsair TX series 950W PSU. The NZXT is hard to get, and so are the motherboard and the PSU. As soon as my credit line has $200-300 paid down on the bill, I'm starting on the parts that I'll need. 1st on the menu: 2 GTX 590 cards, a second NZXT Switch 810 case (in Matte Black, Gun Metal or, most likely, ugh, in White), an ASRock 2011-v3 Extreme3 motherboard with only 3 PCIe slots, 16GB of G.Skill DDR4 RAM (in blue to match the motherboard if possible) and a 5820K CPU; then a whole lot of Asus GTX 980 Strix cards (one at a time), an EVGA SuperNOVA G2 850W PSU, NZXT AIO CPU coolers, and misc parts and adapter rings will follow in short order.

So is this monster enough?
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65740
Credit: 55,293,173
RAC: 49
United States
Message 1670061 - Posted: 25 Apr 2015, 23:09:47 UTC - in response to Message 1670012.  
Last modified: 25 Apr 2015, 23:10:12 UTC

Scratch the EVGA; it's longer than I need. A Corsair AX860 at 160mm will do just fine, plus it's a Seasonic platform: it can do what the EVGA can do and is Platinum-rated too.

Now as to 1050W: I'll be using my Enermax Revolution85+ 1050W PSU once the main 24-pin cable connector is replaced. Or, if it doesn't have enough PCIe cables to support 4 GTX 980 cards - and it might not - then a Corsair AX-1200i will be needed. The 1200i can support 4 video cards out of the box (according to Newegg); according to Johnny Guru, the AX-1200 Gold supports only 3. Finding a shorter PSU that is as capable as the AX-1200i would be nice, so to that end I'll need to do some more research.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
Profile Sutaru Tsureku
Volunteer tester

Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1670130 - Posted: 26 Apr 2015, 0:17:29 UTC - in response to Message 1669878.  
Last modified: 26 Apr 2015, 0:40:28 UTC


AFAIK, the ASUS Z9PE-D8 WS motherboard works like this ...
If only PCIe x16 slots #1 + #3 + #5 + #7 are used, all of them run at v3.0 x16 speed (if the GPU card supports v3.0).
Slots #1 + #3 are physically connected to CPU#0 (which serves slots #1 - #4),
slots #5 + #7 are physically connected to CPU#1 (which serves slots #5 - #7).
So it's like having two mobos, each with one CPU and 4 GPU chips (in my case, I think).

AFAIK, NVIDIA's SLI and AMD's CrossFireX should be disabled for CUDA (NVIDIA only)/OpenCL crunching (AFAIK, new NVIDIA drivers do this automatically; it's my 1st ATI/AMD GPU, so I don't know whether I need to do it manually or whether it also happens automatically).
So BOINC will see and use 8 individual GPUs in my build.

I don't know if anyone has already tried this 4x HD7990 combination in one PC. ;-)
But in the past, SETI member '-= Vyper =-' ran 4x GTX 295s (also dual-GPU cards) with an AMD Phenom II X4 920 in one PC (on the same motherboard I used in 2009 with 4x GTX 260s and an AMD Phenom II X4 940 BE).

Will you use 4x AMD Radeon R9 295X2s (also dual-GPU cards)?
I would go with those, if I didn't already have the HD7990s. ;-)
Profile Sutaru Tsureku
Volunteer tester

Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1670215 - Posted: 26 Apr 2015, 4:05:33 UTC
Last modified: 26 Apr 2015, 4:15:10 UTC

I must correct my last statement.
If PCIe x16 slots #1 + #3 + #5 + #7 are used, they do not run at v3.0 x16 speed; they all run at v2.0 x16 (i.e. v3.0 x8) speed.


Already have:
2x Intel Xeon E5-2630v2 (CPU 80W TDP)
1x ASUS Z9PE-D8 WS (Motherboard)
4x AMD Radeon HD7990 (dual GPU card, out of stock)

Will order Monday:
1x 2000 Watt Super Flower Leadex 80 Plus Platinum 8Pack Edt. PSU - SF-2000F14HP(BK)
2x Corsair Vengeance Low Profile 16GB Kit DDR3 PC3-12800 CL8 - CML16GX3M4X1600C8 (4x4GB Kit. So 16GB/CPU, 32GB/system.)
2x Intel Thermal Solution - BXTS13A (up to 140W TDP)
1x Thermaltake Core X9 - CA-1D8-00F1WN-00 (PC Case)
1x Microsoft Windows 8.1 Pro 64 Bit German OEM - FQC-06942 (OS)
Update #02:
Instead of: 256GB Samsung 850 PRO 2.5" SATA 6Gb/s MLC Toggle - MZ-7KE256BW (SSD)
I will go with: 250GB WD VelociRaptor 64MB 2.5" SATA 6Gb/s - WD2500HHTZ (HDD) (2.5" drive in a 3.5" 'heatsink')

I guess the HDD would be 'better'.
I read a review of the 256GB Samsung 850 PRO SSD, and they said the 10-year warranty works out to: you can write 40GB to the drive every day and it shouldn't break.

My consideration:
One AP task lasts 1 hour on one GPU; at 3 tasks per GPU chip that's 6 tasks per card, so 24 tasks across all GPUs.
Each AP task is 8MB and is written to the SSD while downloading.
24x 8MB = 192MB/hour x 24 (hrs/day) = 4,608MB (4.608GB) per day.
This would mean the SSD should last at least 86.8 years?! ;-)
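Here is a small sketch of that endurance estimate, assuming the numbers above (24 AP tasks per hour at 8MB each, and the 40GB-per-day write budget the review derives from the 10-year warranty); the task size and rate are this post's estimates, not measured values.

```python
# SSD write-endurance estimate based on the AP task figures above (estimates).
TASKS_PER_HOUR = 24        # 3 tasks/GPU chip * 2 chips/card * 4 cards
TASK_SIZE_MB = 8           # one AstroPulse workunit
WARRANTY_GB_PER_DAY = 40   # daily write budget quoted for the 850 PRO warranty
WARRANTY_YEARS = 10

daily_writes_gb = TASKS_PER_HOUR * TASK_SIZE_MB * 24 / 1000   # MB/day -> GB/day
implied_endurance_years = WARRANTY_YEARS * WARRANTY_GB_PER_DAY / daily_writes_gb

print(f"Workunit downloads written per day: {daily_writes_gb:.3f} GB")
print(f"Implied endurance at that rate: {implied_endurance_years:.1f} years")
```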

Currently the limit is 100 tasks per ATI GPU, x8 = 800 tasks for my build - AFAIK.
This affects the size of the client_state.xml file.
IIRC, the BOINC client checks every minute whether new tasks are needed (seen in the past with debug entries enabled).
I don't know what else happens in the background - additional read/write activity on the HDD?

BTW, this calculation doesn't take into account that CPU tasks could run as well.

215.3MB on Raistmer's PC after a few days. I don't know which PC it was, or its performance (RAC). (BTW, thanks to Raistmer.)
Maybe the size of the related files scales with the RAC, and so on ...

So, hm, OK, and now, hm, yes, maybe an HDD would be 'better'?


Hints and tips are very welcome.

Thanks.
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1670309 - Posted: 26 Apr 2015, 10:29:42 UTC - in response to Message 1670215.  
Last modified: 26 Apr 2015, 10:36:29 UTC

It's this host: http://setiathome.berkeley.edu/show_host_detail.php?hostid=6914153 Definitely not an especially speedy one by modern standards.
HDD activity will definitely be RAC-related too. Some housekeeping is required after each task completion (even if checkpointing is effectively switched off). The more tasks in flight/cached, the bigger client_state.xml will be. And although modern BOINC versions keep a task's info in separate files inside its slot directory while it runs, on task completion that info has to be merged back into client_state.xml (and that is where running from a RAM drive can indeed be a good thing, if the host is stable enough that the possible data loss can be neglected). With a proportionally bigger number of devices you get a proportionally bigger number of tasks in flight plus tasks in cache, so client_state.xml will be ~proportionally bigger and will be written ~proportionally more often. That is, HDD load increases as N squared (N*N), where N is the number of GPUs.
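A rough sketch of that scaling argument: if both the size of client_state.xml and the number of task completions per day grow roughly linearly with the GPU count, the data rewritten per day grows roughly quadratically. The per-GPU figures below are placeholders for illustration, not measurements from any host.

```python
# Why client_state.xml traffic grows roughly as N^2 with the number of GPUs:
# file size ~ N (more cached/in-flight tasks) and rewrites/day ~ N (more
# completions), so bytes rewritten per day ~ N * N. Placeholder figures only.
STATE_KB_PER_GPU = 250        # assumed client_state.xml growth per GPU's tasks
COMPLETIONS_PER_GPU_DAY = 24  # assumed finished tasks per GPU per day

def daily_state_rewrites_mb(n_gpus: int) -> float:
    """Approximate MB of client_state.xml rewritten per day for n_gpus GPUs."""
    file_kb = n_gpus * STATE_KB_PER_GPU
    rewrites_per_day = n_gpus * COMPLETIONS_PER_GPU_DAY
    return file_kb * rewrites_per_day / 1024

for n in (1, 2, 4, 8):
    print(f"{n} GPU(s): ~{daily_state_rewrites_mb(n):,.0f} MB of client_state.xml rewrites per day")
```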
Profile Sutaru Tsureku
Volunteer tester

Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1670330 - Posted: 26 Apr 2015, 13:02:05 UTC
Last modified: 26 Apr 2015, 13:33:46 UTC

Thanks.


The 'problem' is that I'm a perfectionist.
Before I buy something I need to research it thoroughly.
And then I can't decide. ;-)

I searched the internet for 'Haltbarkeit von Festplatten' (durability of hard drives) and found:

This cloud company uses regular consumer HDDs rather than professional server HDDs to reduce costs.
But it's nevertheless safe, because of RAID.
http://www.pcwelt.de/news/Begrenzte_Haltbarkeit_von_Festplatten-Dauerbetrieb-8429918.html
Here the findings from the company Backblaze (the final conclusion; article from 24 January 2014, translated from German):
The survival rate after three years of continuous operation paints a similar picture to the annualised failure rate (AFR):
Hitachi is slightly ahead of WD, Seagate far behind.
The WD drives show a relatively high failure rate at the beginning; the surviving units then last a long time. At Seagate it is rather the opposite - few failures at the beginning,
but after about 20 months the survival rate drops significantly.

What home users can take from the Backblaze statistics is limited - the usage scenario in a desktop PC is completely different.
For NAS systems and server use in a company, the durability figures are more interesting.
However, when buying a single unit (or a small number), you never know whether you will see the failure rates Backblaze determined across several thousand units.
Apart from that, some models have since been replaced by successors and are hard to get.

Hitachi's hard disk drive business was taken over by Western Digital in early 2012, as was Toshiba's 2.5" manufacturing in Thailand.
Because of conditions imposed by the competition authorities, WD in return had to sell production facilities for 3.5" HDDs to Toshiba.
How this will influence the drive-lifetime statistics going forward remains to be seen. Backblaze intends to keep publishing its findings anyway.


BTW, I have a WD 500GB 'Green' 3.5" drive (IIRC, WD5000AZRX) in the J1900 PC.
From the article: the WD Green 3TB and Seagate LP (low power) 2TB models, for example, proved unsuitable for the cloud scenario.
They are designed to save power, so they constantly spin down and have to spin up again. This creates vibration in the chassis and reduces their lifespan.

Maybe I need to find out how to disable this function, since my PC runs 24/7. ;-)

http://www.tomshardware.de/Festplatten-Lebensdauer,testberichte-240582.html
Here the findings from the company Storelab (the final conclusion; article from 25 June 2010, translated from German):
The manufacturer with the most reliable hard drives in Storelab's evaluation is Hitachi.
Of the more than 200 drives sent in, none failed due to manufacturing or design faults;
all defects were caused by physical impact from the user.
Together with the longest running time and the best ratio of failure share to market share, Hitachi can therefore be considered the winner of the comparison.
Market leader Seagate lost ground in the rating primarily because of the failures of the 7200.11 series. The evaluation of the newer 7200.12 series is still pending.



This was the result of a few hours of internet searching.
Then I did a quick search in an online shop, sorting Hitachi SATA III drives by 'average access time' from lowest to highest:
4.16ms - 6,000GB (6TB) Hitachi Deskstar NAS 128MB 3.5" SATA 6Gb/s - 0S03840 (currently from €280.71; built specifically for 24/7 uptime)
5.5ms - 500GB Hitachi Travelstar Z7K500 32MB 2.5" SATA 6Gb/s - 0J26005 (currently from €44.96; but it's a laptop (2.5") drive, so maybe not such a long lifetime?)

Now I need to get away from the PC; I'll come back later and search the internet again for HDD vs. SSD, and so on ... ;-)


Hints and tips are very welcome.

Thanks.
Profile Sutaru Tsureku
Volunteer tester

Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1670339 - Posted: 26 Apr 2015, 13:33:33 UTC - in response to Message 1670215.  
Last modified: 26 Apr 2015, 13:47:36 UTC

I must correct my last statement.
If PCIe x16 slots #1 + #3 + #5 + #7 are used, they do not run at v3.0 x16 speed; they all run at v2.0 x16 (i.e. v3.0 x8) speed.
(...)

Hm, Oh well ...

From the specs:
4 x PCIe 3.0/2.0 x16 (dual x16 or quad x8) *2
2 x PCIe 3.0/2.0 x16 *2
1 x PCIe 3.0/2.0 x16 (x8 mode) *2
*2: This motherboard is ready to support PCIe 3.0 SPEC. Functions will be available when using PCIe 3.0-compliant devices. Please refer to www.asus.com for updated details.


From what I know:
Slots #1 - #4 are connected to CPU#0.
Slots #5 - #7 are connected to CPU#1.

I think this means (based on this picture):
If #1 + #3 are used, both run at v3.0 x16 speed.
If #1 + #2 + #3 + #4 are used, all 4 run at v3.0 x8 (i.e. v2.0 x16) speed.

If #5 + #7 are used, both run at v3.0 x16 speed. If #6 is used in addition, that slot's maximum speed is v3.0 x8 (i.e. v2.0 x16), and the other slots are not slowed down.

So if PCIe x16 slots #1 + #3 + #5 + #7 are used, they all run at v3.0 x16 speed.

The HD7990 has a PCIe v3.0 x16 connector. ;-)

:-)
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1670372 - Posted: 26 Apr 2015, 15:46:03 UTC - in response to Message 1670339.  

IMHO, if you build a BOINC-only host and stability is not an issue, the 'perfect' way would be to go with a RAM drive backed by an SSD (or a plain HDD, it doesn't really matter), flushed to disk every hour/two/three (depending on the real stability level). A RAM drive would give the best possible BOINC performance I/O-wise.

On the other hand, a RAM drive for your config would be too big (too costly). Hence I would look into ways to combine SSD storage for workunits/results with RAM-drive storage for client_state.xml and the slots directories.

This can be done via mount points under a Unix OS. AFAIK something similar is now possible under Windows with modern NTFS too. That is, different directories of the BOINC data folder would reside on different media. That would be the ultimate solution IMHO: as fast as a RAM drive where most writes go, as big as an HDD where storage is needed.
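As an illustration of the Windows side of this idea, here is a hedged sketch that moves the write-heavy slots directory of a BOINC data folder onto a RAM drive and links it back with an NTFS directory junction (mklink /J). The paths (R: as the RAM drive, D:\BOINCdata as the data folder) are assumptions, not anything from this thread; junctions only work for directories, and the BOINC client must be stopped before moving anything.

```python
# Sketch: keep bulky storage on the SSD/HDD but put the write-heavy BOINC
# 'slots' directory on a RAM drive, linked back via an NTFS junction.
# All paths below are assumptions for illustration only.
import shutil
import subprocess
from pathlib import Path

BOINC_DATA = Path(r"D:\BOINCdata")   # assumed BOINC data directory on SSD/HDD
RAM_DRIVE = Path(r"R:\boinc")        # assumed location on the RAM drive

def relocate_to_ram_drive(subdir: str) -> None:
    """Move one BOINC subdirectory to the RAM drive and junction it back."""
    src = BOINC_DATA / subdir
    dst = RAM_DRIVE / subdir
    shutil.move(str(src), str(dst))  # copy to the RAM drive, remove the original
    # 'mklink /J <link> <target>' creates an NTFS directory junction; mklink is
    # a cmd.exe built-in, so it is invoked through cmd.
    subprocess.run(["cmd", "/c", "mklink", "/J", str(src), str(dst)], check=True)

if __name__ == "__main__":
    RAM_DRIVE.mkdir(parents=True, exist_ok=True)
    relocate_to_ram_drive("slots")   # the slot dirs take most of the small writes
```

Whatever lives on the RAM drive is lost on a crash or power failure, which is exactly the stability trade-off mentioned above; the periodic flush back to disk would still have to be scripted separately.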
rob smith - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22191
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1670422 - Posted: 26 Apr 2015, 17:33:43 UTC

Your biggest bottleneck is not going to be the disk drives - even with the planned number of GPUs each running three tasks, there won't be much more than one write every second or so, which even a slow modern HDD will manage with ease. It will be the PCIe bus interface, which in the worst case of a heavily blanked AP task will be very heavily loaded.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?