Building a 32 thread xeon system doesn't need to cost a lot


Profile Dr Grey

Send message
Joined: 27 May 99
Posts: 154
Credit: 104,147,344
RAC: 21
United Kingdom
Message 1776402 - Posted: 5 Apr 2016, 23:08:04 UTC

This article says 8-core Xeons are going pretty cheap on eBay right now.
I've got to say it's a tempting build if you don't mind the power bills.
ID: 1776402 · Report as offensive
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1776419 - Posted: 6 Apr 2016, 0:27:39 UTC

There is a thread over on Einstein@Home where someone wanted to build a CPU-only cruncher. They were initially looking to use older 5000-series Xeons. Some users talked them into a system with a pair of E5-2670s or E5-2660s, given how cheap they are. If you are lucky you can even manage to get them for under $50. Pair them with a workstation MB instead of a server one and you can probably spend even less for 16c/32t of crunching fun.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1776419 · Report as offensive
Sidewinder
Volunteer tester
Avatar

Send message
Joined: 15 Nov 09
Posts: 100
Credit: 79,432,465
RAC: 0
United States
Message 1776525 - Posted: 6 Apr 2016, 9:38:08 UTC - in response to Message 1776402.  
Last modified: 6 Apr 2016, 9:48:43 UTC

It's pretty much what I did with one of my crunchers. I actually bought (2) Xeon E5-2670s (Sandy Bridge-EP) and am looking to replace my old Q8300 with the 2nd Xeon. The most expensive piece was, and usually is, the motherboard.

One word of caution if you're building with one of these: many motherboards for this socket (especially the server-grade mobos) come with Narrow ILM sockets (e.g. this one) and thus require a Narrow ILM CPU cooler. There are very few (and fewer good) Narrow ILM coolers, so I actually spent a little more money and bought an AIO water cooler for it. If you buy a water cooler that uses the Asetek plate assembly, you can grab a cheap Narrow ILM retention kit from their eBay store (here).
ID: 1776525 · Report as offensive
Gamboleer

Send message
Joined: 3 Jun 06
Posts: 29
Credit: 12,391,598
RAC: 0
United States
Message 1776783 - Posted: 7 Apr 2016, 6:04:36 UTC
Last modified: 7 Apr 2016, 6:21:07 UTC

I recently built two of these, one with dual 2660's and one with dual 2670's. I mostly do Einstein@Home and wanted to put more CPU time on Gravitational Wave searches. All processors ran me about $60 each on eBay. Although I'm sure the vendors are all about the same, I bought from "gfsi". He accepted Best Offer on his CPUs when I offered about 90% of his asking price.

The 2670 system is on an ATX board, the ASUS Z9PA-D8. It goes in and out of stock on Amazon and Newegg. You do need the narrow CPU coolers on this one; I'm running dual Cooler Master Hyper 212 EVOs. Fans go on the outside of each radiator, and there's enough room to put a third in between. One CPU runs slightly hotter than the other because it's sucking the first one's exhaust, but both stay under 60°C running at 100%.

The 2660 system is on an ASRock EP2C602-4L/D16 board. This one is SSI/EEB form factor, which requires a special oversized case if you're doing a tower. The CPUs on this one are far enough apart that you have a wide choice of CPU coolers.

I picked these boards because they were dual E5 boards that were upgradeable to the V2 series, which I hope means I have an inexpensive upgrade path when the V2 CPUs get replaced and show up cheaply on eBay. Both boards are running fine with normal DDR3-1600 non-ECC gamer RAM (the fastest you can use with a V1 series E5; V2's can handle 1866).

Both boards can also handle dual PCI-E 2.0 16x full length graphics cards for some GPU computing, another factor in my decision.

I'm running Windows 10 Pro for the OS.

There is very little performance gain going from the 2660 to the 2670, but the 2670 has a 115W TDP versus 95W for the 2660. I would recommend the 2660 for a build.
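For anyone weighing the two chips, here's a quick back-of-the-envelope in Python. The base clocks are the published specs for these parts; "performance" is just cores × GHz, a crude proxy I'm assuming for illustration, not a measured benchmark:

```python
# Crude perf-per-watt comparison of the two Xeons discussed above.
# "Performance" = cores x base GHz -- an assumed proxy, not a benchmark.
chips = {
    "E5-2660": {"cores": 8, "base_ghz": 2.2, "tdp_w": 95},
    "E5-2670": {"cores": 8, "base_ghz": 2.6, "tdp_w": 115},
}

for name, c in chips.items():
    perf = c["cores"] * c["base_ghz"]  # core-GHz
    print(f"{name}: {perf:.1f} core-GHz, {perf / c['tdp_w']:.3f} core-GHz/W")
```

By this rough measure the two come out nearly identical per watt (the 2660 even edges slightly ahead), which lines up with the recommendation above.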
ID: 1776783 · Report as offensive
AMDave
Volunteer tester

Send message
Joined: 9 Mar 01
Posts: 234
Credit: 11,671,730
RAC: 0
United States
Message 1777177 - Posted: 8 Apr 2016, 15:18:22 UTC
Last modified: 8 Apr 2016, 15:19:34 UTC

For those multi-core maniacs out there, I would recommend you view the May 2016 edition of PC PRO magazine, pgs 84-85, "i7 or Xeon." It mentions Haswell, Broadwell, and Skylake chip families. It lists specs on a new chip --> Broadwell-EP Xeon-E5 2602 v4.

Also, it touches upon chips with 4, 8, 10, 12, 22, and wait for it...wait for it...26 cores. The latter is an upcoming Skylake Xeon, rumored to have 26 cores and 65MB of L3 cache, and it is slated for release in Q1 2017.

Imagine dropping two of these HTT beasts on a mobo with 4 graphics cards. That's 100+ SETI WUs crunching concurrently on a single box.

Of course, you'd need to win the lottery first.
ID: 1777177 · Report as offensive
Profile Dr Grey

Send message
Joined: 27 May 99
Posts: 154
Credit: 104,147,344
RAC: 21
United Kingdom
Message 1777209 - Posted: 8 Apr 2016, 17:24:16 UTC - in response to Message 1777177.  

Of course, you'd need to win the lottery first.


Well my money's on the Grand National for tomorrow. Fingers crossed.
ID: 1777209 · Report as offensive
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1777272 - Posted: 8 Apr 2016, 20:21:24 UTC - in response to Message 1777177.  

For those multi-core maniacs out there, I would recommend you view the May 2016 edition of PC PRO magazine, pgs 84-85, "i7 or Xeon." It mentions Haswell, Broadwell, and Skylake chip families. It lists specs on a new chip --> Broadwell-EP Xeon-E5 2602 v4.

Also, it touches upon chips with 4, 8, 10, 12, 22, and wait for it...wait for it...26 cores. The latter is an upcoming skylake Xeon, which is rumored to have 26 cores, and 65MB L3 cache. It is slated for release in Q1 2017.

Imagine dropping two of these HTT beasts on a mobo with 4 graphics cards. That's 100+ SETI WUs crunching concurrently on a single box.

Of course, you'd need to win the lottery first.

I hope they didn't bite on the E5-2602v4 5.1GHz rumor that has been going around for months. To fit in line below the E5-2603v4 it will likely be a 4c or 6c at 1.5GHz.

I would estimate the E5-2699v5 will be 26c/52t @ 2.1GHz, based on how the previous generations have progressed: the E5-2699v4 has 22c/44t @ 2.2GHz and the E5-2699v3 has 18c/36t @ 2.3GHz.
Running 100 SETI@home tasks at once with 0 cache would not be ideal. My 12c/24t server would often switch to backup projects when there were only minor SETI@home server hiccups.
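The extrapolation above can be made explicit. Assuming each generation adds the same core count and sheds the same base clock as the v3→v4 step (a linear trend is my assumption, nothing official):

```python
# Linear extrapolation of the E5-2699 line from the figures above:
# v3 = 18c @ 2.3GHz, v4 = 22c @ 2.2GHz.
gens = {"v3": (18, 2.3), "v4": (22, 2.2)}

cores_step = gens["v4"][0] - gens["v3"][0]          # +4 cores per generation
ghz_step = round(gens["v4"][1] - gens["v3"][1], 1)  # -0.1 GHz per generation

v5_cores = gens["v4"][0] + cores_step
v5_ghz = round(gens["v4"][1] + ghz_step, 1)
print(f"E5-2699v5 estimate: {v5_cores}c/{v5_cores * 2}t @ {v5_ghz}GHz")
# -> 26c/52t @ 2.1GHz
```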


However, back to the E5-2660/E5-2670 that were flooded onto the market and can be had for cheap.

Now if someone would just flood dual LGA2011 boards onto the market so the price on them would drop below $200, that would be nice. In speccing out the parts for these CPUs, the MB is the single most expensive part. Unless you go nuts on memory; then you can easily spend over $1,000 just on memory. 32GB & 64GB DIMMs are not cheap.
ID: 1777272 · Report as offensive
AMDave
Volunteer tester

Send message
Joined: 9 Mar 01
Posts: 234
Credit: 11,671,730
RAC: 0
United States
Message 1777303 - Posted: 8 Apr 2016, 22:20:13 UTC - in response to Message 1777272.  
Last modified: 8 Apr 2016, 22:20:39 UTC

I hope they didn't bite on the E5-2602v4 5.1GHz rumor that has been going around for months.

Y-u-u-p. The author did not indicate that it was a rumor. However, he did write that the yet-to-be-released Skylake Xeon was rumored to have 26 cores.

Imagine dropping two of these HTT beasts on a mobo with 4 graphics cards. That's 100+ SETI WUs crunching concurrently on a single box.

In hindsight, that would be wasteful. It would complete 100 WUs in what, a couple of hours, then it wouldn't be able to download any more until the next day.
ID: 1777303 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13740
Credit: 208,696,464
RAC: 304
Australia
Message 1777311 - Posted: 8 Apr 2016, 22:45:17 UTC - in response to Message 1777303.  

It would complete 100 WUs in what, a couple of hours, then it wouldn't be able to download any more until the next day.

Why not?
The 100 WU limit is per device so you would have 100 WUs per GPU and 100 WUs per CPU. As WUs are returned, you would download more to fill the cache to the limit of the cache or the server side limit; whichever comes first (which for most current hardware is the server side limit).
Of course, if you returned nothing but errors, then you would eventually be limited to 1 WU per day until you started returning valid work.
Grant
Darwin NT
ID: 1777311 · Report as offensive
Ulrich Metzner
Volunteer tester
Avatar

Send message
Joined: 3 Jul 02
Posts: 1256
Credit: 13,565,513
RAC: 13
Germany
Message 1777320 - Posted: 8 Apr 2016, 23:06:52 UTC
Last modified: 8 Apr 2016, 23:10:25 UTC

Yes but it's 100 WUs for all cores, not 100 WUs per core.
I get exactly 300 WUs for 2 GPUs and 4 cores, that's 100 per GPU and only 100 for my quad core...

[edit]
So it seems useless to have more than 100 cores, because they will stop processing at the slightest server hiccup.
Aloha, Uli

ID: 1777320 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13740
Credit: 208,696,464
RAC: 304
Australia
Message 1777323 - Posted: 8 Apr 2016, 23:13:17 UTC - in response to Message 1777320.  
Last modified: 8 Apr 2016, 23:16:57 UTC

Yes but it's 100 WUs for all cores, not 100 WUs per core.
I get exactly 300 WUs for 2 GPUs and 4 cores, that's 100 per GPU and only 100 for my quad core...

I still don't see the issue.
If you had a second CPU, you would get another 100WU.
Once you finish a WU, you can then download another. The 100WU limit per device is not a per day limit.


EDIT-
So it seems useless to have more than 100 cores, because they will stop processing at the slightest server hiccup.


If it were 100 cores on a single CPU, yes. But as it's on multiple CPUs then it's not an issue (if my understanding of the limits is correct).



Anyone with a multi-CPU system able to advise whether the CPU limit is 100WUs per system, or per CPU?
Grant
Darwin NT
ID: 1777323 · Report as offensive
AMDave
Volunteer tester

Send message
Joined: 9 Mar 01
Posts: 234
Credit: 11,671,730
RAC: 0
United States
Message 1777324 - Posted: 8 Apr 2016, 23:16:24 UTC - in response to Message 1777311.  
Last modified: 8 Apr 2016, 23:21:43 UTC

It would complete 100 WUs in what, a couple of hours, then it wouldn't be able to download any more until the next day.

Why not?
The 100 WU limit is per device so you would have 100 WUs per GPU and 100 WUs per CPU. As WUs are returned, you would download more to fill the cache to the limit of the cache or the server side limit; whichever comes first (which for most current hardware is the server side limit).
Of course, if you returned nothing but errors, then you would eventually be limited to 1 WU per day until you started returning valid work.

I didn't know it was so granular. I thought device = computer.

So, with 4 GPUs, each one utilizing a logical CPU, such a system would complete ~10,400 WUs/day, which in turn would be ~1,040,000 credits/day. Rough guesstimates, of course. Although it would depend on the GPUs installed, I shudder to think of the amount of electricity needed per day.

EDIT:
Scratch my guesstimating. I composed during the posting of the previous two messages.
ID: 1777324 · Report as offensive
Grant (SSSF)
Volunteer tester

Send message
Joined: 19 Aug 99
Posts: 13740
Credit: 208,696,464
RAC: 304
Australia
Message 1777325 - Posted: 8 Apr 2016, 23:19:33 UTC - in response to Message 1777324.  
Last modified: 8 Apr 2016, 23:21:30 UTC

I didn't know it was so granular. I thought device = computer.

To me device = CPU and GPU. It certainly is that way for GPUs, but now I'm not so sure about CPUs.

Hoping someone with a multi socket system will inform us.
Is the CPU limit per CPU or per system?
Grant
Darwin NT
ID: 1777325 · Report as offensive
AMDave
Volunteer tester

Send message
Joined: 9 Mar 01
Posts: 234
Credit: 11,671,730
RAC: 0
United States
Message 1777327 - Posted: 8 Apr 2016, 23:22:27 UTC - in response to Message 1777325.  

I didn't know it was so granular. I thought device = computer.

To me device = CPU and GPU. It certainly is that way for GPUs, but now I'm not so sure about CPUs.

Hoping someone with a multi socket system will inform us.
Is the CPU limit per CPU or per system?

Ditto
ID: 1777327 · Report as offensive
Ulrich Metzner
Volunteer tester
Avatar

Send message
Joined: 3 Jul 02
Posts: 1256
Credit: 13,565,513
RAC: 13
Germany
Message 1777328 - Posted: 8 Apr 2016, 23:23:54 UTC - in response to Message 1777325.  

Is the CPU limit per CPU or per system?

Indeed, a good question!
I didn't realize it's 2 CPUs.
Aloha, Uli

ID: 1777328 · Report as offensive
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1777332 - Posted: 8 Apr 2016, 23:32:11 UTC - in response to Message 1777323.  
Last modified: 8 Apr 2016, 23:40:25 UTC

Yes but it's 100 WUs for all cores, not 100 WUs per core.
I get exactly 300 WUs for 2 GPUs and 4 cores, that's 100 per GPU and only 100 for my quad core...

I still don't see the issue.
If you had a second CPU, you would get another 100WU.
Once you finish a WU, you can then download another. The 100WU limit per device is not a per day limit.

EDIT-
So it seems useless to have more than 100 cores, because they will stop processing at the slightest server hiccup.


If it were 100 cores on a single CPU, yes. But as it's on multiple CPUs then it's not an issue (if my understanding of the limits is correct).

Anyone with a multi-CPU system able to advise whether the CPU limit is 100WUs per system, or per CPU?

Actually, it's a hard limit of 100 CPU tasks, not 100 per CPU socket. When I was running my dual 6c/12t server and we transitioned from 100 GPU tasks to 100 GPU tasks per device per vendor, I asked if the CPU limit could be treated the same. However, I'm unsure that BOINC has a detection method to determine the number of CPUs. I believe BOINC only detects the number of cores/threads present in the system.
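Putting together what this thread settles on (100 GPU tasks per GPU per vendor, plus a flat 100 CPU tasks per host regardless of socket count), the arithmetic can be sketched like this. The function name and structure are mine for illustration; this is not actual BOINC server code:

```python
# Server-side task limits as described in this thread: 100 GPU tasks
# per GPU per vendor, and a flat 100 CPU tasks per host no matter how
# many CPU sockets it has. Illustration only, not BOINC source.
GPU_LIMIT_PER_DEVICE = 100
CPU_LIMIT_PER_HOST = 100  # not multiplied by socket count

def host_task_cap(gpus_by_vendor):
    """Total tasks a host can hold, e.g. gpus_by_vendor={'nvidia': 2}."""
    gpu_cap = sum(n * GPU_LIMIT_PER_DEVICE for n in gpus_by_vendor.values())
    return gpu_cap + CPU_LIMIT_PER_HOST

# Ulrich's host above: 2 GPUs + one quad-core CPU -> 300 tasks,
# matching the "exactly 300 WUs" he reports.
print(host_task_cap({"nvidia": 2}))  # -> 300
```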
ID: 1777332 · Report as offensive
AMDave
Volunteer tester

Send message
Joined: 9 Mar 01
Posts: 234
Credit: 11,671,730
RAC: 0
United States
Message 1777333 - Posted: 8 Apr 2016, 23:33:11 UTC - in response to Message 1777320.  
Last modified: 8 Apr 2016, 23:42:02 UTC

Yes but it's 100 WUs for all cores, not 100 WUs per core.
I get exactly 300 WUs for 2 GPUs and 4 cores, that's 100 per GPU and only 100 for my quad core...

I run 1 WU at a time on my GPU and, according to SETIspirit, the GPU completes 100+ WUs/day.

...we transitioned from 100 GPU tasks to 100 GPU tasks per device per vendor...

Not quite following you. Does this mean with 4 cards installed, the limit would be 400 (GPU) WUs/day?
ID: 1777333 · Report as offensive
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1777336 - Posted: 8 Apr 2016, 23:38:27 UTC - in response to Message 1777333.  
Last modified: 8 Apr 2016, 23:38:51 UTC

Yes but it's 100 WUs for all cores, not 100 WUs per core.
I get exactly 300 WUs for 2 GPUs and 4 cores, that's 100 per GPU and only 100 for my quad core...

I run 1 WU at a time on my GPU and, according to SETIspirit, the GPU completes 100+ WUs/day.

If you look at the Application details on one of your hosts you can see the number of tasks it has completed for each app in a given day. Number of tasks today
ID: 1777336 · Report as offensive
AMDave
Volunteer tester

Send message
Joined: 9 Mar 01
Posts: 234
Credit: 11,671,730
RAC: 0
United States
Message 1777343 - Posted: 8 Apr 2016, 23:46:38 UTC - in response to Message 1777336.  

Yes but it's 100 WUs for all cores, not 100 WUs per core.
I get exactly 300 WUs for 2 GPUs and 4 cores, that's 100 per GPU and only 100 for my quad core...

I run 1 WU at a time on my GPU and, according to SETIspirit, the GPU completes 100+ WUs/day.

If you look at the Application details on one of your hosts you can see the number of tasks it has completed for each app in a given day. Number of tasks today

970 WUs today?!? I've never felt so productive in my life :)
ID: 1777343 · Report as offensive
Profile HAL9000
Volunteer tester
Avatar

Send message
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1777344 - Posted: 8 Apr 2016, 23:47:50 UTC - in response to Message 1777343.  

Yes but it's 100 WUs for all cores, not 100 WUs per core.
I get exactly 300 WUs for 2 GPUs and 4 cores, that's 100 per GPU and only 100 for my quad core...

I run 1 WU at a time on my GPU and, according to SETIspirit, the GPU completes 100+ WUs/day.

If you look at the Application details on one of your hosts you can see the number of tasks it has completed for each app in a given day. Number of tasks today

970 WUs today?!? I've never felt so productive in my life :)

Well, that's obviously not one of your systems. I just grabbed one from the top hosts list, as you have your systems hidden.
ID: 1777344 · Report as offensive


©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.