Overclocking

Profile Skip Da Shu
Volunteer tester
Avatar

Send message
Joined: 28 Jun 04
Posts: 233
Credit: 431,047
RAC: 0
Message 75715 - Posted: 1 Feb 2005, 4:28:29 UTC
Last modified: 1 Feb 2005, 4:36:25 UTC

Can someone explain to me why overclocking is "bad" for BOINC projects? This came to mind again after reading the NATURE article based on CPDN results. In that article they mention that 1.6% of the data was invalid due to computer crashes or "overclocking".

I run two "overclocked" crunchers. In this case the overclocking is nothing more than cranking the FSB up to 165 MHz (the limit of the MB/BIOS) and installing 1 stick of 256 MB DDR400 memory (what I had on hand). The AMD XP 2000+ CPU is then recognized as an AMD 2600+. These things run 24/7 and seem to function just like a normal AMD XP 2600+ (Sandra agrees).

Since I am generally not getting errors or crashes, are these machines causing problems for any projects that I'm not aware of? If so, please explain how.

Thanx, Skip
- da shu @ HeliOS,
"A child's exposure to technology should never be predicated on an ability to afford it."
ID: 75715 · Report as offensive
Profile Keck_Komputers
Volunteer tester
Avatar

Send message
Joined: 4 Jul 99
Posts: 1575
Credit: 4,152,111
RAC: 1
United States
Message 75721 - Posted: 1 Feb 2005, 5:29:53 UTC
Last modified: 1 Feb 2005, 5:30:14 UTC

The FPU is usually the first section of the CPU to start throwing errors. These errors will normally not affect the OS, but do a heavy math problem, feed those results into a new problem, and the errors become significant. So a machine may appear to run perfectly normally yet still return bad work. There have even been cases at CPDN where people had to underclock computers to get them to run without errors.
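
To make the compounding concrete, here is a minimal Python sketch (my own illustration, not anything a project actually runs): a single slip in the least significant digits of an iterative calculation ends up changing the final answer completely. The update rule and the size of the injected error are arbitrary choices.

    # Toy model of error compounding: one tiny "FPU slip" injected into an
    # iterative calculation whose result feeds the next step.
    def iterate(x, steps, err_at=None):
        for i in range(steps):
            x = 3.9 * x * (1.0 - x)      # feed each result into the next calculation
            if i == err_at:
                x += 1e-12               # a single error in the least significant digits
        return x

    print(iterate(0.5, 200))             # clean run
    print(iterate(0.5, 200, err_at=50))  # same run with one tiny slip: a very different answer
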
BOINC WIKI

BOINCing since 2002/12/8
ID: 75721 · Report as offensive
tito
Volunteer tester

Send message
Joined: 28 Jul 02
Posts: 24
Credit: 19,536,875
RAC: 138
Poland
Message 75738 - Posted: 1 Feb 2005, 6:45:38 UTC - in response to Message 75715.  

> Can someone explain to me why overclocking is "bad" for BOINC projects? This
> came to mind again after reading the NATURE article based on CPDN results. In
> that article they mention 1.6% of the data is invalid due to computer crashes
> or "overclocking".
>

The way to check whether an overclocked CPU is working fine is very simple - just run prime95 in torture test mode for several hours. If there is a problem, you should back off the overclock settings. I use a Barton 2500+ overclocked to 2275 MHz (roughly a 3200+) and everything is OK. (A few tests were done - prime95 (3 different tests), memtest, and SETI classic with a test WU; the final data were the same as other machines generated.) So overclock your CPU without fear - just check it afterwards.
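
For anyone curious what a torture test is actually doing, here is a rough, hypothetical analog in Python (this is not prime95 itself, just the same idea): hammer the FPU with a computation whose correct answer is known, and flag any run that comes back wrong.

    # Hypothetical torture-test analog: repeat a fixed workload with a known
    # answer and report the first mismatch.
    import math
    import time

    def self_check(seconds=10):
        expected = math.pi / 4                     # arctan(1) = pi/4, the known answer
        deadline = time.time() + seconds
        passes = 0
        while time.time() < deadline:
            acc = 0.0
            for k in range(200_000):               # Leibniz series for pi/4
                acc += (-1.0) ** k / (2 * k + 1)
            if abs(acc - expected) > 1e-5:         # allow only the series truncation error
                return False, acc                  # something computed the wrong answer
            passes += 1
        return True, passes

    print(self_check())
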
Regards.

ID: 75738 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 75741 - Posted: 1 Feb 2005, 7:13:56 UTC - in response to Message 75715.  


> Since I am generally not getting errors or crashes, are these causing any
> projects problems that I'm not aware of? If so, please explain how.
>
> Thanx, Skip

You may not be generating crashes, but are you sure about errors? How do you know?

You may want to visit this thread here.
ID: 75741 · Report as offensive
Profile Paul D. Buck
Volunteer tester

Send message
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 75776 - Posted: 1 Feb 2005, 13:06:17 UTC - in response to Message 75715.  

Skip,

Dipping my oar in here ...

I am going out on a limb here with a first assertion: all BOINC science projects are "iterative" in nature. Which means that we do something over, and over, and over, and over, and over ... I hope you get the idea ...

So, next assertion: floating point numbers are inexact representations of number values and therefore have a certain amount of error "built in".

In theory, a computer will always return the exact same numbers when a calculation series is repeated. In practice this is not always the case, and it is most common to see divergence at the "end" of the calculated result (the least significant digits). Causes of the FPU not repeating include improper initialization, bias in results because of interaction with other running processes that use the FPU, and so forth.

Next assertion: the design of FPUs, though compliant with IEEE 754 and later standards, does not guarantee that the outputs of those FPUs will be identical. This is true even within processor families and steppings of those processors.

With all of this, what we see is chaotic behavior in the calculation of values. Meaning: noise.
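
A quick illustration of both assertions (my own example, not from the Glossary): floating point values are inexact, and the low-order digits of a result can change just because the same work is done in a different order.

    # Inexact representation: 0.1, 0.2 and 0.3 have no exact binary form.
    a = 0.1 + 0.2
    print(a == 0.3)            # False
    print(repr(a))             # 0.30000000000000004

    # Order-dependent rounding: the same four numbers, summed two ways.
    vals = [1e16, 1.0, -1e16, 1.0]
    print(sum(vals))           # 1.0 on a typical IEEE 754 double (exact answer is 2.0)
    print(sum(sorted(vals)))   # 0.0 -- a different order gives a different result
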

LHC@Home, for example, has no plans for a Macintosh application, not because Macs are "bad", but simply as a matter of pragmatic considerations because of the numerical consistency required. It is not that one answer is "better"; it is just that they have a better chance of getting comparable results by sticking with one basic architecture and compiler.

Ok, I have more in the Glossary under FPU, Floating Point, and the like that will give you a better feel for what I am talking about.

Conclusion (in Paul's opinion): overclocking is bad because the point is the science, not how many results we return. Returning more results with questionable equipment is worse (remember, Paul's opinion) because of the possibility of error. An overclocker's assurance that the results are accurate because of a test begs the question.

If the point is the science, then the accuracy, consistency, reliability, and repeatability of the process is of prime concern. Overclocking is done for one reason and one reason only: to increase the absolute number of answers. But it does this at the cost of decreasing everything else related to the processing of work. If thousands of engineers tell you that this processor should run at speed "x", how can I believe that a few hours of running program "y" is going to conclusively prove them wrong?

Anyway, if you are into credit, over-clocking to just before the point of the results being rejected is the way to go. If you are into the science, well, then you don't ...
ID: 75776 · Report as offensive
Scott Brown

Send message
Joined: 5 Sep 00
Posts: 110
Credit: 59,739
RAC: 0
United States
Message 75785 - Posted: 1 Feb 2005, 13:48:23 UTC

A very nice and thorough answer from Paul just about says it all... but I would add one point. Not all BOINC projects are created equal. That is, the computational demands of each project vary such that overclocking may pose considerably different risks of corruption. CPDN has been noted to be the most computationally intense of the active projects (thus JKeck's comment regarding the necessity of underclocking some machines). So, while Paul's comments regarding overclocking and result reliability are true, they are more true for some projects than for others. Thus, just because overclocking is not causing problems with SETI results does not mean that the same overclock will not cause invalid results on other projects.

ID: 75785 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 75808 - Posted: 1 Feb 2005, 16:48:17 UTC - in response to Message 75785.  

> A very nice and thorough answer from Paul just about says it all...but I would
> add one point. Not all BOINC projects are created equally. That is, the
> computational demands of each project vary such that overclocking may pose
> considerably different risks for corruption. CPDN has been noted to be the
> most computationally intense of the active projects (Thus JKeck's comment
> regarding the necessity of underclocking some machines). So, while Paul's
> comments regarding overclocking and result reliability are true, they are more
> true for some projects than for others. Thus, just because overclocking is
> not causing problems with SETI results does not mean that that same overclock
> will not cause invalid results on other projects.

I'm sure it depends on the calculations, as Paul suggests, and I'm sure it also depends on how subtle the result is. A calculation that is iterative (where the same calculation is done repeatedly on the last result) is going to be more susceptible to small errors.

To me, the real question is: just what are we doing when we overclock?

Here is the image from an earlier post (the waveform diagram itself is not reproduced here; it shows a 0-to-1 signal transition with the green, yellow, and red clock-sampling points described below):

The black line is a slightly exaggerated version of a signal transitioning from 0 to 1. The green line represents normal clocking.

A really good waveform is just beyond my graphic abilities, but there should be a little "ringing" at the top of the wave, and the edges aren't really that square, but it'll do.

The slope of the black line will change a bit with voltage and temperature. This is why you sometimes need to raise the voltage when you overclock -- and it is one of the reasons overclockers are often obsessed with cooling. Running cold makes the black line steeper, so you can move the green line to the left (by increasing the clock) and still "sample" at the top of the waveform.

The green line is where it is because the chip manufacturer has determined that, under virtually all circumstances (specified temperature range, etc.), the signal will be stable, and if you sample the waveform at that point you'll get a good solid reliable "1" or a good solid reliable "0".

The yellow line represents a machine that has been overclocked. This line is most of the way up the slope, and will usually be read as a "1" -- but sometimes the rise time will be a little slow and the value may be interpreted as a zero.

The red line is a machine that has been overclocked too much. What should be a "1" will almost always be interpreted as a "0" and the machine likely won't run.

The main point is that as you increase the clock speed, you are trading away margin for performance. Margin is what allows reliable operation as environmental variables change (voltage, temperature), plus quantum effects and whatever other randomness might sneak in.

Overclocking done right isn't necessarily even overclocking: you may be lucky and get a part that performs well at higher clock speeds. If you properly characterize that part you may find that it clocks reliably at 20% over the marked speed -- if you run it at 95% of the fastest reliable speed, you'll get the "free" performance boost and still have enough margin for good results.

If you find the absolute top speed and then stay there, you may fall off the edge once in a while and have a machine that is only mostly reliable.
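
Here is a toy Python model of that trade-off (my own sketch, not from the original image): the signal ramps from 0 to 1, the rise time wanders a little with voltage and temperature, and we sample it at different points along the ramp. The jitter and sampling points are made-up numbers chosen only to mimic the green/yellow/red lines.

    # Toy margin model: fraction of samples that read 0 when the signal should be a 1.
    import random

    def misread_rate(sample_time, nominal_rise=1.0, jitter=0.15, trials=100_000):
        errors = 0
        for _ in range(trials):
            rise = nominal_rise * (1.0 + random.gauss(0.0, jitter))   # environmental variation
            level = min(1.0, max(0.0, sample_time / rise))            # simple linear ramp
            if level < 0.5:                                           # sampled as a "0"
                errors += 1
        return errors / trials

    for t, label in [(1.2, "green: rated clock"),
                     (0.7, "yellow: overclocked"),
                     (0.3, "red: overclocked too far")]:
        print(f"{label:26s} misread rate ~ {misread_rate(t):.5f}")
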
ID: 75808 · Report as offensive
Profile Giordano Kaczynski
Avatar

Send message
Joined: 16 Jan 05
Posts: 12
Credit: 1,183
RAC: 0
Italy
Message 75810 - Posted: 1 Feb 2005, 16:51:03 UTC

Correct me if I'm wrong (and why).
Overclocking only increases the speed of computation; it does not make the CPU make errors in calculation IF the system is stable after overclocking.
For example, if you have an Athlon 1400+ and get it to work as an 1800+, and it is stable (you can run a scientific calculation on it that gets 100% CPU time for 40 hrs with no rebooting, hangs, or strange behaviour), then YOU HAVE an Athlon 1800+; in other words, IT IS THE SAME AS a CPU bought in a shop labelled Athlon XP 1800+.
Any result given by such an (overclocked) computer has the same validity as that of any other computer around here (overclocked or not).

> Next assertion, the design of FPUs, though compliant with IEEE 754, and later
> does not guarantee that the outputs of those FPUs will be identical. This is
> true even within processor families and steppings of those processors.

The behaviour of otherwise identical CPUs beyond the precision fixed by IEEE 754 is not regulated, as quoted. The error comparison between two computers must/should be made only within this standard, because the rest (in terms of deeper precision) is random, non-significant output (from whatever CPU, overclocked or not, you are talking about) that should not be relied on in a serious scientific calculation.

As long as an overclocked computer gives correct results within the boundaries of the desired precision, its results are equivalent to any other computer's.
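
In code terms, that is the difference between demanding bit-for-bit identical output and comparing within a tolerance. A small sketch (my own hypothetical helper, not anything a project validator is documented to do):

    # Compare two result sets within a chosen precision rather than bit-for-bit.
    import math

    def results_agree(a, b, rel_tol=1e-9, abs_tol=1e-12):
        return all(math.isclose(x, y, rel_tol=rel_tol, abs_tol=abs_tol)
                   for x, y in zip(a, b))

    host_a = [1.0000000000, 2.7182818284, 3.1415926535]
    host_b = [1.0000000001, 2.7182818285, 3.1415926536]   # differs only in low-order digits
    print(results_agree(host_a, host_b))                   # True: the same result for science purposes
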


<i>"I've seen things, you people wouldn't believe...hmmm... attack ships on fire off the shoulder of Orion.
I've watched C Beams glitter in the dark near the Tannhauser Gate.
All those moments, will be lost in time like tears in rain..."</i>
ID: 75810 · Report as offensive
Profile Giordano Kaczynski
Avatar

Send message
Joined: 16 Jan 05
Posts: 12
Credit: 1,183
RAC: 0
Italy
Message 75814 - Posted: 1 Feb 2005, 17:02:20 UTC - in response to Message 75776.  

Sorry for this second post.

> If the point is the science, then the accuracy, consistency, reliability, and
> repeatability of the process is of prime concern.

Yes, but not with our machines.
Our machines are not performing a scientific calculation in the sense that any individual result of ours will be THE RESULT.
We must process as much data as we can, to make the amount of processed SETI data as big as possible through distributed computation, so the data can be validated, by SERENDIP for example or whatever else they use. Yes, the results must be correct, but if you have no computation error in your BOINC client and your data is sent successfully to the SETI server, then you have done your work; that is the best you can do. Speed is the only thing you can improve in your computer to help the SETI program more; the rest is not your concern.
You get the data, your computer computes it as fast as it can with no errors in computation, you send it back. Your work is done.
Do it quickly. And not for credits, but for science.
<i>"I've seen things, you people wouldn't believe...hmmm... attack ships on fire off the shoulder of Orion.
I've watched C Beams glitter in the dark near the Tannhauser Gate.
All those moments, will be lost in time like tears in rain..."</i>
ID: 75814 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 75817 - Posted: 1 Feb 2005, 17:11:46 UTC - in response to Message 75810.  

> Correct me if I'm wrong (and why).
> Overclocking only increases the speed of computation, does not make the CPU
> make errors in calculation IF the system after overclocking is stable.

You're right, mostly.

The problem is that it isn't just "stable" or "unstable" -- there is a place in between that can be labelled "mostly stable".

So, for your example, the manufacturer says "1400" and you find that 1800 "appears not to fail" -- but 2000 is clearly dead.

It is entirely possible that 1800 throws an error on, oh, something like 1 in 50 trillion clocks. If I did the math correctly, that is one error every 8 hours or so. From there it depends on just what the error does: if we're incrementing the instruction counter, then that's a guaranteed crash, but if it's in the last couple of bits on a floating-point result, the system would be stable (and inaccurate).
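
The arithmetic checks out (assuming an 1800 MHz clock):

    clocks_per_error = 50e12              # "1 in 50 trillion clocks"
    clock_hz = 1.8e9                      # an "1800" class CPU
    hours = clocks_per_error / clock_hz / 3600
    print(hours)                          # about 7.7 hours between errors
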

The same machine running at 1700 might be 100% stable and 100% reliable.
ID: 75817 · Report as offensive
karthwyne
Volunteer tester
Avatar

Send message
Joined: 24 May 99
Posts: 218
Credit: 5,750,702
RAC: 0
United States
Message 75819 - Posted: 1 Feb 2005, 17:21:41 UTC - in response to Message 75810.  

> Correct me if I'm wrong (and why)

that goes for me as well.

Last I heard, which was a while ago, and at least as far as Intel is concerned, all the chips are manufactured as the fastest chip of that type. Therefore, assuming that a socket 775 P4 3.6 is the fastest of that line (and ignoring the EEs), all the 775s are made to be that chip. They then test the chips and label them at the speed where they run consistently.
Back in the P2 days, I know, most of the chips would actually work at the fastest speed, but the prices were too high and the supply was too high for the demand, so they marked them lower.
So if you bought a 200 MHz part, you could probably clock it to 550, but maybe not.

I presume that it is similar now. If you buy a chip labelled as 3.0 GHz, you might be able to get 3.6 (doubtful), and the closer you stay to 3.0, the more likely you are to find a stable speed. Of course, there is the possibility that your 3.0 chip really is just a 3.0, and that anything over that will cause failure.

Again, I could be wrong; they may have changed the manufacturing process entirely...

S@h Berkeley's Staff Friends Club
ID: 75819 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 75826 - Posted: 1 Feb 2005, 18:06:22 UTC - in response to Message 75819.  

> > Correct me if I'm wrong (and why)
>
> that goes for me as well.
>
> last i heard, which was a while ago, and at least as far as Intel is
> concerned, all the chips are manufactured as the fastest chip of that type.
> Therefore, assuming that a socket 775 P4 3.6 is the fastest of that line (and
> ignoring the EEs) all the 775s are made to be that chip. They then test the
> chips and label them as where they are consistent.

There are variations in the manufacturing process, and while I don't know what Intel (or AMD) gets as far as yields and so on, I do know that over the years the slower parts were generally parts that failed at the higher speeds.

... and that kind of makes sense when you see chips with "strapping" that locks the CPU to a certain speed/multiplier on the OUTSIDE of the part -- so they can apply the jumpers after they've tested the part.

If you really want to overclock and do it right, you should be basically following the procedures that the CPU maker used to qualify the parts in the first place.
ID: 75826 · Report as offensive
shady

Send message
Joined: 2 Feb 03
Posts: 40
Credit: 2,640,527
RAC: 0
United Kingdom
Message 75827 - Posted: 1 Feb 2005, 18:07:23 UTC - in response to Message 75776.  
Last modified: 1 Feb 2005, 18:15:37 UTC

>
>
> Anyway, if you are into credit, over-clocking to just before the point of the
> results being rejected is the way to go. If you are into the science, well,
> then you don't ...
>

To put it simply that is just wrong.

You have made a lot of incorrect assumptions in your post.

The first point is that just because a machine is "only" running at its stated level of performance, this does not guarantee that it is doing accurate calculations.

It is perfectly possible to have a non-overclocked processor that could be throwing out garbage results. The consistency of both the temperature and the voltage supplied can affect the actual calculations done; i.e. the change in ambient temperature versus the level of cooling, and the quality of the power supply versus how many devices it is powering, can make a standard machine throw out garbage on occasion.

The next point is that if a manufacturer produces a batch of processors and decides to put different labels on them, just because they have a different label does not mean they are any different. It can be cheaper for a manufacturer to produce thousands of one type of processor and then, for example, badge some as XP 3200, some as XP 3000, and some as XP 2500, rather than having a different manufacturing process for each type of processor. The ones with the lower badge can run just as fast as the ones with the higher badge, but market conditions mean they cannot sell every processor as the top model, so they have to badge some as a lower model to be sold at a reduced price.
(Intel Dothans would seem to be another example of this, because no matter what chip you buy, from 1.5 GHz to 2.1 GHz, they all seem to be happy running at around 2.4 GHz on standard cooling when used on a desktop board.)

Most people who overclock will pay more attention to the quality of the voltage supply and the temperature range of the CPU than someone who just buys a "shop"-built box and then sticks it in a room without any thought to the ventilation/heating/cooling of that room.

Most people who overclock and run any distributed processing project will submit their machines to various tests to confirm (as far as possible) that they are producing correct calculations. Running a prime95 torture test for 24 hours is a very common test. This test does calculations and compares the actual results against what it expected to get; if a machine can do that for 24 hours without any errors, then that is a very good sign that all is well.

I agree that each project can place different demands on a system, and I suggested to Dr Anderson, along with Carl from CPDN, that in my view it would be a good idea to have a test unit or batch of units for each project that could be run in a "standalone" loop, with the actual results compared to known correct results.
That way any participant of any project could be as sure as possible that their machine or machines are calculating the correct results for the projects they wish to contribute to.
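
A minimal sketch of what such a standalone check could look like (the file names, tolerance, and harness here are all hypothetical; nothing like this ships with the projects as described in this thread):

    # Compare a reference work unit's output against a known-good result file,
    # allowing only low-order floating point differences on numeric lines.
    import math

    def validate(output_path, reference_path, rel_tol=1e-7):
        with open(output_path) as out, open(reference_path) as ref:
            for line_no, (o, r) in enumerate(zip(out, ref), start=1):
                try:
                    if not math.isclose(float(o), float(r), rel_tol=rel_tol):
                        return f"mismatch on line {line_no}: {o.strip()} vs {r.strip()}"
                except ValueError:
                    if o != r:                     # non-numeric lines must match exactly
                        return f"mismatch on line {line_no}"
        return "results match the reference"

    print(validate("test_wu_output.txt", "known_good_output.txt"))
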

I think that at present we would all agree that they have more pressing things to deal with, but long term it would be a good idea and is a possibility.

Shady







<img src='http://www.boincsynergy.com/images/stats/comb-1527.jpg'>
ID: 75827 · Report as offensive
Profile Paul D. Buck
Volunteer tester

Send message
Joined: 19 Jul 00
Posts: 3898
Credit: 1,158,042
RAC: 0
United States
Message 75829 - Posted: 1 Feb 2005, 18:22:21 UTC - in response to Message 75827.  

Shady,

> You have made a lot of incorrect assumptions in your post .

And you are also making one. You assume that because you have no detected errors with whatever test you run, that certifies the CPU at that speed for any purpose. Because I tested your car by moving a bag of potting soil from the garden shop, I can use it to haul 4 cubic yards of dirt in one swell foop ...

> First point is that just because a machine is "only" running at its stated
> level of performance, this does not guarantee that it is doing accurate
> calculations.

And I never asserted that it did.

The bottom line: you believe that it is OK, I don't. I will submit that for projects like SETI@Home it is a lot of "who cares" because it is not really a meaningful project. Fun, true, but hardly worth worrying about.

I am still waiting to hear any case for overclocking that does not make an unfounded assertion that if the computer does not fail test "x" it must be Ok for purpose "y" ...
ID: 75829 · Report as offensive
shady

Send message
Joined: 2 Feb 03
Posts: 40
Credit: 2,640,527
RAC: 0
United Kingdom
Message 75835 - Posted: 1 Feb 2005, 19:21:26 UTC - in response to Message 75829.  

> Shady,
>
> > You have made a lot of incorrect assumptions in your post .
>
> And you are also making one. You assume that because you have no detected
> errors with whatever test you run, that it certifies the CPU at that speed for
> any purpose. Because I tested your car moving a bag of potting soil from the
> garden shop, I can use it to haul 4 cubic yards of dirt in one swell foop ...
>

Not sure which post you were reading, but nowhere in my post did I state that running any test on a PC is confirmation that it will perform OK in any other test. In fact I have said that it would be a good idea to have project-specific tests/test units to help check that a PC is producing valid results for each and every project that a participant wants to contribute to.

To put it simply, it is practically impossible to state that any PC (overclocked or not) will calculate every type of calculation in every type of circumstance perfectly. It is possible to check that a PC won't calculate something correctly, i.e. that it fails test x, y or z.

I would place more trust in a PC that has passed tests over one that has not even been tested but is just assumed to be OK because it is not overclocked.

> > First point is that just because a machine is "only" running at its stated
> > level of performance, this does not guarantee that it is doing accurate
> > calculations.
>
> And I never asserted that it did.
>
> The bottom line, you believe that it is ok, I don't. I will submit that for
> projects like SETI@Home it is lot of "who cares" because it is not really a
> meaningful project. Fun true, but hardly worth worrying about.
>
> I am still waiting to hear any case for overclocking that does not make an
> unfounded assertion that if the computer does not fail test "x" it must be Ok
> for purpose "y" ...


At the end of the day we each do what we feel is correct. I was just taking issue with your statement that if you are interested in the science you don't or should not overclock, because that is your personal opinion only and is not a statement of scientific fact.

My personal opinion is that any machine doing any distributed processing should be subjected to, and pass, a torture test first. Given that there is no extra test that can be done to check the computational accuracy of an overclocked machine versus a non-overclocked machine, logic suggests that every machine that can pass the tests available to us should be deemed as reliable as it is possible to check for. The closer the tests are to the work to be done, the better (hence my view on project-specific tests), but it is not very scientific to decide that some PCs are less reliable than others just because they have been overclocked.

Shady









ID: 75835 · Report as offensive
Profile Dunc
Volunteer tester

Send message
Joined: 3 Jul 02
Posts: 129
Credit: 2,166,460
RAC: 0
United States
Message 75840 - Posted: 1 Feb 2005, 20:03:55 UTC
Last modified: 1 Feb 2005, 20:04:22 UTC

Plus, if you are sending back b0rked results they will not get validated, hence no credit will be granted. People who overclock will probably check this more frequently than those who don't, for that very reason.

In SETI classic you could send back wildly b0rked results and still get credit for them.

Therefore I think that overclocking is ok as long as the person doing it is mindful of the potential pitfalls that come with it.

Dunc
ID: 75840 · Report as offensive
1mp0£173
Volunteer tester

Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 75842 - Posted: 1 Feb 2005, 20:13:43 UTC - in response to Message 75840.  


> Therefore I think that overclocking is ok as long as the person doing it is
> mindful of the potential pitfalls that come with it.

Sadly, the majority of overclockers probably aren't competent to make this determination.

You can read it on the various threads: the ongoing quest to shave off the last picosecond and the idea that imperfect results will be perfectly ok.

I think it's pretty safe to say that most overclockers stay with the very highest clock they found would work; they don't back down a step or two to get any kind of margin.
ID: 75842 · Report as offensive
Profile kinnison
Avatar

Send message
Joined: 23 Oct 02
Posts: 107
Credit: 7,406,815
RAC: 7
United Kingdom
Message 75848 - Posted: 1 Feb 2005, 20:32:54 UTC

I know I've attempted overclocking myself, with varying success. I was playing around with my Athlon 2800+ today, trying a 200 MHz FSB and different multipliers. I used prime95 every time and hit problems within minutes.
So it's back to its normal parameters; I checked with prime95 for 2 hrs and all was OK, so I'll take that as read!
I also have a Sempron 2400; I o/ced that a few days ago to a 200 MHz FSB (not able to alter the multiplier) and ran the stress test for several hours with no problems, so I left it like that. It's about 15-20% faster now.

I *have* found that CPDN is much more fussy than SETI about overclocking errors. I've had CPDN completely die on me and restart with a new work unit, wasting several days' CPU time. I've now automated a daily backup of the BOINC folders, just in case this happens again.

I would suggest anyone who wants to o/c their PC and run BOINC should run prime95's torture test for at least a few hours to make sure everything is OK. I used to think that as long as a PC didn't crash it was OK, but I've changed my mind now!


ID: 75848 · Report as offensive
Scott Brown

Send message
Joined: 5 Sep 00
Posts: 110
Credit: 59,739
RAC: 0
United States
Message 75849 - Posted: 1 Feb 2005, 20:42:46 UTC - in response to Message 75835.  

> To put it simply, it is practically impossible to state that any PC
> (overclocked or not) will calculate every type of calculation in every type of
> circumstance perfectly. It is possible to check that a PC won't calculate
> something correctly, i.e. that it fails test x, y or z.

While this is true, this does not appear to be the point that Paul was arguing. I believe that the point being made was that overclocking adds an additional risk of errors to any given system (yes...some normal or underclocked systems are abused in other ways that can cause errors--improper cooling, etc.--but so are some overclocked machines). I believe that Paul was suggesting that such risk (of errored results) outweighs the potential scientific benefits of overclocking (i.e., more quickly computed results).

I do, however, agree with you that some sort of project-specific verification test would be very beneficial.

ID: 75849 · Report as offensive
Profile Giordano Kaczynski
Avatar

Send message
Joined: 16 Jan 05
Posts: 12
Credit: 1,183
RAC: 0
Italy
Message 75852 - Posted: 1 Feb 2005, 20:57:19 UTC - in response to Message 75829.  
Last modified: 1 Feb 2005, 21:07:12 UTC

> I am still waiting to hear any case for overclocking that does not make an
> unfounded assertion that if the computer does not fail test "x" it must be Ok
> for purpose "y" ...

Does Intel (or AMD) run SETI@Home for 2-3 days and only then, if the results are OK, put the processor on the market?

> I believe that the point being made was that overclocking adds an additional
> risk of errors to any given system (yes...some normal or underclocked systems
> are abused in other ways that can cause errors--improper cooling, etc.--but so
> are some overclocked machines). I believe that Paul was suggesting that such
> risk (of errored results) outweighs the potential scientific benefits of
> overclocking (i.e., more quickly computed results).

When you overclock you purchase better memory, motherboard, cooler, and so on than those Intel or AMD use in their laboratory (their reference boards), so if you overclock well (with good parts) a 1400 overclocked to 1800 probably (if not surely) performs better than an 'original' 1800+ put in a standard system (with standard components). Why? Because a good overclock lets the processor work at a lower temperature (this is the big point).
The probability, for example, that an HP-brand desktop computer (made entirely by them, with their choice of parts) fails in computation is higher than that of a custom-built, well-cooled computer with state-of-the-art components and an overclocked CPU.




<i>"I've seen things, you people wouldn't believe...hmmm... attack ships on fire off the shoulder of Orion.
I've watched C Beams glitter in the dark near the Tannhauser Gate.
All those moments, will be lost in time like tears in rain..."</i>
ID: 75852 · Report as offensive