amd vs intel

Message boards : Number crunching : amd vs intel

Previous · 1 · 2 · 3 · 4 · Next

Profile Tern
Volunteer tester
Joined: 4 Dec 03
Posts: 1122
Credit: 13,376,822
RAC: 44
United States
Message 189228 - Posted: 15 Nov 2005, 15:08:13 UTC - in response to Message 189216.  

For the majority of WUs the variation is of course much smaller, but since re-running the same WU on the same computer can give over 30% variation, how would you expect to calibrate anything?


This is very bothersome information... how does the flop counting come out for these WUs? Does that value remain within the expected range of accuracy in spite of the variation in CPU time?
ID: 189228
Ingleside
Volunteer developer

Joined: 4 Feb 03
Posts: 1546
Credit: 15,832,022
RAC: 13
Norway
Message 189247 - Posted: 15 Nov 2005, 16:43:07 UTC - in response to Message 189228.  
Last modified: 15 Nov 2005, 16:57:19 UTC

This is very bothersome information... how does the flop counting come out for these WUs? Does that value remain within the expected range of accuracy in spite of the variation in CPU time?


BOINC alpha uses the normal SETI application; it's only Seti_Enhanced, running in beta, that uses flops-counting.


Due to the long-running WUs, few users, and even fewer who run v5.2.6 or later, it's very difficult to find WUs in beta where at least two results have used flops-counting. But being paired off with Tetsuji Maverick Rai, who runs an optimized SETI application, indicates up to a 1% difference in claimed credit.

Ah, found one example where no one is running an optimized application:
60.6510490617153
60.6537686345174
Difference: 0.004484%

Another example:
62.3131028703056 (p4-ht, XP)
62.0348941663368 (p3, win2k)
62.3130984694757 (p4-ht, linux, optimized)
Difference for p3: 0.448%
Difference for unoptimized-XP and optimized-linux: 0.0000071%
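As an aside, the percentage differences quoted above can be reproduced with a few lines of Python (a sketch; the claimed-credit values are the ones listed in this post, and the difference is taken relative to the lower claim):

```python
def pct_diff(a: float, b: float) -> float:
    # Percentage difference between two claimed credits,
    # relative to the lower of the two.
    return abs(a - b) / min(a, b) * 100

# Claimed credits quoted above.
print(pct_diff(60.6510490617153, 60.6537686345174))  # ~0.004484
print(pct_diff(62.0348941663368, 62.3131028703056))  # ~0.448
print(pct_diff(62.3131028703056, 62.3130984694757))  # ~0.0000071
```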


The method Seti_Enhanced uses to "count flops" isn't perfect, but with up to 1% variation, and probably more commonly 0.01%, there's little point in using more CPU time to make it more accurate.


The only problem I've seen with flops-counting in Seti_Enhanced is that it only claims 60 Cobblestones, but if there are no other problems that's easily fixed by multiplying by a constant.
ID: 189247
Profile Tern
Volunteer tester
Joined: 4 Dec 03
Posts: 1122
Credit: 13,376,822
RAC: 44
United States
Message 189251 - Posted: 15 Nov 2005, 17:02:28 UTC - in response to Message 189247.  
Last modified: 15 Nov 2005, 17:03:48 UTC

BOINC alpha uses the normal SETI application; it's only Seti_Enhanced, running in beta, that uses flops-counting.


It sounds like we have two possible situations with flops-counting. If it is indeed repeatable, such that the same computer running the same WU comes up with the same number, and different computers on different OSes also come up with the same number (within some reasonable standard), then the "accuracy" of the number is irrelevant. It doesn't matter if the application says "this was 1000 flops" or "100,000 flops" as long as the conversion from that figure into credits is done such that the hosts get about the same credit per CPU-hour as they were getting (on a project-wide average, not a specific machine) with the benchmark approach. This eliminates the Linux/Windows benchmark discrepancy problem, the need for an optimized BOINC when running an optimized science app, etc. It answers the project-dependent, platform-independent benchmarking issues quite nicely. It does NOT reduce the possibility of cheating as effectively as Paul's proposal does, and _by_itself_ would not necessarily allow the reduction of redundancy (although that would be up to the individual project to make that call).
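A minimal sketch of that conversion idea, assuming a single project-wide scaling constant (the figures and function names are invented for illustration; this is not actual BOINC code):

```python
def calibrate(benchmark_credits, counted_flops):
    # Choose k so that credit = k * flops reproduces, on average,
    # the credit the old benchmark scheme granted for the same results.
    return sum(benchmark_credits) / sum(counted_flops)

# Made-up historical data: benchmark-era credit grants and the flop
# counts the flops-counting app would have reported for those results.
k = calibrate([240.0, 260.0, 250.0], [75.0, 80.0, 78.0])

def credit(flops):
    # Project-wide conversion from a counted flop figure to credit.
    return k * flops
```

The constant only has to be set once per project, so the "accuracy" of the raw flop figure really is irrelevant as long as it is repeatable.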

If the flops-counting is NOT repeatable, if it varies anywhere near as much as you say the BOINC-alpha CPU times do, then while it may be an improvement over the benchmarks, it's not going to have _as_ significant an effect. This is something that will definitely need to be tested. My impression, however, from what I've heard, is that repeatability has been quite good...

I suppose the ideal would be to have flops-counting eliminate all the benchmark uncertainties, and then use those figures for the calibration of hosts, as in Paul's proposal, to eliminate cheating. If the flops-counting truly accomplishes all that it _should_, then the calibration could all be on the server end, with no additional load on the hosts. Hmm. One step at a time! :-)

EDIT:: Ingleside added the repeatability info after I'd posted this. I think the question of repeatability has been answered.
ID: 189251
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 189253 - Posted: 15 Nov 2005, 17:07:29 UTC - in response to Message 189228.  

For the majority of WUs the variation is of course much smaller, but since re-running the same WU on the same computer can give over 30% variation, how would you expect to calibrate anything?


This is very bothersome information... how does the flop counting come out for these WUs? Does that value remain within the expected range of accuracy in spite of the variation in CPU time?

If we were running in a pure, single-tasking environment (i.e. DOS), then the variation would likely go away.

Why? Because we have a system that is managing memory, and those of us who don't have dedicated crunchers have other applications running (my single cruncher is my workstation; sometimes it is running several operating systems at once).

We can have a perfect claimed-credit calculation, but it is expensive. The science application must keep track of some key operation(s).

The more you keep track of, the better the claimed credit -- and the more time you spend on accounting and less on science.

I'm not even sure that "flops" are a good measure. Simple operations like multiply are not going to take the same time as cosine.

So the big question is: is the credit calculation fair? Is it a reasonable approximation?
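One way to address the "not all flops are equal" point is to weight the count by operation type. A hypothetical sketch (the weights are invented for illustration; they are not what SETI or BOINC actually use):

```python
# Invented relative costs: a multiply counts as one "flop",
# a cosine as many more, reflecting its iterative evaluation.
FLOP_WEIGHTS = {"add": 1.0, "mul": 1.0, "div": 4.0, "cos": 20.0}

class FlopCounter:
    def __init__(self):
        self.total = 0.0

    def count(self, op: str, n: int = 1) -> None:
        # Accumulate the weighted cost of n operations of type op.
        self.total += FLOP_WEIGHTS[op] * n

counter = FlopCounter()
counter.count("mul", 1000)
counter.count("cos", 50)
print(counter.total)  # 2000.0
```

The accounting cost is one addition per instrumented call site, which illustrates the trade-off above: finer-grained counting means more time on accounting and less on science.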
ID: 189253
W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19075
Credit: 40,757,560
RAC: 67
United Kingdom
Message 189255 - Posted: 15 Nov 2005, 17:17:11 UTC

I've done one SetiB unit, 1054604, recently with Tetsuji. As far as I can tell, I got exactly the same results as he did; he's on a P4 2.8 HT running Linux 2.6.14 and mine is a Pentium M 1.86 running Win XP Pro. But on claimed credits I claimed 251.96 and he claimed 75.62.
ID: 189255
Profile Tern
Volunteer tester
Joined: 4 Dec 03
Posts: 1122
Credit: 13,376,822
RAC: 44
United States
Message 189256 - Posted: 15 Nov 2005, 17:21:20 UTC - in response to Message 189255.  
Last modified: 15 Nov 2005, 17:22:55 UTC

But on claimed credits I claimed 251.96 and he claimed 75.62.


It appears that your claimed credit was based on benchmark and not flops... You're running V5.2.2 - isn't that prior to the code needed for reporting flops? If so, it indicates that the constant needed to multiply flop claims by is somewhere between 3 and 4...
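For what it's worth, the ratio implied by the two claims quoted earlier in the thread (251.96 benchmark-based vs 75.62 flops-based) does land in that range:

```python
# Ratio of the benchmark-based claim to the flops-based claim
# for the same workunit, as reported above.
ratio = 251.96 / 75.62
print(round(ratio, 2))  # 3.33, i.e. between 3 and 4
```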
ID: 189256
Ingleside
Volunteer developer

Joined: 4 Feb 03
Posts: 1546
Credit: 15,832,022
RAC: 13
Norway
Message 189260 - Posted: 15 Nov 2005, 17:27:36 UTC - in response to Message 189251.  

It does NOT reduce the possibility of cheating as effectively as Paul's proposal does, and _by_itself_ would not necessarily allow the reduction of redundancy (although that would be up to the individual project to make that call).


As long as a majority of users don't cheat, the system works, since two users must cheat on the same WU for the cheat to be successful.


Anyway, since it looks like the variation is around 1%, I'd expect it's fairly easy to add a test like:
"If highest claimed credit > 1.05x lowest claimed, increase userid_cheater_count of highest claimer"
"If userid_cheater_count > N, deny credit for the next 2N results, and set userid_cheater_count = N-1"
Use N = 50 or something...
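Those two rules could be sketched roughly like this (purely hypothetical server-side logic; the data structures and names are invented for illustration, not taken from any real BOINC server code):

```python
N = 50  # threshold suggested above

def apply_cheat_check(claims, cheater_count, penalized_results):
    # claims: {userid: claimed_credit} for one workunit's results.
    lowest = min(claims.values())
    top_user = max(claims, key=claims.get)
    # Rule 1: flag the highest claimer if they exceed 1.05x the lowest claim.
    if claims[top_user] > 1.05 * lowest:
        cheater_count[top_user] = cheater_count.get(top_user, 0) + 1
        # Rule 2: past N flags, deny credit for the next 2N results
        # and drop the count back just below the threshold.
        if cheater_count[top_user] > N:
            penalized_results[top_user] = (
                penalized_results.get(top_user, 0) + 2 * N
            )
            cheater_count[top_user] = N - 1
```

Since a claim must exceed the lowest by more than 5% before anyone is flagged, honest hosts with the roughly 1% variation seen above would essentially never trip it.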

Well, it's unlikely any project will use this automated punishment system, but at least with this system anyone who tries to cheat once too often very likely loses much more than they gained by trying to cheat. :evil-grin:
ID: 189260
W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19075
Credit: 40,757,560
RAC: 67
United Kingdom
Message 189261 - Posted: 15 Nov 2005, 17:30:10 UTC - in response to Message 189256.  

It appears that your claimed credit was based on benchmark and not flops... You're running V5.2.2 - isn't that prior to the code needed for reporting flops? If so, it indicates that the constant needed to multiply flop claims by is somewhere between 3 and 4...


Yeah, I was running V5.2.2 and didn't change to the latest until that unit had finished. With the long crunch times I didn't want to f**k things up in the middle of it.

ID: 189261
Ingleside
Volunteer developer

Joined: 4 Feb 03
Posts: 1546
Credit: 15,832,022
RAC: 13
Norway
Message 189263 - Posted: 15 Nov 2005, 17:32:51 UTC - in response to Message 189255.  

I've done one SetiB unit, 1054604, recently with Tetsuji. As far as I can tell, I got exactly the same results as he did; he's on a P4 2.8 HT running Linux 2.6.14 and mine is a Pentium M 1.86 running Win XP Pro. But on claimed credits I claimed 251.96 and he claimed 75.62.


None of the systems used flops for deciding claimed credit on this WU, meaning any variation is due to the BOINC benchmark and the optimized application.
ID: 189263
Profile ML1
Volunteer moderator
Volunteer tester

Joined: 25 Nov 01
Posts: 20331
Credit: 7,508,002
RAC: 20
United Kingdom
Message 189280 - Posted: 15 Nov 2005, 18:28:47 UTC - in response to Message 189251.  
Last modified: 15 Nov 2005, 18:32:33 UTC

... with flops-counting. If it is indeed repeatable, ... and different computers on different OSes also come up with the same number (within some reasonable standard), then the "accuracy" of the number is irrelevant. ... as long as the conversion from that figure into credits...

That works well enough provided that the flops NOT counted are a small proportion of the total, or that the uncounted work stays proportionately constant.

If, for example, the library flops are not counted and then someone goes and optimises the libraries better, we're back to the skewed credit claims again.

This also leaves it open for someone to inflate their credit claims by whatever percentage tolerance is given to trap such "bad guy" cheaters.

... I suppose the ideal would be to have flops-counting eliminate all the benchmark uncertainties, and then use those figures for the calibration of hosts, as in Paul's proposal, to eliminate cheating. ...

There's the question of whether all flops should be counted equal. A double float multiply takes a lot longer than a single float add. And then there's trig and other iterative functions...

The main argument for calibration against a reference golden "cobblestone computer" is that you can gain (traceable) calibration within a project and across different projects without instrumenting anything. You're referencing against a real physical system that can be (easily) measured. OK, so this involves a little work server-side, but it saves on additional project development (for the majority that need it ;-) ) and kills credit-claim cheating dead.

Regards,
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
ID: 189280
Profile Mr.Pernod
Volunteer tester
Joined: 8 Feb 04
Posts: 350
Credit: 1,015,988
RAC: 0
Netherlands
Message 189288 - Posted: 15 Nov 2005, 19:02:07 UTC - in response to Message 189255.  

I've done one SetiB unit, 1054604, recently with Tetsuji. As far as I can tell, I got exactly the same results as he did; he's on a P4 2.8 HT running Linux 2.6.14 and mine is a Pentium M 1.86 running Win XP Pro. But on claimed credits I claimed 251.96 and he claimed 75.62.

you should ignore Tetsuji's machines on Enhanced Beta, he is working on optimizing the application.
if you check the output in the result-ID details, you will see you are running the "4.09" standard application under a non-optimized 5.2.2 BOINC core client and he is testing the "4.09 TMR rev. 10.5" app under the 5.2.4 optimized BOINC core client.
ID: 189288
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13746
Credit: 208,696,464
RAC: 304
Australia
Message 189294 - Posted: 15 Nov 2005, 19:31:53 UTC - in response to Message 189247.  
Last modified: 15 Nov 2005, 19:32:59 UTC

The method Seti_Enhanced uses to "count flops" isn't perfect, but with up to 1% variation, and probably more commonly 0.01%, there's little point in using more CPU time to make it more accurate.

That's better than most "recognised" benchmarks; multiple runs of the same benchmark on the same hardware, starting from the same image, can vary by as much as 5%, and apparently it's usually in the 2-3% range.
Yet the number of people who almost wet themselves with excitement when they tweak their RAM etc. to the limit and get a 0.5% increase in performance is almost disturbing.


EDIT- fixed typos.
Grant
Darwin NT
ID: 189294
Pete Mason

Joined: 5 Jun 02
Posts: 9
Credit: 4,700
RAC: 0
United Kingdom
Message 190754 - Posted: 19 Nov 2005, 21:37:32 UTC

I am a HomeBuilder. I (Hobby) Build PCs for myself and others.

http://setiathome2.ssl.berkeley.edu/fcgi-bin/fcgi?cmd=view_feedback&id=30596

That means that I am, like, a 46 year old guy who goes for grunt; not flash or speed 8-) And I got 630 units on Classic that I can't claim on BOINC. Bastard.

And I have been running Seti for some time.
I admit to being an AMD fan. K6/2 500 anyone LOL.

I ran a 1.4 TBird (shortly after its release) with an ABit MB and 512MB of memory, and it did a Classic Seti unit in 6 hrs (see above).

My new unit is an AMD Sempron 2800 on a Gigabyte MB with 768MB of memory. It does the same units in 3 hrs.

The TBird was 1.4GHz. The Sempron is 1.67GHz.

Average is 1.53GHz. Show me an Intel Pentium at that speed that will do a unit in that time 8-). But Intel kicks my ass on 3DMark. Ferrari or truck?

Seti is actually a benchmark in its own right, although probably not by design ROFL.

Benchmarks like 3DMark, SiSoft Sandra et al. are pointless on a desktop home PC, because variables like on-chip cache, memory latency and a whole host of other "my ride is better than your ride" BS get in the way.

Wanna go fast in Multimedia? Getta Pentium.

Wanna do lotsa work powerfully? Getta AMD.

Regards

http://setiathome.berkeley.edu/view_profile.php?userid=325721

Pete Mason

PS Need GMail?

I have 100 available.




ID: 190754
kevint
Volunteer tester

Joined: 17 May 99
Posts: 414
Credit: 11,680,240
RAC: 0
United States
Message 190874 - Posted: 20 Nov 2005, 6:25:23 UTC - in response to Message 182827.  

so maybe slower comps claim for more credits?

mcbeth



I do think this is correct. I run a couple of fast PCs and a couple of slow ones, and the fast ones always claim less than the slow ones.
It all comes out in the wash.
ID: 190874
kevint
Volunteer tester

Joined: 17 May 99
Posts: 414
Credit: 11,680,240
RAC: 0
United States
Message 190875 - Posted: 20 Nov 2005, 6:29:00 UTC - in response to Message 187196.  


I think it is quite simple. Faster CPUs seem to claim less credit than slower CPUs. Does this mean the AMD chips are slower? Don't know, since I only have a couple of AMDs and most of my crunching machines are Intel P4s. I have noticed that the Intel Xeon chip seems to have a much higher RAC than most other chips. Just a thought.

Methinks the point has been lost. Don't care if you are optimized on anything. Don't care what speed processor you have. Don't care how the credit is applied.

The POINT of this topic is that Intels notoriously get lower average claimed credits than AMD processors. Plainly and simply put, I notice a huge low bias for Intel processors and a slightly higher AMD claimed credit. Now you may wish to speculate on the futures market for mangos, but what I am pointing out, and would like explained (by somebody who actually does the work for SETI or isn't blowing smoke up my butt), is how come there is such a discrepancy. Certainly I am not the only one to notice this discrepancy. It's really easy to see, too: go to your results pages and look at a completed result. If there is a result with a significantly higher claimed credit, I would bet that it's an AMD processor. This isn't rocket science, it is statistics. Not one person has posted a valid reason for this. I have read how optimizing..., how credit is applied..., etc. What I haven't seen is why. Here is an example: http://setiweb.ssl.berkeley.edu/workunit.php?wuid=34274464

Guess which results were from an Intel vs. an AMD. If you use what I said, it's pretty clear which is which.


ID: 190875
W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19075
Credit: 40,757,560
RAC: 67
United Kingdom
Message 190886 - Posted: 20 Nov 2005, 8:20:28 UTC - in response to Message 190754.  

.... My new unit is an AMD Sempron 2800 on a Gigabyte MB with 768MB of memory. It does the same units in 3 hrs.

The TBird was 1.4GHz. The Sempron is 1.67GHz.

Average is 1.53GHz. Show me an Intel Pentium at that speed that will do a unit in that time 8-). But Intel kicks my ass on 3DMark. Ferrari or truck?

Seti is actually a benchmark in its own right, although probably not by design ROFL......


Try a Pentium M (Dothan). A 1.86GHz does a unit in 1hr:15min average. ;-)
ID: 190886
Astro
Volunteer tester
Joined: 16 Apr 02
Posts: 8026
Credit: 600,015
RAC: 0
Message 190897 - Posted: 20 Nov 2005, 10:57:32 UTC
Last modified: 20 Nov 2005, 10:58:49 UTC

Andy, Andy, Andy

AMD Baby

AMD64 3700+ 939 socket sandiego OCed

146289559 2,779.52 11.11 14.56
145938389 2,645.75 10.57 15.53

46 min each my friend

lol, just had to, :), not bad for a $200 processor
ID: 190897
W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19075
Credit: 40,757,560
RAC: 67
United Kingdom
Message 190991 - Posted: 20 Nov 2005, 15:15:35 UTC - in response to Message 190897.  

Andy, Andy, Andy

AMD Baby

AMD64 3700+ 939 socket sandiego OCed

146289559 2,779.52 11.11 14.56
145938389 2,645.75 10.57 15.53

46 min each my friend

lol, just had to, :), not bad for a $200 processor


Damned expensive in my book; my CPU came from a laptop dropped down three flights of concrete stairs, i.e. £0. But the mobo did cost £160. I got the CPU because I was able, with one of those laptop-HDD-to-IDE converters, to transfer all the files to the replacement for a friend. It's a wonder the HDD survived, as it was dented. Nothing else was recoverable.
ID: 190991
Profile Prognatus

Joined: 6 Jul 99
Posts: 1600
Credit: 391,546
RAC: 0
Norway
Message 191360 - Posted: 21 Nov 2005, 10:37:09 UTC

AMD challenged Intel to a duel over dual-cores earlier. There are now 40 days left before the duel takes place (according to AMD). In the meantime, AMD has published a PDF document which is both taunting and humorous! I find it amusing. :)

ID: 191360
Profile Reaper13
Volunteer tester
Joined: 4 Mar 04
Posts: 64
Credit: 672,781
RAC: 0
United States
Message 191502 - Posted: 21 Nov 2005, 20:30:50 UTC
Last modified: 21 Nov 2005, 20:35:20 UTC

I have an X2 and I am very pleased with the way it works. I have Seti crunching on both cores all the time, even when I am playing games online like Battlefield 2. It has never slowed down, and my temps on the processor are between 43 and 45 Celsius, depending on the temperature in the room. AMD has a great thing going with these X2s; only thing is that they are expensive.

I am running Rosetta now on my laptop, which is an Athlon 64 3800. I was running Seti on it but decided to switch to Rosetta. I don't like running more than one BOINC project on a computer; I don't like the way they are divided up.


AMD Athlon 64 X2 4400+
AMD Athlon 64 3800+
AMD AthlonXP 3200+
ID: 191502



 
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.