  • Explain this one to me then........

    I've been playing about with 3DMark 2000, my G400 32/DH, and various processors recently (namely a Celery 333 o/c'ed to 416, a Celery 433 o/c'ed to 541 and a PIII/450 o/c'ed to 504). All the tests were run at 1024x768, 32bpp, TB, 32-bit Z, with D3D T&L (the PIII actually gave slightly better results with D3D T&L on instead of the PIII optimisations... explain that one if you can :-) ). Anyway, the scores were as follows:

    Celery 333@416: 1158
    Celery 433@541: 1538
    PIII/450@504: 2357

    What I wanna know is: why does the PIII run all over the Celery 433@541 by so much? I know the PIII has the extra SSE instruction set, but I didn't think that counted for THAT much, plus all the tests were run with D3D T&L, not the PIII optimisations. What I expected was for the Celery to be much closer to, if not better than, the PIII.

    Why? 'Cos running a CPU benchmark in SiSoft Sandra, the PIII came out a few MIPS faster than their reference PIII/500, with the same MFlops. Running the Celery 433@541 through the same test showed it was far superior to the PIII/500 in terms of raw processing power, which (forgive me if I'm completely wrong about this) the G400, and 3D calculations in general, rely heavily upon.

    Can anyone throw any light on the subject, or is it a quirk of 3DMark, my setup, or summat else? Should I have gone Nvidia? Should I have bought a GeForce and got better 3DMark scores, plus crap image quality and eye strain to boot? :-) Thanks in advance.....


    ------------------
    Cheerie,
    Monty

  • #2
    Hi there Monty...

    There is a simple explanation for it!

    3DMark 2000 is a crappy, biased benchmark driven by the computer industry to always show that the most expensive computer is automatically also the best one... Yeah, I know, it sucks and it's not much of an explanation, but it's the closest thing to the truth... Don't get me wrong, I think the PIII is a better processor than the Celery (especially the Coppermine), but due to a lack of support in software the difference in performance isn't as obvious as it should be.
    3DMark2000 is one of the few programs that use optimisations such as SSE and/or 3DNow!, but you can also see that they didn't do a good job of it, because the D3D (read: software emulation) results are better than the hardware PIII results.
    Anyway, you haven't made a mistake by going the Matrox way and leaving the GeForce to suckers who think of benchmark results as extensions to their di*ks.
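
    (Just to make the "support in software" point concrete: below is a rough, hypothetical C sketch, using the GCC/Clang CPU-feature built-ins on x86, of the kind of runtime check a program has to make before it can dispatch to an SSE code path at all. It is not anything 3DMark actually does, just an illustration of why the extra instructions only help when the software bothers to use them.)

    #include <stdio.h>

    /* A program only benefits from SSE if it checks for it and ships an
     * SSE-optimised path; otherwise a PIII runs the same generic code as
     * a Celeron.                                                          */
    int main(void)
    {
        __builtin_cpu_init();   /* populate the compiler's CPU feature info */

        if (__builtin_cpu_supports("sse"))
            printf("SSE available - could dispatch to an SSE geometry/DSP path\n");
        else
            printf("No SSE - falling back to the plain x87 path\n");

        if (__builtin_cpu_supports("mmx"))
            printf("MMX available\n");

        return 0;
    }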

    Phew... sorry for the long post, but I could go on like this all day.

    [This message has been edited by Goc (edited 13 January 2000).]
    _____________________________
    BOINC stats



    • #3
      Goc,
      Thanks for the reply, and I understand what yer saying, but what was puzzling me more than anything was how a processor that, in the Sandra CPU benchmark, was shoving out far more MIPS and MFlops than a PIII/500 could score a lot less in 3DMark with a card that is proven to work better and faster the more raw CPU power is poured into it (I think I read somewhere here that someone had maxed it out with an Athlon 900MHz CPU, if you'll pardon the pun... :-) ).

      I see what you're saying about 3DMark, but even with industry pressure to show that the most expensive CPUs/graphics cards are always better, and thus give better results, I still can't see how a Celery o/c'ed to 541 (and benchmarked way past a PIII/500's score) can score worse in the 3DMark tests... Mind you, going by what you're saying about the package being designed around the industry (just my own observations, of course), I could have sworn it ran smoother, and faster, with the Celery@541 than with the PIII/450@504... Surely a 100MHz bus speed and SSE can't account for all that much difference in scores? Can it?

      BTW, I'd never buy another card apart from a Matrox, anything else just sucks. Seems I can't even get the sarcasm right these days... :-)


      ------------------
      Cheerie,
      Monty



      • #4
        Goc,

        Just bear in mind that it comes down to the software; DX7, for instance, contains 3DNow! and SSE code.



        • #5
          Hi all,

          I mostly agree with Goc, but just to mention one of the few exceptions: Cubase VST performs up to 30% better on a P3 at the same clock rate. Most of the horsepower goes into DSP work like EQs and dynamics processing, which a P3 handles far better through SSE, and Cubase takes advantage of that.
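
          (To make that concrete, here is a rough, hypothetical C sketch of the sort of thing an SSE-aware DSP path can do: apply a gain to a block of float samples four at a time with the SSE intrinsics, next to the plain scalar loop a non-SSE path would run. It is only an illustration of the principle, not Cubase's actual code.)

          #include <stdio.h>
          #include <xmmintrin.h>   /* SSE intrinsics, PIII and later */

          /* Multiply 'n' float samples by 'gain', four at a time.
           * Assumes 'n' is a multiple of 4.                          */
          static void apply_gain_sse(float *buf, int n, float gain)
          {
              __m128 g = _mm_set1_ps(gain);           /* broadcast gain into all 4 lanes */
              for (int i = 0; i < n; i += 4) {
                  __m128 s = _mm_loadu_ps(buf + i);   /* load 4 samples                  */
                  s = _mm_mul_ps(s, g);               /* 4 multiplies in one instruction */
                  _mm_storeu_ps(buf + i, s);          /* store them back                 */
              }
          }

          /* The scalar version a non-SSE path would run instead. */
          static void apply_gain_scalar(float *buf, int n, float gain)
          {
              for (int i = 0; i < n; i++)
                  buf[i] *= gain;
          }

          int main(void)
          {
              float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
              float b[8] = {1, 2, 3, 4, 5, 6, 7, 8};

              apply_gain_sse(a, 8, 0.5f);
              apply_gain_scalar(b, 8, 0.5f);

              printf("%f %f\n", a[0], b[0]);   /* both print 0.500000 */
              return 0;
          }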

          Monty, provided that the SSE optimisation can be disabled in 3DM2K, and if you still have all those CPUs, could you bench a Celery 500 against a P3 under "equal" conditions?

          half a cent,
          Hellmut



          • #6
            Goc,

            "3DMark2000 is one of the few programs that use optimisations such as SSE and/or 3DNow!, but you can also see that they didn't do a good job of it, because the D3D (read: software emulation) results are better than the hardware PIII results."
            The DX7 option uses SSE as well. The P3-optimised option just uses their own geometry pipeline instead of DX7's.
            So both are "P3 optimised", even though that isn't well explained by the app.

            _
            B



            • #7
              Also note that a Celeron only has 128K of cache memory, while a PIII has, what, 512K?

              That makes a huge difference in mathematical calculations (not to mention the extra MMX/SSE instructions on the P3).



              • #8
                The cache is not that big of a deal. The Celery's 128K cache runs at full processor speed and is on-die, while the PIII's runs at half processor speed and is off-die. For example, my 300A@450 is very close to a PII 450 in 3DMark 2000; the difference is about 40 points or so.



                • #9
                  Hellmut,
                  I'm afraid there were a few weeks in between the different CPUs, so I can't redo those benchmarks. That said, nothing else in the setup changed apart from the processor, and in each 3DMark test I only used the one setup (explained above). I even used D3D T&L for the PIII as well, noting that it scored a couple of marks better than when tested with the 3DMark PIII optimisations.

                  For raw processing power benchmarking I used SiSoft Sandra 98 Professional. One of the setups it compares the tested CPU against is a PIII/500. The PIII/450@504 was a few MIPS faster than that reference setup, with exactly the same MFlops rating. The Celery 433@541 put both MIPS and MFlops off the scale, leaving the PIII/500 trailing well behind.

                  I'm at work at the moment, but I'll post the figures for those benchmarks tonight, and then you'll see, even allowing a 30% advantage for SSE, that the Celery 433@541 shouldn't be this far behind a PIII/500's scores. BTW, all these tests were run on DX7 with PD5.41 as the G400 drivers. Well, until I post those scores tonight... ta ra :-)


                  ------------------
                  Cheerie,
                  Monty



                  • #10
                    Here we go....

                    PIII/500: 1350 Mips / 670 MFlops

                    C433@541: 1465 Mips / 723 MFlops

                    Shouldn't the Celeron be closer in the 3DMark scores? Or am I barking up the wrong tree here?
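
                    (Rough arithmetic on those figures, for reference: in Sandra the Celery comes out about 8-9% ahead of the PIII/500, since 1465/1350 is roughly 1.09 for MIPS and 723/670 roughly 1.08 for MFlops, yet in the 3DMark scores at the top of the thread it trails the overclocked PIII by roughly 35%, 1538 against 2357. Whatever 3DMark is measuring, it clearly isn't the same thing Sandra is measuring.)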



                    ------------------
                    Cheerie,
                    Monty



                    • #11
                      3DMark2K just plain sux. It's not a reliable benchmarking tool. For instance, I can get something like 1.4 million polys/sec in one test, then repeat the same test right after and get less than a million polys/sec. WTF is up with that? Mad Pudnion rushed this product, I think.

                      ------------------
                      You have been Donimated



                      • #12
                        Yeah, but it's real pretty...
                        Lady, people aren't chocolates. Do you know what they are mostly? Bastards. Bastard coated bastards with bastard filling. But I don't find them half as annoying as I find naive, bubble-headed optimists who walk around vomiting sunshine. -- Dr. Perry Cox



                        • #13
                          Heh... sorry Buuri & Himself... guess I didn't make myself clear enough. I thought SSE & 3DNow! optimisations in DirectX 7 were something standard by now, since nowadays it seems to be a must to have DX7 installed...

                          As for SiSoft Sandra, I have to say it puzzles me. In lots of cases it is crappy and reports things totally wrong, and then again in lots of cases it is amazingly useful. Too bad processor benchmarking is one of its weak sides. I think the CPU benchmarking in Sandra is performed with a very simplified routine that doesn't account for the extra cache (it most likely stays within 128K, hence the equal-or-better result when comparing a Celery with a PII or PIII) and relies mostly on the processor's raw clock speed in MHz.

                          What I am trying to say is that it's a mistake to blindly believe a benchmark. They may be right about many things, but they are not flawless. You should know what you are going to do with your computer when you buy it and pick a processor that suits your needs (simply put: a Celery for games and internet use; an Athlon, PIII or even PII for more professional use).

                          So it isn't strange to see a Celery clocked at 541MHz performing better than a PIII at 504MHz in games and in sloppy software. But if you take software with clean, efficient code that pushes your hardware to its limits, you will find that a PII shows a significant difference in results from a "mere" Celeron due to its larger cache, even with the PII's cache clocked at half speed, and a PIII (especially the Coppermine) pulls even further ahead because of the extra registers in the processor core (i.e. SSE) and, in the Coppermine's case, a better pipeline design.
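
                          (A rough illustration of that working-set point, as a hypothetical C sketch rather than anything Sandra actually does: time the same amount of simple arithmetic over a small array that fits in a Celeron's 128K L2 and over a large one that does not. The gap between the two timings is the kind of cache effect being described here, and a benchmark that only ever touches the small case will never see it.)

                          #include <stdio.h>
                          #include <stdlib.h>
                          #include <time.h>

                          /* Sum a float array 'reps' times and return the seconds taken.
                           * When the array fits in L2 the loop runs from cache; when it
                           * doesn't, main-memory speed starts to dominate.               */
                          static double time_sum(float *a, size_t n, int reps)
                          {
                              volatile float sink = 0.0f;
                              clock_t t0 = clock();
                              for (int r = 0; r < reps; r++) {
                                  float s = 0.0f;
                                  for (size_t i = 0; i < n; i++)
                                      s += a[i];
                                  sink += s;   /* keep the work from being optimised away */
                              }
                              return (double)(clock() - t0) / CLOCKS_PER_SEC;
                          }

                          int main(void)
                          {
                              size_t small_n = 16 * 1024;         /* 64KB of floats: fits in a 128K L2    */
                              size_t big_n   = 4 * 1024 * 1024;   /* 16MB of floats: nowhere near fitting */

                              float *small_a = malloc(small_n * sizeof *small_a);
                              float *big_a   = malloc(big_n * sizeof *big_a);
                              if (!small_a || !big_a) return 1;

                              for (size_t i = 0; i < small_n; i++) small_a[i] = 1.0f;
                              for (size_t i = 0; i < big_n; i++)   big_a[i]   = 1.0f;

                              /* Same number of additions in both cases; only the working set differs. */
                              printf("small set: %.2f s\n", time_sum(small_a, small_n, 2000));
                              printf("big set:   %.2f s\n", time_sum(big_a, big_n, 8));

                              free(small_a);
                              free(big_a);
                              return 0;
                          }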


                          [This message has been edited by Goc (edited 14 January 2000).]
                          _____________________________
                          BOINC stats

