Matrox Parhelia or 9700

  • Matrox Parhelia or 9700

    I'm looking for some advice. Let's just say I may have the opportunity to get a Matrox Parhelia for less than £200, a lot cheaper than a Radeon 9700. What would you say is the better option, and which card offers the better image quality? I know that the Radeon is much faster. I currently have a GeForce4 4200 and want to upgrade.

  • #2
    Hold out.

    There's nothing your GF4 can't do that one of those cards can, and we're all kinda hoping that there will be a second revision SOON(tm) of the Parhelia that will ... fix ... certain ... things.

    - Gurm
    The Internet - where men are men, women are men, and teenage girls are FBI agents!

    I'm the least you could do
    If only life were as easy as you
    I'm the least you could do, oh yeah
    If only life were as easy as you
    I would still get screwed



    • #3
      Originally posted by Gurm
      There's nothing your GF4 can't do that one of those cards can
      Oh? I didn't realize there was a triplehead GeFart...
      Core2 Duo E7500 2.93, Asus P5Q Pro Turbo, 4gig 1066 DDR2, 1gig Asus ENGTS250, SB X-Fi Gamer, WD Caviar Black 1tb, Plextor PX-880SA, Dual Samsung 2494s



      • #4
        By things we mean...

        N:1 Higher core speeds (300 MHz+)....
        N:2 Aniso support up to 8x...
        N:3 FAA that works on all games...
        N:4 2D visual quality issues in some circumstances...
        N:5 (optional) memory bandwidth saving features (Z-buffer compression, early Z checks, etc...).



        Did I miss anything?....
        Last edited by superfly; 21 October 2002, 20:13.
        note to self...

        Assumption is the mother of all f***ups....

        Primary system :
        P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...



        • #5
          Sure did.

          I haven't seen any 2D issues, just 3D issues.
          Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



          • #6
            Originally posted by Wombat
            Sure did.

            I haven't seen any 2D issues
            You have to admit some did, though...
            Athlon64 4800+
            Asus A8N deluxe
            2 gig munchkin ddr 500
            eVGA 7800 gtx 512 in SLI
            X-Fi Fatality
            HP w2207



            • #7
              No. I'm not trying to defend Matrox, I just don't know what 2D problems you're talking about. I have the "shimmering" on my Parhelia, but that only shows up when the 3D engine fires up.
              Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



              • #8
                Originally posted by Kruzin
                Oh? I didn't realize there was a triplehead GeFart...
                I think Gurm was referring to the 3D capabilities only, as that seems to be what 3dfx is after.



                • #9
                  Originally posted by superfly
                  By things we mean...

                  N:1 Higher core speeds (300 MHz+)....
                  N:2 Aniso support up to 8x...
                  N:3 FAA that works on all games...
                  N:4 2D visual quality issues in some circumstances...
                  N:5 (optional) memory bandwidth saving features (Z-buffer compression, early Z checks, etc...).



                  Did I miss anything?....
                  1) is not gonna happen without a full core redesign, more than likely... it would probably take a new core layout and possibly a new process to make it possible...
                  2) might happen but the performance hit would be incredible... i can't blame matrox for disabling it as its base performance is not what they wanted...
                  3) FAA does work on all games barring some bugs w/ specific games... it does not however work on the stencil buffer, nor does it work on the inside edges of transparent textures... fixing that would result in a fairly large performance decrease as well, and a higher core speed might be necessary... being able to select the FAA level might be cool... 4x FAA or 8x FAA might provide much better performance and still retain very good image quality...
                  4) as Wombat said... only the 3D image quality causes problems, never had an issue with 2D stuff...
                  5) just plain silly... in a study of those features on a Radeon 7500 or 8500 (don't remember which one), it was found that those features combined resulted in less than about 5fps of a difference, and even then it was only under certain circumstances. the one feature that made the most difference is fast Z-clears, which the Parhelia supports. those features will make a difference on a fairly memory bandwidth limited architecture (GF2/4MX cards specifically - keep in mind that there was a large jump in speed on the MX cards, partially due to improved memory bandwidth saving techniques... again, it's a highly bandwidth limited architecture)... keep in mind that the Parhelia has a lot of memory bandwidth available, and it will always be faster to read/write data out of memory without having to compress/decompress it as you move it - as long as you are not memory bandwidth limited....
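                  To put rough numbers on the bandwidth argument, here's a back-of-the-envelope sketch in Python; the resolution, overdraw, and compression figures are illustrative assumptions, not measured data for any of these cards:

```python
# Back-of-the-envelope check of when Z-bandwidth savings matter.
# All numbers below are illustrative assumptions, not measured figures.

def z_bandwidth_gb_s(width, height, fps, overdraw, bytes_per_z=4):
    """Approximate Z-buffer traffic: one read + one write per fragment."""
    fragments = width * height * overdraw * fps
    return fragments * bytes_per_z * 2 / 1e9

# A card with ample memory bandwidth (Parhelia-class, ~20 GB/s) barely
# notices halving this; a GF2 MX-class card (~2.7 GB/s) very much does.
traffic = z_bandwidth_gb_s(1024, 768, fps=60, overdraw=3)
print(f"uncompressed Z traffic: {traffic:.2f} GB/s")
print(f"with 2:1 Z compression: {traffic / 2:.2f} GB/s")
```

                  Roughly 1 GB/s of Z traffic is noise against 20 GB/s, but a large fraction of an MX-class card's total budget, which is the point about bandwidth-limited architectures.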
                  "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



                  • #10
                    Look, Z-culling DOES make a big difference.

                    There are lots of things that are "wrong" with the current implementation of the Parhelia, due to the shipping silicon basically being a patched beta silicon due to some "issues" which nobody is at liberty to discuss.

                    Bleh.

                    Anyway, I wasn't referring to triple-head, Kruzin. He was contemplating a Radeon (which I own, and love) but the honest truth is that for the short term, living with the GF4 might be worthwhile until after Christmas at least.

                    - Gurm
                    The Internet - where men are men, women are men, and teenage girls are FBI agents!

                    I'm the least you could do
                    If only life were as easy as you
                    I'm the least you could do, oh yeah
                    If only life were as easy as you
                    I would still get screwed



                    • #11
                      Yeah, I think I might hold out until Christmas. The only reason I thought of going for the Parhelia was the price, but I must admit the many problems with the card itself have put me off.



                      • #12
                        Z-culling makes a difference when the scene is rendered front to back like it's supposed to be... it also provides better performance in fillrate-limited situations... but when the scene is being rendered back to front it gives only a minimal performance improvement...
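                        The front-to-back point can be illustrated with a toy model of early Z rejection (a simplified sketch of the general technique, not how any particular chip implements it):

```python
# Toy model of early Z rejection: fragments that fail the depth test are
# culled before the (expensive) shading stage. Draw order decides how
# often that actually saves work.

def render(fragments):
    """fragments: list of (depth, shade_cost) hitting one screen pixel.
    Returns how many fragments actually had to be shaded."""
    z_buffer = float("inf")   # smaller depth = nearer = wins
    shaded = 0
    for depth, _cost in fragments:
        if depth < z_buffer:  # early Z test, before shading
            z_buffer = depth
            shaded += 1       # only surviving fragments get shaded
    return shaded

# Three overlapping surfaces covering the same pixel, nearest first:
front_to_back = [(1.0, 1), (2.0, 1), (3.0, 1)]
back_to_front = list(reversed(front_to_back))

print(render(front_to_back))  # 1 fragment shaded; the rest are culled
print(render(back_to_front))  # 3 fragments shaded; no overdraw saved
```

                        Front to back, the first surface fills the Z-buffer and everything behind it is rejected; back to front, every fragment passes and gets shaded, which is why the draw order matters so much.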

                        having an architecture optimized for high levels of performance only under certain circumstances (like the Parhelia unfortunately is) does tend to provide uneven levels of performance. culling doesn't raise the poly throughput of the core, it lowers the polycount that the core has to work on... and if the game is calling for hardware T&L, most of the polys are going to be operated on at that stage before they get occluded... with some of the complex pixel shader effects in use it could also be dangerous to cull polys from the rendering stage...

                        plus, the Parhelia is capable of beating at least the GeForce and GeForce2 range as far as poly throughput goes... the difference is its able to keep a fairly high poly throughput while operating pixel and vertex shaders on it...

                        considering that the next generation of games, all targeted at GeForce or higher hardware, has much faster processors to work with compared to the last generation (Quake3 can run hardware accelerated on a P133 w/ a Voodoo Rush... for what the engine is capable of, that's fairly impressive), achieving higher poly counts than before (even on a GeForce video card) will require much more aggressive engine-based culling... Quake 3 operated on the assumption that the processor wasn't fast enough to process polygons faster than the graphics card could render them... now we see the opposite in effect... people with 2.0ghz P4's and a GeForce2MX...
                        "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



                        • #13
                          Originally posted by DGhost

                          1) is not gonna happen without a full core redesign, more than likely... it would probably take a new core layout and possibly a new process to make it possible...
                          Are you sure about needing a new process to make it hit at least 300 MHz core speeds? I would imagine that making some relatively small changes in the core and layout should probably be enough.....


                          I know that the 9700 is a different architecture, but I still can't believe that they clocked the core as high as it is, considering that we are talking about a 107 million transistor design that's over twice as big as an Athlon core. Even more impressive is that the card still has overclocking headroom to spare, since there are plenty of people running their cards at 350~370 MHz core speeds with just the stock fan/heatsink...


                          Even if you consider the use of external power as somewhat of a cheat, it's still pretty impressive regardless, especially considering that it's using the same process tech that the Parhelia uses...


                          2) might happen but the performance hit would be incredible... i can't blame matrox for disabling it as its base performance is not what they wanted...


                          That's the other thing I don't get: with all the extra texture units essentially going to waste, since most games are still dual-textured, why would enabling aniso filtering above 2x cause a performance hit anyhow?.... Unless there's something inherently wrong with the way it's implemented..
                          note to self...

                          Assumption is the mother of all f***ups....

                          Primary system :
                          P4 2.8 ghz,1 gig DDR pc 2700(kingston),Radeon 9700(stock clock),audigy platinum and scsi all the way...



                          • #14
                            z-whatever makes a difference. anyone remember the radeon ddr le, which had hyper-z disabled by default (ati claimed it wasn't even there) but could be enabled through the registry, and it gave a noticeable performance boost.
                            hmm, my memory might fail me here, but i am quite sure it was the radeon le.
                            no matrox, no matroxusers.



                            • #15
                              Radeon VE.... and Hyper-Z is a collection of 3 things - a quasi-deferred mode rendering architecture (buffers geometry and does Z-Culling... Hierarchical Z is what they called it), Z-Compression (eases memory bandwidth limitations) and fast Z-Clears (clears the Z-buffer a hell of a lot faster than normal)...

                              Anand's article on this is here. as it shows, on a memory bandwidth limited architecture, z-compression helps a fair amount, but the z-buffer clearing routines are still the #1 thing that makes a difference...
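                              For illustration, a fast Z-clear can be sketched as keeping one "cleared" flag per tile instead of rewriting the whole buffer; this is a hypothetical toy model of the general idea, not ATI's or Matrox's actual implementation:

```python
# Hypothetical sketch of a "fast Z-clear": instead of writing every
# depth value, keep one "cleared" flag per tile and only materialize
# the clear value when a tile is next touched.

class TiledZBuffer:
    def __init__(self, tiles, tile_size, far=1.0):
        self.far = far                    # depth value a clear resets to
        self.tile_size = tile_size
        self.cleared = [True] * tiles     # one flag per tile
        self.tiles = [None] * tiles       # tile storage, allocated lazily

    def clear(self):
        # O(number of tiles), not O(number of pixels)
        self.cleared = [True] * len(self.cleared)

    def read(self, tile, offset):
        if self.cleared[tile]:
            return self.far               # clear value, no memory traffic
        return self.tiles[tile][offset]

    def write(self, tile, offset, depth):
        if self.cleared[tile]:            # first touch after a clear:
            self.tiles[tile] = [self.far] * self.tile_size
            self.cleared[tile] = False
        self.tiles[tile][offset] = depth

zb = TiledZBuffer(tiles=4, tile_size=64)
zb.write(0, 10, 0.25)
print(zb.read(0, 10))  # 0.25
print(zb.read(3, 0))   # 1.0 -- tile 3 was never touched after the clear
zb.clear()
print(zb.read(0, 10))  # 1.0 -- "cleared" by flipping 4 flags, not 256 writes
```

                              The win is that clearing between frames costs a handful of flag writes instead of a full sweep over the buffer, which is why it shows up even on cards that aren't otherwise bandwidth limited.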

                              as far as the clock speed stuff, the newer process is probably not necessary. the thing is that they would have to devote a lot of time to laying out the processor for heat generation and dissipation. it would take a lot of time and pretty much a complete re-layout of the core, probably about the same amount of work as recreating the core... ATI optimized their core a lot to hit those high clock speeds... seeing as Matrox probably won't do that, a smaller process is not necessary...

                              about the texture filtering, if you look at what the Parhelia's architecture specifies, you would see that it is fairly limited in hardware texture filtering capabilities... you might be able to resample twice, but you couldn't do trilinear + aniso then.... i'm leaving for a bit so i don't have time to dig up the specific limitations of the hardware...
                              "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz
