GFFX - 4x2 not 8x1

  • GFFX - 4x2 not 8x1

    I'll tell you, Nvidia has been so fricken sneaky. After reading this, I have lost all respect for them. They have used practically every trick in the book to make their piece-of-crap FX card look good.

    Ladies and gentlemen, take my advice, pull down your pants and slide on the ice.

  • #2
    Looks like they're panicking a little bit. They need to stop the marketing people and let the developers do the work, and then come back with a strong card.
    Chief Lemon Buyer no more
    Linux sucks but not as much
    Weather nut and sad git.

    My Weather Page



    • #3
      that would explain the f*cked up performance
      Main Machine: Intel Q6600@3.33, Abit IP-35 E, 4 x Geil 2048MB PC2-6400-CL4, Asus Geforce 8800GTS 512MB@700/2100, 150GB WD Raptor, Highpoint RR2640, 3x Seagate LP 1.5TB (RAID5), NEC-3500 DVD+/-R(W), Antec SLK3700BQE case, BeQuiet! DarkPower Pro 530W



      • #4
        HAHAHAHA it's so funny
        Hey! You're talking to me all wrong! It's the wrong tone! Do it again...and I'll stab you in the face with a soldering iron



        • #5
          History repeats itself... looks like this will be an nVIDIA Parhelia



          • #6
            me wonders if it's not more efficient to have 4 pipes doing multi-texturing instead of 8 doing single textures... what game still uses single textures??? 8x2, that would be nice...



            • #7
              Originally posted by Kurt
              me wonders if it's not more efficient to have 4 pipes doing multi-texturing instead of 8 doing single textures... what game still uses single textures??? 8x2, that would be nice...
              Multi-texturing ain't the way of the future, my friend. Don't expect many chips in the future with more than one TMU per pipe.



              • #8
                Huh?

                in the doom3 engine (and others that use similar techniques) cards will be extremely limited by their ability to do multitexturing... especially considering that it will use up to 7 textures per poly, and i am pretty sure no less than 4 at any given point in time.

                the only advantage that an 8x1 provides over a 4x2 design (assuming they use the same pixel pipeline, except that the 4x2 just has a second texture unit per pipe) is that the 8x1 (with drivers/hardware capable of doing loopbacked texture lookups) can render single-textured pixels 2x as fast. and each extra pixel pipeline takes up die space, probably more than adding a second texture unit to each pipeline would.

                the only benefit that this offers is that under doom3 (and similar engines) the first pass will be about 2x as fast, as it renders only geometry data... but... that difference can easily be made up by more efficient shader units, higher clock speeds and other little things (like accessing 2 textures per clock on one pixel possibly being a smidgen faster than doing 2 pixels with one texture and then having to re-render both pixels to get the second texture)...

                this is one of the reasons why the parhelia (in theory) should still have been a good performer under next-gen games... it broke away from the "8 textures/clock" limit that everyone has been at for a while... it might not be the most efficient (you get 4 pixels/clock whether you use 1 texture or 4 textures, or 2 pixels/clock if you use 5 to 8 textures) but if used correctly it can easily make up the clock speed difference

                also, about the GeForceFX...

                here's what NVidia had to say about it, taken from The Tech Report:

                It renders:

                8 z pixels per clock
                8 stencil ops per clock
                8 textures per clock
                8 shader ops per clock
                4 color + z pixels per clock with 4x multisampling enabled

                It is architected to perform those functions.

                Basically, it's 8 pipes with the exception of color blenders for traditional ROP operations, for which it has hardware to do 4 pixels per clock for color & Z. It has 8 "full" pipes that can blend 4 pixels per clock with color.

                so it is neither a 4x2 nor an 8x1 design. it is something quite different, following in line with their significantly improved pixel shader units.
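
                a quick back-of-the-envelope sketch of that fill-rate arithmetic (my own toy model, not anything from NVidia or the Tech Report): assume an NxM part (N pipes, M texture units per pipe) with loopback, so a pixel needing T textures ties up its pipe for ceil(T/M) clocks.

                ```python
                # toy model (my assumption, not vendor data): an NxM part has N pixel
                # pipes with M texture units each; with loopback, a pixel that needs
                # T textures occupies its pipe for ceil(T / M) clocks.
                import math

                def pixels_per_clock(pipes: int, tmus_per_pipe: int, textures_per_pixel: int) -> float:
                    loops = math.ceil(textures_per_pixel / tmus_per_pipe)
                    return pipes / loops

                for t in (1, 2, 4, 7):  # 7 = the worst-case doom3 texture count above
                    print(f"{t} tex/pixel:  8x1 -> {pixels_per_clock(8, 1, t):.2f}   "
                          f"4x2 -> {pixels_per_clock(4, 2, t):.2f}   "
                          f"4x4 (parhelia-style) -> {pixels_per_clock(4, 4, t):.2f} pixels/clock")
                ```

                in this toy model the 8x1 only pulls ahead on single-textured pixels (doom3's z-only first pass: 8 vs 4 pixels/clock), the two tie at 2 and 4 textures, and a 4x4 keeps 4 pixels/clock all the way up to 4 textures, matching the parhelia numbers above.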
                "And yet, after spending 20+ years trying to evolve the user interface into something better, what's the most powerful improvement Apple was able to make? They finally put a god damned shell back in." -jwz



                • #9
                  Originally posted by Novdid
                  Multi-texturing ain't the way of the future, my friend. Don't expect many chips in the future with more than one TMU per pipe.
                  it was, then it wasn't, then it will probably be again... depends on what the programmers manage to do... if they can't figure out nice effects with the shaders, they'll probably use multi-texturing so we can finally see some EMBM... (then maybe some of the DX9 functions in, oh, a couple of years...)

                  then again the vidcard makers might come up with a way to do it all for free, for only $399 + shipping



                  • #10
                    Originally posted by DGhost
                    Huh?

                    in the doom3 engine (and others that use similar techniques) cards will be extremely limited by their ability to do multitexturing... [snip]

                    so it is neither a 4x2 nor an 8x1 design. it is something quite different, following in line with their significantly improved pixel shader units.
                    they've been making their chips more programmable every generation; they might someday make one with on-the-fly hardware re-organization (cell computing). that way you don't have to wonder whether you need more pipes or more TMUs or more whatnot-nextgen-thingamajics.



                    • #11
                      Originally posted by Kurt
                      they've been making their chips more programmable every generation; they might someday make one with on-the-fly hardware re-organization (cell computing). that way you don't have to wonder whether you need more pipes or more TMUs or more whatnot-nextgen-thingamajics.
                      F the GFFX, put an Athlon XP 2000+ on the vid card and let software do things! Oh and some DDR333 as well.
                      P4 Northwood 1.8GHz@2.7GHz 1.65V Albatron PX845PEV Pro
                      Running two Dell 2005FPW 20" Widescreen LCD
                      And of course, Matrox Parhelia | My Matrox history: Mill-I, Mill-II, Mystique, G400, Parhelia



                      • #12
                        Originally posted by WyWyWyWy
                        F the GFFX, put an Athlon XP 2000+ on the vid card and let software do things! Oh and some DDR333 as well.
                        That, actually, would suck. Video cards are generally much faster than processors at what they do. It's just that graphics processors are very specialized, while CPUs are more general-purpose.
                        Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



                        • #13
                          Yeah, true. If you use a CPU for graphics... it's like moving all your rendering onto the CPU... and we all know what happens when you do software emulation for games.



                          • #14
                            Originally posted by WyWyWyWy
                            F the GFFX, put an Athlon XP 2000+ on the vid card and let software do things! Oh and some DDR333 as well.
                            Graphics processors are optimized for what they do. It could take a general-purpose processor ten times the clock speed or more to do the work of a specialized processor.
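
                            a loose analogy i'm adding here (a made-up toy workload, not a real GPU benchmark): the same alpha-blend over a frame, done one pixel at a time by a general-purpose interpreter versus handed to a specialized bulk kernel (NumPy standing in for dedicated hardware). the order-of-magnitude gap it shows lines up with the "ten times or more" point, though the exact ratio depends on the machine.

                            ```python
                            # hypothetical illustration: identical alpha-blend math done the
                            # general-purpose way (per pixel) vs. one specialized bulk operation.
                            import time
                            import numpy as np

                            W, H, alpha = 320, 240, 0.5
                            rng = np.random.default_rng(0)
                            src = rng.random((H, W, 3))  # source frame, floats in [0, 1)
                            dst = rng.random((H, W, 3))  # destination frame

                            # general-purpose route: interpret and execute the blend pixel by pixel
                            t0 = time.perf_counter()
                            out_slow = [[[alpha * src[y, x, c] + (1 - alpha) * dst[y, x, c]
                                          for c in range(3)] for x in range(W)] for y in range(H)]
                            t_slow = time.perf_counter() - t0

                            # specialized route: one fused operation over the whole frame
                            t0 = time.perf_counter()
                            out_fast = alpha * src + (1 - alpha) * dst
                            t_fast = time.perf_counter() - t0

                            assert np.allclose(out_fast, np.array(out_slow))  # same result
                            print(f"per-pixel loop: {t_slow:.3f}s  bulk kernel: {t_fast:.4f}s  "
                                  f"speedup ~{t_slow / t_fast:.0f}x")
                            ```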



                            • #15
                              yep, i've read recently that a dedicated mpeg decoder chip, for example, only needs 1-2% of the time a cpu does.
                              i've also read they are working on a dedicated chip (called brute or sth.) for chess!

                              edit: it's called brutus.
                              Last edited by thop; 24 February 2003, 08:03.
                              no matrox, no matroxusers.

