Did you see this??

  • Did you see this??

    Saw this on FiringSquad:

    Question: Well, one of the most asked questions lately, will nocturne support T&L?

    Nocturne Team: GeForce doesn't have adequate fill rate for T&L. T&L was just a gimmick

    Question: Were you guys ever worried about a backlash to the amount of violence/language/nudity in Nocturne?

    Nocturne Team: That's one of the selling points. Besides, it's no more violent or sexual than an R-rated movie. Or Fox's new line-up.

    Question: Does the development team have a recommended 3D card for optimum performance with Nocturne?

    Nocturne Team: Matrox G200 or G400
    (I can vouch for that - the G400 flies on this game -ed)

    Question: For the next game you guys do, why not do it all via openGL, that way you can have EASY ports for win32/mac/linux/ and others!

    Nocturne Team: OpenGL is slow.


    There's tons more good stuff in the chat so go check it out.

    ------------------
    PIII-450@504, 128 HDSRAM, Asus P3BF, G400/32, SBLive!, Nokia 447Xi 17", oh yea, a nice floppy drive

  • #2
    Question: Well, one of the most asked questions lately, will nocturne support T&L?

    Nocturne Team: GeForce doesn't have adequate fill rate for T&L. T&L was just a gimmick

    --- 8< --------

    This is a really laughable answer. T&L has NOTHING to do with fillrate. When using T&L lighting instead of lightmaps you even DOUBLE the effective fillrate, since the extra lightmap pass goes away. Many more polygons are needed, but the GeForce seems quite capable of handling such amounts.
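Frank's point about the doubled fillrate can be put in rough numbers. A back-of-envelope sketch, assuming single-texture-per-pass hardware (no multitexturing) and illustrative figures:

```python
# Pixel-fill cost per frame for the same scene, lightmapped vs. vertex-lit.
# With lightmaps on single-texture-per-pass hardware, every visible pixel
# is drawn twice: once for the base texture, once blended with the lightmap.
# With T&L (vertex) lighting, one pass suffices.

def pixels_filled(width, height, overdraw, passes):
    """Total pixels written per frame."""
    return width * height * overdraw * passes

lightmapped = pixels_filled(1024, 768, overdraw=2.5, passes=2)
vertex_lit = pixels_filled(1024, 768, overdraw=2.5, passes=1)

print(lightmapped / vertex_lit)  # -> 2.0
```

The polygon count rises, but the fill cost per pixel halves, which is the trade-off Frank describes.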

    Three options:
    1. The editor misunderstood.
    2. The developers don't know what they are talking about.
    3. The interviewed guy was in reality a marketing guy.

    As a fan of Dilbert comics, I'd go for the last one. ;-)

    Frank




    • #3
      Well,
      I think that it is indeed true that the GeSpot doesn't have enough fillrate for its
      T&L capabilities. Reasons:

      1st.
      The more tris you're using in a scene, the more bus bandwidth (apart from "GPU" power or fillrate) is necessary for those tris to be displayed. The GeSpot already showed that it has bus-bandwidth issues with non-DDR RAM.

      2nd.
      The more triangles get textured in a scene, the more texture reads have to be issued, as textures are stored in the card's RAM and only partially (4k) in the chip's cache, giving
      us another bandwidth problem.

      3rd.
      If you really start writing games that would
      FULLY use T&L, you'd also start creating more
      textures (I don't want to have ONLY smoother-edged scenes ... I want a more beautiful scene !)
      And this won't be possible with the GeSpot.
      I think this is what they meant.


      cheers .. Bjoern



      • #4
        First of all, bjoern, let's try to show more respect to nVidia by calling the product by its correct name: GeForce. Even if I disliked the G400 (which I don't), I wouldn't make fun of its name.

        1.) The bandwidth required for textures is multitudes higher than the bandwidth needed for triangle information. Take into account the caches on the chip itself and the bandwidth requirements are even lower.

        2.) Texture reads are not done on a per-triangle basis, but on a per-pixel basis. So more triangles don't imply more memory reads for a texture.

        3.) Of course we want the best of both worlds: smoother curves, more detailed textures. But ask yourself: would you rather have a simple scene with blocky structures and very high quality textures you only notice from really close, or good textures and smooth curves? I'd choose the latter.

        I chose the G400, but not for the framerates. I chose it for the best of both worlds: reasonable speed on a PII, 32 bit color, high quality picture, dualhead, etc. If I wanted only speed, I would have gotten myself a Voodoo3.



        • #5
          The first thing in this post that really rankles is the 'OpenGL is slow' line. Granted, maybe Matrox's drivers are still working their way up to maturity, but OpenGL is most definitely NOT slow.

          And T&L is not a gimmick. Anyone concerned about bandwidth should get one of the cool-RAM boards and an AGP 4x motherboard. There, no worries. :-)

          Granted, kudos to the G400 plug, but I dunno about the rest of this...



          • #6
            Smells like a big steaming pile of poo!



            • #7
              and hey, that T & A has always been my favorite pastime!
              jim
              System 1:
              AMD 1.4 AYJHA-Y factory unlocked @ 1656 with Thermalright SK6 and 7k Delta fan
              Epox 8K7A
              2x256mb Micron pc-2100 DDR
              an AGP port all warmed up and ready to be stuffed full of Parhelia II+
              SBLIVE 5.1
              Maxtor 40g 7,200 @ ATA-100
              IBM 40GB 7,200 @ ATA-100
              Pinnacle DV Plus firewire
              3Com Hardware Modem
              Teac 20/10/40 burner
              Antec 350w power supply in a Colorcase 303usb Stainless

              New system: Under development



              • #8
                Sorry Frank,

                I didn't want to sound disrespectful to Nvidia. I'd better have added that although I still think that the GeForce has bus-bandwidth issues mainly due to its T&L, it's still the most advanced games-3d-hardware around and NVidia has done a great job. 8-)

                I really want to keep my comments as short as possible. I think this is mostly boring to others ....

                1) True ... texture bandwidth is by far higher than the bandwidth needed for tris. BUT, I wasn't referring to problems with current games. Texture caches also mostly
                take advantage of texels on a per-pixel basis, i.e. only when the texture repeats can the cache take advantage.
                I didn't mean that tris bandwidth itself could develop into a problem, but rather the fact that due to T&L the bandwidth need for tris DOESN'T decrease but increases !

                2) True .. Maybe I should've pointed out that I didn't mean that more tris directly
                cause more tex-reads. It's more in connection with 3) . For a future game I don't want only smoother-edged surfaces (then I wouldn't have more tex-reads within a high-poly-count scene) but a more detailed scene, which would make the grown amount of tris need more ( or bigger ) textures.

                If there were no bandwidth issues with the GeForce, why then is there such a big gap
                between Q2 Demo1 results in 1280*960 on DDR
                and NON-DDR GeForces ?
                The DDR on the GeForce ONLY helps relieving bandwidth issues, not directly fillrate issues. The fillrate is mainly (but not completely) defined by the core clock of the GPU, not by the RAM speed. The RAM speed only helps getting enough bandwidth for the chip so that it may pump out its buffer without tex-reads or other actions interfering. It should be clear that a chip is only able to deliver its max fillrate in a sustained fashion when the texture is in cache and no other actions need to be performed (assuming Fcore = Fram).
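Bjoern's point that RAM speed caps sustained fillrate can be sketched with rough arithmetic. All figures below are illustrative approximations of a 1999-era SDR card, not vendor specs:

```python
# Theoretical peak fillrate vs. what the memory bus can actually sustain.
core_clock_mhz = 120                     # illustrative GPU core clock
pipelines = 4                            # pixels per clock
peak_fill_mpix = core_clock_mhz * pipelines        # 480 Mpixels/s peak

mem_bandwidth_bytes = 2.7e9              # ~166 MHz SDR on a 128-bit bus
bytes_per_pixel = 12                     # 32-bit color write + Z read/write

# Sustained fill is capped by how many pixels the bus can feed per second:
bw_limited_mpix = mem_bandwidth_bytes / bytes_per_pixel / 1e6

print(peak_fill_mpix, round(bw_limited_mpix))  # peak 480 vs. ~225 sustained
```

Under these assumptions the bus sustains less than half the core's peak, which is why DDR (doubling the bandwidth figure) closes most of the gap at high resolutions.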

                I also think that the Nocturne developers were a little harsh to say that T&L is only a gimmick, but on the other hand I think that the GeForce's T&L is not the breakthrough that everyone thinks it is. It's just the right way to go.


                cheers .... Bjoern

                ps. Just to be sure ... I don't want to sound ignorant or offensive. This is just my personal experience, so I may still be completely wrong !



                • #9
                  ooops ... double post

                  [This message has been edited by bjoern (edited 26 October 1999).]



                  • #10
                    A 'little' reply:

                    1.) Currently all the triangle information needs to be sent over the AGP or PCI bus, which has less bandwidth than the internal caches. When using vertex buffers the info is transformed on chip and doesn't need the AGP/PCI bus, so much better performance.

                    If you mean that games will use more triangles, then yes, the bandwidth requirement rises.

                    Currently even the G400 with its 256 bit dualbus (128 to the memory chips) has bandwidth problems with 32 bit textures, 32 bit output, etc.

                    That 32 bit scores lower than 16 bit is NOT because the calculations take longer. 32 bit integer calculations are as fast as 16 or 8 bit calculations. The simple fact is that more data from memory is needed in 32 bit, and currently the 128 bit memory interface doesn't have enough bandwidth. DDR has twice the bandwidth, so you get much better fillrate scores.
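Frank's 16-bit vs 32-bit point is easy to check with arithmetic: doubling the color and Z depth doubles the framebuffer traffic, regardless of ALU speed. A minimal sketch:

```python
# Framebuffer traffic per frame: color writes plus Z read/write.
def frame_bytes(width, height, color_bytes, z_bytes, overdraw=1.0):
    color = width * height * color_bytes * overdraw    # color buffer writes
    z = width * height * z_bytes * 2 * overdraw        # Z read + Z write
    return color + z

b16 = frame_bytes(1024, 768, color_bytes=2, z_bytes=2)
b32 = frame_bytes(1024, 768, color_bytes=4, z_bytes=4)

print(b32 / b16)  # -> 2.0: 32 bit needs exactly twice the memory traffic
```

So on a fixed-bandwidth bus, 32 bit halves the sustainable fillrate even though the chip computes both depths equally fast.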

                    Frank



                    • #11
                      Erm .... yes ....

                      I tried to be objective, so I didn't want to mention that the GeForce's bus bandwidth and throughput are by far superior to the G400's, no question 'bout that (although that IS objective... 8.) )

                      Then again ... I was not talking about texture transfer from MainMem, but rather assumed that all of the scene is "on board", and that leaves the caches alone for repeating textures. And the cache isn't big enough to store multiple textures. So after some tris (the spans, to be exact) are drawn with the same texture, the next texel with a different texture will lead to a cache miss, followed by a texture read from onboard mem (the card's mem) with the cacheline being updated.

                      Regarding vertex buffers: The G400 uses
                      vertex buffers as well, to create bunches of vertices of a specific size to allow AGP/GART to DMA the buffer to the card. The thing that I'm not sure about now is, how does the GeForce do the job ?
                      Are a scene's vertices loaded to the card once, with a transformation scheme loaded for every frame and applied to the existing vertices, or is there a vertex buffer per frame (transferred via AGP) that includes an additional transform scheme to transform (move, scale, rotate) the scene "on the fly" ?
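Whichever way the GeForce does it, the traffic difference between the two schemes Bjoern describes is large. A hypothetical sketch, where the vertex size, count, and frame rate are made-up illustrative values:

```python
# AGP traffic for the two schemes described above:
# (a) upload every vertex each frame, vs.
# (b) keep static vertices in card memory and send only a 4x4 transform.

verts = 10_000
bytes_per_vert = 32          # position + normal + texcoords, roughly
matrix_bytes = 4 * 4 * 4     # one 4x4 float matrix
fps = 60

per_frame_upload = verts * bytes_per_vert * fps    # scheme (a), bytes/s
matrix_only = matrix_bytes * fps                   # scheme (b), bytes/s

print(per_frame_upload, matrix_only)
# scheme (b) moves thousands of times less data over the bus
```

This is why keeping static geometry resident on the card and transforming on-chip relieves the AGP/PCI bottleneck so dramatically.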

                      finally .....


                      Bjoern



                      • #12
                        Forget all the technical crap, the fact of the matter is that nVidia is yet again screwing over people with a half-baked card. It isn't ready for prime time and is only out there to screw over customers for 6 months.

                        It's like the TNT - the TNT wasn't ready for prime time when it was released, hence the TNT2 6 months later - at the specs the TNT was supposed to have to begin with. This time the chip has the balls, but the RAM doesn't. Different reason, same result.

                        All those nifty features on the GeForce won't be used for a while (6 months), and when they finally are, you'll see the current GeForce fall on its ass when the Single-Data-Rate DRAM cards can't keep up. Not until the DDR DRAM cards hit the market will there be a "point" to getting the GeForce. Buying one now is pointless, and just hurts the gamers out there who don't know any better.

                        Just my $0.02


                        ------------------
                        Primary System: PIII-540 (450@4.5x120), Soyo 6BA+ III, 2x128MB PC100 ECC SDRAM CAS2, G400 MAX in multi-monitor mode. V2 SLI rig. Two Mitsubishi Diamond Pro 900u monitors, 3Com 3C905, SoundBlaster Live!, DeskTop Theater DTT2500 DIGITAL Speaker System (Sweeeeeet!), WD AC41800 18GB HD, WD AC310100 10GB HD, Toshiba SD-M1212 6x DVD-ROM, HP 8100i CD-RW, Epson Stylus Pro, OptiUPS PowerES 650, MS SideWinder Precision Pro USB joystick, Logitech 3-button mouse, Mitsumi keyboard, Win98 SE, Belkin OmniCube 4-port KVM, 10/100 5-port Linksys Ethernet switch (30~40MB/min under Win98SE)

                        Secondary System: PII-266, Asus P2B BIOS 1008, 1x128MB PC100 ECC SDRAM CAS2, Millennium II, 3Com 3C590, ADSL Modem 640kbit down/90kbit up, 3Com 3C509, Mylex BT-930 SCSI card, Seagate 2GB Hawk, NEC 6x CD-ROM, Linux distro S.u.S.E. 6.1 (IP Masquerade works!), Sharp JX-9400 LJ-II compatible

                        Tertiary System: DFI G568IPC Intel 430HX chipset, P200MMX, 4x64MB EDO Parity RAM, Millennium II, Intel Pro/100+ client NIC, SoundBlaster 16 MCD, Fujitsu 3.5GB HD, WD 1.2GB HD, Creative Dxr3 DVD decoder card, Hitachi GD-2500 6x DVD-ROM, Win98 SE

                        All specs subject to change.

                        The pessimist says: "The glass is half empty."
                        The optimist says: "The glass is half full."
                        The engineer says: "I put half of my water in a redundant glass."



                        • #13
                          Mitsumi keyboard?



                          • #14
                            I use Wrist Gliders, and support my wrists/hands at the point where your forearm bone connects to the wrist (don't know if it's the radius or ulna). The bones hurt like hell until you're used to it, but at least the carpal tunnel never takes any of the load.

                            The flatter the keyboard, the better - my hands "drop into" the keyboard instead of having to rest on some cheesy plastic molded wrist bar, where you still have to type OVER the keys instead of down onto them.

                            I also don't like clicky keyboards - too noisy. I prefer a keyboard I can beat the crap out of and toss 6 months later when I kill it (no, I've never succeeded in doing this - yet).

                            Oh, and the Mitsumi has a curved space to set my pens in above the function keys.

                            The pessimist says: "The glass is half empty."
                            The optimist says: "The glass is half full."
                            The engineer says: "I put half of my water in a redundant glass."



                            • #15
                              I was really starting to get interested in that technical "crap", IceStorm. Pretty informative stuff. On the other hand, I agree with you: in its current state, the GeForce's memory greatly limits the card's ability to take advantage of its otherwise outstanding components. That's the reason why I won't buy one now. There's no real point in getting a card like that if it's crippled, forcing me to buy the "Ultra Super Pro 2" version when it's released if I really want what I was supposed to be getting in the first place.

                              ------------------
                              Ace
                              "..so much for subtlety.."

                              System specs:
                              Gainward Ti4600
                              AMD Athlon XP2100+ (o.c. to 1845MHz)

