G400 & Via 133a & Agp4x

  • G400 & Via 133a & Agp4x

    I know I am not the only one interested: FiringSquad just posted benchmarks beating out the BX motherboards for the first time, with the new 4-in-1 drivers v4.17 or something like that.
    My G400 MAX was one of the first ever made, and I was curious to see if I could hit AGP 4x (it helps in Unreal Tournament). This really pisses me off about my card: I pre-ordered the bastard and they didn't even include AGP 4x!
    damn bastards

    ------------------
    G400 on a G400
    What the hell are we doing in the middle of the desert?

  • #2
    Zyn,

    I don't know why all the hype around AGP 4X, especially with the motherboard you have. The memory your motherboard uses doesn't even have enough bandwidth to fully feed AGP 4X transfers. Another thing to remember is that your processor in a BX board running at AGP 2X or even AGP 1X will outperform the same processor in that motherboard at 4X.
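    Back-of-the-envelope, if you want the numbers (peak theoretical rates, ignoring protocol overhead, so take this as a rough sketch only):

        # Rough sketch: peak theoretical bandwidths, no protocol overhead.
        agp_base_clock = 66e6                 # Hz, AGP base clock
        agp_bus_bytes = 4                     # 32-bit AGP bus
        agp4x_peak = agp_base_clock * agp_bus_bytes * 4   # ~1.06 GB/s
        pc133_peak = 133e6 * 8                # 64-bit SDRAM bus, ~1.06 GB/s
        print("AGP 4X peak: %.0f MB/s" % (agp4x_peak / 1e6))
        print("PC133 peak:  %.0f MB/s" % (pc133_peak / 1e6))

    AGP 4X alone can swallow the entire PC133 peak, and the CPU is fighting for that same memory bus, so a 4X transfer can never actually be fed at full rate.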

    Rags

    • #3
      Apparently, Rags, this is no longer the case. I saw the same tests. It was using a GeFarse with some new drivers, and in UT a KX133/Athlon combo performs substantially better than the venerable Intel BS... (DOH!!) I mean BX chipset. Or the i820 with RDRAM, for that matter. High-end applications under NT, which really do use the bandwidth, also show a marked improvement. Simply put: the world is no longer safe for Intel zealots.

      Zyn, I hope you got some PC133 RAM. This page > http://www.gotapex.com/reviews/ep7kxa/ep7kxa_7.htm shows that the best improvements are seen with a memory bus speed of "+33". I don't know how to "force" AGP 4X, but you might just try enabling it in the BIOS, then use something like PCI List to try to enable it in Windows. However, you might be better off leaving "Side-Band Addressing" enabled instead; I don't think both can be enabled simultaneously. Just a thought, though. I have to wait about a week to try out my ideas, and I'll be sure to post the results when I've finished my testing.
      :: how jedi are you? :: http://www.gaijindesign.com/lawriemalen/jedi

      • #4
        Hi CannyOne,

        I am anything but an Intel zealot. I was just pointing out the facts:
        #1 AGP 4X CANNOT be utilized fully, in all its glory, on a PC133 system, whether it is a KX133, a 133A, or an i820 using the MTH.

        #2 When using the same processor (which eliminates the Athlon, smart guy), the BX chipset will perform better.

        I will add that when comparing 800 MHz and above, clock for clock, the Athlon does not perform as well as a P3 Coppermine... even with the KX133 chipset.


        Note how the P3 800 on a "BS" chipset beats out even a mighty Athlon 900 on the super-great KX133 chipset. I am sure that if the Athlon were using on-die cache, the numbers would swing much further in the Athlon's favor. But that's not the case. Also, if you get a chance, compare some memory benchmark numbers between a KX133 chipset and a "BS" chipset; you will notice that the KX133 still underperforms clock for clock.

        If you want some more numbers, ask me.

        Rags


        • #5
          I wonder how much of this is due to SSE optimizations rather than chipset irregularities. We seem to be comparing apples and oranges here.

          Even then, the differences are trivial. I don't run benchmarks all day; I use apps (and the occasional Q3A fragfest). Graphics performance on the 133A is stellar, and memory bandwidth tests from Sandra show numbers my old BX never had a chance of hitting. And I can do it without having to push my PCI and AGP buses to the brink of self-destruction.

          Oh...Hi Rags! How's it goin?

          • #6
            Hi EchoWars,

            I am doing well.

            With my motherboard (a BE6), my PCI bus runs within spec at 133 FSB, and as far as AGP is concerned, I don't really have any problems with a G400. SiSoft Sandra shows my memory performance at 133 to be better than any VIA chipset at 133. As for benchmarks, I couldn't care less about silly benchmarks either. But you mentioned SSE being the difference: if that were so, then how come a VIA running at 133 FSB barely nudges the BX running at 133? And remember that the Athlons are running with 3DNow!, and that when comparing P3 to P3, the i820 whoops the VIA chipset as well.

            Another bonus is that my system is rock solid, everything I plug into it works, and I don't have to go around downloading new patches for my motherboard every month in hopes of fixes or speed-ups.

            Rags

            • #7
              If you are comparing the memory performance of the BX to the Apollo Pro, or even the Apollo Pro 133, then you will see the VIA chipset sucking eggs. Compare the 133A to the BX at the same speed, and the difference is a mere few percent (but the BX is still on top). Noticeable in benchmarks only. VIA is slowly but surely pulling their head out.
              I mentioned the Athlon because I know Intel has teams of MIT grads writing optimized compilers for MMX and SSE and distributing them to software developers, while AMD does not (that I have heard of, anyway). The potential for the Athlon is there; I just don't know if it is the MB, the chipset, or so-so optimizations by software developers.

              • #8
                Hi E.W.,

                I agree: if software out there were optimized for the Athlon in particular, not just for 3DNow!, then I am positive the Athlon would perform better. There are many more things that can be done in software to optimize for an Athlon besides just 3DNow!, and I haven't seen anything besides TGL that takes advantage of them.

                I am also sure that if the Athlon had better motherboard support, it would perform better. On-die cache couldn't hurt, either... in fact, I believe the current Coppermines will be outperformed once the T-Bird is finally released.


                Rags

                • #9
                  Getting back to the first post, I just so happen to have an AGP 4X-capable video card of the same generation as the G400 *and* a motherboard that supports the standard. This gave me the opportunity to test a BX board at AGP 1X and a VIA Apollo Pro 133A at AGP 4X. I also ran a battery of tests on the BX setup before I installed the VIA board, including UTBench, an Unreal Tournament timedemo.

                  Motherboards: AOpen AX6BC Pro II (AGP 1x) and Asus P3V4X (VIA 4in1 Ver. 4.17, AGP 4x)

                  All other system components were identical:

                  PIII 600E @ 800 MHz
                  Guillemot TNT2 Ultra (3.76 Detonators)
                  256 MB Siemens CAS2 PC100 RAM
                  WD 8.4 GB Hard Drive
                  Turtle Beach Montego II (Vortex 2)
                  3Com PCI NIC
                  Windows 98SE
                  Unreal Tournament V. 4.05B, 800x600, 16-bit, Medium/Medium

                  The BX setup was unstable with AGP 2X enabled and a 133 MHz front side bus, so I forced AGP 1X, which corrected the problem. The VIA board supports the 4X standard and lets the AGP slot run in spec.

                  BX Chipset, AGP 1X: Min 25.84 Max 57.89 Avg 40.10 fps

                  Via Apollo Pro 133A Chipset, AGP 4X: Min 21.71 Max 50.62 Avg 33.68 fps
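
                  For what it's worth, that puts the BX ahead by roughly 19% on the averages. A quick sketch of the arithmetic, using the numbers above:

                      # Quick sketch: percent lead of the BX run over the VIA run.
                      bx  = {"min": 25.84, "max": 57.89, "avg": 40.10}
                      via = {"min": 21.71, "max": 50.62, "avg": 33.68}
                      for k in ("min", "max", "avg"):
                          lead = (bx[k] - via[k]) / via[k] * 100
                          print("%s: BX ahead by %.1f%%" % (k, lead))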

                  If AGP 4X actually beats AGP 2X in Unreal Tournament, then maybe the solution is to run your board at AGP 1X, because my AGP 1X numbers are a fair bit higher than my AGP 4X numbers.

                  Paul
                  paulcs@flashcom.net

                  [This message has been edited by paulcs (edited 11 March 2000).]

                  • #10
                    640x480x16bpp and 800x600x16bpp seem to be pretty low resolutions/color depths for stress testing and benchmarking AGP transfer rates...

                    • #11
                      Ashley,

                      I wasn't trying to show differences in AGP speed; I was responding to a blind comment... but anyway, here are the 1024x768 @ 32-bit numbers, and as you can see, the differences are even more noticeable.



                      Rags

                      • #12
                        I was referring to the original post and the effects of AGP 4X, the new VIA chipset, and Unreal Tournament. The differences in frame rates from 800x600 to 1024x768 in UT are marginal at best. I could have posted my BX/AGP 1X scores at a higher resolution and they still would have soundly beaten the scores my VIA board was producing at 800x600.

                        Maybe at 1600x1200 the situation would be different. But I doubt the game would be playable at that resolution.

                        I just don't think AGP 4X makes any difference at all at this point. Not with games at least.

                        Paul
                        paulcs@flashcom.net

                        • #13
                          Guys: that was more in the way of a comment, not a criticism. At 800x600x16bpp, even with triple buffering, you'd be using less than 4 MB of local memory, leaving quite a bit left over for texturing. Disabling AGP altogether might very well yield better benchmark scores than 4X, 2X, or 1X.
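
                          The rough numbers behind that, assuming triple-buffered color plus one Z buffer at the same depth (a sketch that ignores alignment padding):

                              # Sketch: local video memory for color buffers + Z, no padding.
                              def local_mem_mb(w, h, bytes_per_pixel, color_buffers=3):
                                  color = w * h * bytes_per_pixel * color_buffers
                                  zbuf = w * h * bytes_per_pixel   # assume Z matches color depth
                                  return (color + zbuf) / (1024.0 * 1024.0)

                              print("800x600x16:   %.1f MB" % local_mem_mb(800, 600, 2))
                              print("1600x1200x32: %.1f MB" % local_mem_mb(1600, 1200, 4))

                          That comes to about 3.7 MB at 800x600x16, and around 29 MB at 1600x1200x32, which is roughly where a 32 MB card would have to start texturing over AGP.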

                          • #14
                            No offense was taken.

                            Ashley, do you know how far we would have to push the board for AGP 4x to become a factor? As I mentioned, I saw no positive effects at 1024x768. Are we talking about 1600x1200, 32-bit?

                            Paul
                            paulcs@flashcom.net

                            • #15
                              "how come a VIA running at 133 FSB barely nudges the BX running at 133?"
                              Maybe because the AGP bus on the BX is overclocked?
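
                              Rough bus math, assuming the usual BX dividers (1/4 for PCI, and only 1/1 or 2/3 for AGP) at 133 FSB:

                                  # Sketch: BX bus clocks at 133 MHz FSB with typical dividers.
                                  fsb = 133.0
                                  print("PCI: %.2f MHz (spec: 33)" % (fsb / 4))      # within spec
                                  print("AGP: %.2f MHz (spec: 66)" % (fsb * 2 / 3))  # well over spec

                              With no 1/2 AGP divider on the BX, the AGP bus runs near 89 MHz at 133 FSB whether you like it or not.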
