GigEthernet : What adapters and what switches do you use ?

  • GigEthernet : What adapters and what switches do you use ?

    And do they get anywhere near 1000 Mbps ?

    I have used a few Netgear GA302Ts and a Netgear GS104 (blue metal switch), which was very fast, but the GA302Ts have all died now, the switch was replaced by a GS608 (white plastic one), and I recently bought some Netgear GA311s.

    I am finding the GA311s are slow (200 Mbps on average), and am wondering whether it's the switch or the cards that are slow...

    The old GS104 was a business-class switch and the GS608 is a home-orientated one; even though the tech specs are almost identical, I cannot help but feel the business-orientated switch was faster and more reliable.

    I have an onboard Marvell GigE adapter and an onboard Realtek GigE on my two Intel boards, and they both seem faster than the GA311s, even though the GA311 is a more recent version of the same onboard chip...

    What are your experiences with GigE, and what setups were used ?
    PC-1 Fractal Design Arc Mini R2, 3800X, Asus B450M-PRO mATX, 2x8GB B-die@3800C16, AMD Vega64, Seasonic 850W Gold, Black Ice Nemesis/Laing DDC/EKWB 240 Loop (VRM>CPU>GPU), Noctua Fans.
    Nas : i3/itx/2x4GB/8x4TB BTRFS/Raid6 (7 + Hotspare) Xpenology
    +++ : FSP Nano 800VA (Pi's+switch) + 1600VA (PC-1+Nas)

  • #2
    ThinkPad T60, some Intel chip, no idea what the institute is using. Have seen 50 MB/s, but never really cared. Plenty of speed for me.

    mfg
    wulfman
    "Perhaps they communicate by changing colour? Like those sea creatures .."
    "Lobsters?"
    "Really? I didn't know they did that."
    "Oh yes, red means help!"



    • #3
      I did some tests on Gb at my previous job. There are several factors that go into your final speed.

      First, you have to account for packet overhead, so your actual data throughput is always lower than the line rate. I don't know the exact numbers off-hand, but the overhead partly depends on your packet size, which is number two.

      Second, you have your packet size. The fewer packets you have to send, the less overhead you pay and the more data you can push. Switches also have a limit on the number of packets they can route in a given timeframe, which is another reason jumbo packets are so helpful. Even on a consumer switch with a relatively low packet rate, jumbo packets get you much closer to your theoretical max.
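
      To put rough numbers on points one and two, here is a back-of-the-envelope sketch in Python (the framing and header sizes are the standard Ethernet/IP/TCP figures; the 1000 Mbps line rate is the ideal case):

      Code:
      # Best-case TCP goodput on GbE for a given MTU.
      # The MTU covers the IP packet; Ethernet adds 14B header + 4B FCS,
      # plus 7B preamble + 1B SFD + 12B inter-frame gap on the wire.
      ETH_FRAMING = 14 + 4 + 7 + 1 + 12   # 38 bytes per frame, never payload
      IP_TCP = 20 + 20                    # headers carried inside the MTU

      def goodput_mbps(mtu, line_rate_mbps=1000):
          payload = mtu - IP_TCP          # useful bytes per frame
          on_wire = mtu + ETH_FRAMING     # total bytes per frame on the wire
          return line_rate_mbps * payload / on_wire

      for mtu in (1500, 9000):
          print(f"MTU {mtu}: ~{goodput_mbps(mtu):.0f} Mbps best case")
      # MTU 1500 -> ~949 Mbps, MTU 9000 -> ~991 Mbps. Header overhead alone
      # cannot explain a 200 Mbps result; the bigger win from jumbo frames
      # is sending 6x fewer packets for the switch and CPU to handle.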

      Third is the speed of the computers at either end. A laptop will almost always be slower on the network than a desktop or a server: laptops have slower hard drives and often a weaker NIC (to conserve power). The hard drive matters in general, too. It doesn't matter how fast your network is if your data is fragmented across a slow laptop drive.

      Fourth, offloading and resource usage. A good TCP/IP-offloading NIC will be faster than a cheap one. Simply put, doing most of the TCP/IP work on the NIC, in dedicated hardware with proper drivers, is much more efficient than doing it on generic hardware. Granted, offloading can sometimes cause problems, but when it works, it works fast and well.

      And fifth, there is of course your switch. Just because a switch supports Gb does not mean you will reach the theoretical Gb limit; the better the switch, the closer you will get. One thing that makes a huge difference, especially with consumer switches, is buffer memory (a cache): it cuts down on dropped packets and so keeps the transfer rate up. Also look for a full-speed, non-blocking backplane (an 8-port Gb switch should have a 16 Gbps backplane: 1 Gbps x 2 directions [one for Tx, one for Rx] x 8 ports = 16 Gbps) and a high forwarding rate, measured in Mpps (millions of packets per second). A switch with a decent buffer, a full-speed backplane and a high packet rate is your best bet. Obviously, the better the switch, the more it will cost.
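
      Spelling out that backplane and Mpps arithmetic (the per-port figure below is the standard wire-speed number for minimum-size 64-byte frames):

      Code:
      # Non-blocking figures for an 8-port GbE switch.
      ports = 8
      backplane_gbps = ports * 2 * 1                 # 1 Gbps each way (Tx + Rx) per port
      # Smallest frame on the wire: 64B frame + 8B preamble/SFD + 12B gap.
      min_frame_bits = (64 + 8 + 12) * 8
      mpps_per_port = 1e9 / min_frame_bits / 1e6     # ~1.488 Mpps at 1 Gbps
      print(backplane_gbps, "Gbps backplane,",
            round(ports * mpps_per_port, 1), "Mpps forwarding rate")
      # -> 16 Gbps backplane and ~11.9 Mpps for a truly non-blocking switch.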


      All that being said, your best bet is a low-end business-grade switch and a good NIC with offloading capabilities if you want to maximize your speed without dropping a mint. I know others on this board have highly recommended the Linksys business switches as a good mix of price and speed.



      As for NICs, if you want speed you want an Intel, Broadcom or Marvell NIC chipset, in that order. Realtek and other generics are notoriously poor. Some generic-looking NICs carry good Broadcom or Marvell chips, so always check the chip and you can save some cash. 3Com is good too, but I have never seen a Gb 3Com NIC. You also want a PCI Express NIC if your board supports it, since plain PCI cannot sustain Gb throughput. And make sure the NIC supports 9K jumbo packets.
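
      The PCI point is easy to see from the bus arithmetic (theoretical numbers; real shared-bus throughput is lower still):

      Code:
      # Classic 32-bit / 33 MHz PCI is a single shared bus.
      pci_mb_s = 32 / 8 * 33.33                      # ~133 MB/s theoretical, shared
      gbe_one_way_mb_s = 1000 / 8                    # 125 MB/s for one direction
      gbe_both_ways_mb_s = 2 * gbe_one_way_mb_s      # 250 MB/s full duplex
      print(f"PCI ~{pci_mb_s:.0f} MB/s vs GbE {gbe_one_way_mb_s:.0f}-{gbe_both_ways_mb_s:.0f} MB/s")
      # One direction of GbE alone nearly saturates the whole bus, before the
      # disk controller and everything else sharing it get their turn.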

      Once everything is hooked up, update your drivers to the newest version, switch both computers to 9K Jumbo Packet mode, and off you go.
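
      If you want to confirm jumbo frames actually survive the whole path, one quick probe is to send an oversized datagram with the don't-fragment flag set. A minimal sketch, assuming a Linux sender and a placeholder target address (the two option values are the Linux constants):

      Code:
      # Probe whether an ~8 KB datagram can leave unfragmented (Linux only).
      import socket

      IP_MTU_DISCOVER = 10   # Linux setsockopt numbers
      IP_PMTUDISC_DO = 2     # set DF: fail instead of fragmenting

      s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
      try:
          s.sendto(b"\x00" * 8000, ("192.168.1.10", 9))   # 9 = discard port
          print("8000-byte datagram left without fragmentation")
      except OSError as exc:
          print("path MTU too small for jumbo frames:", exc)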

      Jammrock
      Last edited by Jammrock; 3 January 2008, 08:58.
      “Inside every sane person there’s a madman struggling to get out”
      –The Light Fantastic, Terry Pratchett



      • #4
        Forgot about cabling. You need CAT 5e or CAT 6 (recommended) to reach true Gb speeds. Normal CAT5 won't cut it.
        “Inside every sane person there’s a madman struggling to get out”
        –The Light Fantastic, Terry Pratchett



        • #5
          I have CAT 5e cables, I checked, but haven't set jumbo frames yet.

          Will try now and see if there is a difference...cheers for all the info.

          I have had my doubts about Realtek adapters for a while, but lots of boards come with them nowadays...
          PC-1 Fractal Design Arc Mini R2, 3800X, Asus B450M-PRO mATX, 2x8GB B-die@3800C16, AMD Vega64, Seasonic 850W Gold, Black Ice Nemesis/Laing DDC/EKWB 240 Loop (VRM>CPU>GPU), Noctua Fans.
          Nas : i3/itx/2x4GB/8x4TB BTRFS/Raid6 (7 + Hotspare) Xpenology
          +++ : FSP Nano 800VA (Pi's+switch) + 1600VA (PC-1+Nas)



          • #6
            Well, well....
            The Marvell integrated adapter on my 955X mobo (PCI) can do up to 9K jumbo frames, 4088 or 9016 bytes to be exact.
            The Realtek integrated on my 865 board (Realtek 8110) is exactly the same as the brand-new 8169 PCI card: 1-7K jumbo frames.
            Set them all to 4K: 4K, 4K and 4088.

            I can see the folders, but when I try to open a folder and list the files, Explorer locks the window... then it's End Task and wait for the desktop to come back.

            Have just ordered another GigE card, with a Marvell chip on it...

            I'm sure I had a PCIe network adapter on this mobo...? Will check.
            PC-1 Fractal Design Arc Mini R2, 3800X, Asus B450M-PRO mATX, 2x8GB B-die@3800C16, AMD Vega64, Seasonic 850W Gold, Black Ice Nemesis/Laing DDC/EKWB 240 Loop (VRM>CPU>GPU), Noctua Fans.
            Nas : i3/itx/2x4GB/8x4TB BTRFS/Raid6 (7 + Hotspare) Xpenology
            +++ : FSP Nano 800VA (Pi's+switch) + 1600VA (PC-1+Nas)



            • #7
              You might want to install Wireshark (formerly Ethereal) to see what your packet sizes really are.
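
              If you'd rather script it than stare at the GUI, the same check is a few lines with the scapy package (needs root/admin rights; a sketch, not a full capture setup):

              Code:
              # Print the on-the-wire size of the next 20 frames.
              from scapy.all import sniff

              def show(pkt):
                  print(len(pkt), "bytes:", pkt.summary())

              sniff(count=20, prn=show)
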
              Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



              • #8
                Gave up at about 6 am... should've kept an eye on the time...

                Tried again this afternoon, ran a single cable directly between the two onboard Marvell adapters, and managed to get jumbo frames working.
                No difference from standard frames: 18-22% of bandwidth used...

                Speed is roughly a Gig a minute.

                Have finally set everything back to non-jumbo frames, and ordered a PCIe x1 Intel card...
                I suspect that a speed of about 20 MB/s is good enough, and near enough the hard drive speed once the controller overhead (IDE and LAN) is factored in... it'll have to do.
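
                For reference, the conversions behind those figures (pure arithmetic; note the 18-22% that Task Manager reports also counts protocol overhead):

                Code:
                # "A gig a minute" and "20 MB/s" in link terms.
                for mb_s in (1024 / 60, 20):           # ~17 MB/s and 20 MB/s
                    mbps = mb_s * 8
                    print(f"{mb_s:.0f} MB/s = {mbps:.0f} Mbps = {mbps / 1000:.0%} of GbE")
                # -> roughly 14-16% of the line rate, the same ballpark.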

                I'll have a look at wireshark, could be interesting, thanks.
                PC-1 Fractal Design Arc Mini R2, 3800X, Asus B450M-PRO mATX, 2x8GB B-die@3800C16, AMD Vega64, Seasonic 850W Gold, Black Ice Nemesis/Laing DDC/EKWB 240 Loop (VRM>CPU>GPU), Noctua Fans.
                Nas : i3/itx/2x4GB/8x4TB BTRFS/Raid6 (7 + Hotspare) Xpenology
                +++ : FSP Nano 800VA (Pi's+switch) + 1600VA (PC-1+Nas)



                • #9
                  The onboard Marvell PCIe chips on my motherboards have managed 75 MB/s over my 3Com switch.

                  The Netgear GS105 switch that is now collecting dust is a POS that never got above 50 MB/s.
                  If there's artificial intelligence, there's bound to be some artificial stupidity.

                  Jeremy Clarkson "806 brake horsepower..and that on that limp wrist faerie liquid the Americans call petrol, if you run it on the more explosive jungle juice we have in Europe you'd be getting 850 brake horsepower..."



                  • #10
                    All my onboard chips hang off plain PCI only...

                    Are you sending/receiving to/from a RAID array ?

                    What would that setup be, from end to end ?
                    PC-1 Fractal Design Arc Mini R2, 3800X, Asus B450M-PRO mATX, 2x8GB B-die@3800C16, AMD Vega64, Seasonic 850W Gold, Black Ice Nemesis/Laing DDC/EKWB 240 Loop (VRM>CPU>GPU), Noctua Fans.
                    Nas : i3/itx/2x4GB/8x4TB BTRFS/Raid6 (7 + Hotspare) Xpenology
                    +++ : FSP Nano 800VA (Pi's+switch) + 1600VA (PC-1+Nas)



                    • #11
                      What kind of transmission are you doing? Windows networking? Try ftp or something, if so.
                      Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



                      • #12
                        Originally posted by Wombat View Post
                        What kind of transmission are you doing? Windows networking? Try ftp or something, if so.

                        Good point. FTP has far less overhead, while still getting TCP's error correction. For raw throughput it is the best of the easily available options.

                        There are some freeware network testers out there. Roadkil used to have one; they use a dummy load to test actual throughput.
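
                        Rolling a crude dummy-load tester yourself is only a few lines. A minimal sketch (Python 3.8+ for socket.create_server; run it with no arguments on one box as the sink, then point the other box at it; the port number is arbitrary):

                        Code:
                        # Minimal dummy-load throughput test over TCP.
                        import socket, sys, time

                        PORT = 5001
                        CHUNK = 64 * 1024
                        TOTAL = 256 * 1024 * 1024      # 256 MB of zeros

                        def server():                  # sink: discard everything
                            with socket.create_server(("", PORT)) as srv:
                                conn, _ = srv.accept()
                                with conn:
                                    while conn.recv(CHUNK):
                                        pass

                        def client(host):              # source: time a bulk send
                            buf = b"\x00" * CHUNK
                            with socket.create_connection((host, PORT)) as s:
                                start = time.time()
                                sent = 0
                                while sent < TOTAL:
                                    s.sendall(buf)
                                    sent += len(buf)
                                secs = time.time() - start
                            print(f"{sent / secs / 1e6:.1f} MB/s "
                                  f"({sent * 8 / secs / 1e6:.0f} Mbps)")

                        if __name__ == "__main__":
                            server() if len(sys.argv) == 1 else client(sys.argv[1])

                        Because the dummy data is generated in memory, the disks drop out of the equation entirely, which is exactly the point of a dummy load.
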
                        “Inside every sane person there’s a madman struggling to get out”
                        –The Light Fantastic, Terry Pratchett



                        • #13
                          Basically it's a HDD box.
                          I tried with an EPIA 10000, but the throughput was pretty low: 140 Mbps at 90% CPU usage.
                          I also had to use the PCI slot for the GigE card, so no more storage expansion there. No SATA either.

                          Then I tried the 3.2 GHz Celeron on an Epox EP-5PDAJ with its built-in Realtek, which was OK but also a tad slow.
                          Seeing as I have onboard Marvell on this machine, and I had an AMD X2 4200+ and an NF3 mobo with integrated Marvell, I tried that. It worked for 3 days with XP Pro, then crashed when I put it in place, and it kept on crashing even as I took it apart bit by bit.

                          So I've come back to USB storage (480 Mbps), added a 500 GB IDE drive, and am looking at adding 2 x 250 GB SATA drives, but I have to get the adapters for those.

                          All the external drives are powered by a 350W Enermax PSU, which is UPS'ed with an MGE 800VA UPS. So far that's 7 drives, and I plan to add at least 2 or 4 more in there somehow. At least there is no OS to crap out, and they are always on. I switch on my PC, and there they are.

                          Bloody networking... USB storage all the way.
                          PC-1 Fractal Design Arc Mini R2, 3800X, Asus B450M-PRO mATX, 2x8GB B-die@3800C16, AMD Vega64, Seasonic 850W Gold, Black Ice Nemesis/Laing DDC/EKWB 240 Loop (VRM>CPU>GPU), Noctua Fans.
                          Nas : i3/itx/2x4GB/8x4TB BTRFS/Raid6 (7 + Hotspare) Xpenology
                          +++ : FSP Nano 800VA (Pi's+switch) + 1600VA (PC-1+Nas)



                          • #14
                            eSATA is even better.
                            “Inside every sane person there’s a madman struggling to get out”
                            –The Light Fantastic, Terry Pratchett



                            • #15
                              Yeah, but there are no eSATA switches or hubs that I know of.

                              I can get as many drives as I want through one USB cable, and I won't be using more than one drive at once, so the bandwidth isn't really shared.

                              I should have run HDTach on the drives while they were on the network...

                              Gonna do a quick run on the 500 GB USB drive...
                              PC-1 Fractal Design Arc Mini R2, 3800X, Asus B450M-PRO mATX, 2x8GB B-die@3800C16, AMD Vega64, Seasonic 850W Gold, Black Ice Nemesis/Laing DDC/EKWB 240 Loop (VRM>CPU>GPU), Noctua Fans.
                              Nas : i3/itx/2x4GB/8x4TB BTRFS/Raid6 (7 + Hotspare) Xpenology
                              +++ : FSP Nano 800VA (Pi's+switch) + 1600VA (PC-1+Nas)

                              Comment
