
Socket 754 - jheeeeesh!

  • Socket 754 - jheeeeesh!

    Nuts!
    Lawrence

  • #2
    They already pulled the article.

    You can't tell me you're surprised. Of course there are a large number of pins...or did you think memory controllers and crossbar links wouldn't add much to the pin count?
    Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



    • #3
      30% of them are for power


      Remember that the only way power gets into the die itself is through roughly 30% of those 754 pins, which connect to tiny wires inside the package that finally bring power to the chip.



      • #4
        30% of them are for power
        Yeah, that's normal. It also isn't very many, considering you need both VDD and ground.

        I'd like to know how many are unused, and how many are diagnostic.
        Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



        • #5
          Sounds normal

          You need many power pins so that power can be distributed evenly through the chip without drawing so much current through any single wire that it burns out and kills the chip.

          The power wires that connect the core to the outside of the package are very thin, so you have to connect many of them in parallel so that the combined resistance is low enough to let enough current in without overheating any single wire.
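
          To put some rough numbers on that, here is a minimal back-of-the-envelope sketch. The 754-pin total and the "roughly 30% for power" figure come from the posts above; the per-wire resistance and the total core current are made-up assumptions purely for illustration.

          ```python
          # Back-of-the-envelope: why so many of Socket 754's pins carry power.
          # The 30% figure comes from the thread; the electrical numbers below
          # are illustrative assumptions, not measured values.

          total_pins = 754
          power_pins = round(total_pins * 0.30)    # ~226 pins for power
          vdd_pins = power_pins // 2               # split roughly VDD / ground
          gnd_pins = power_pins - vdd_pins

          r_per_wire = 0.05      # ohms per thin package wire (assumption)
          total_current = 60.0   # amps drawn by the core (assumption)

          # Many thin wires in parallel: combined resistance falls as 1/N,
          # and each wire only has to carry a small share of the current.
          r_combined = r_per_wire / vdd_pins
          current_per_wire = total_current / vdd_pins
          heat_per_wire = current_per_wire ** 2 * r_per_wire   # I^2 * R

          print(f"{power_pins} power/ground pins ({vdd_pins} VDD + {gnd_pins} GND)")
          print(f"combined supply resistance: {r_combined * 1000:.3f} milliohms")
          print(f"per-wire current: {current_per_wire:.2f} A, "
                f"per-wire heating: {heat_per_wire * 1000:.1f} mW")
          ```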
          80% of people think I should be in a Mental Institute



          • #6
            Hammer, I presume? Has anyone seen the chip? Pins all the way across the package, even underneath the die. Unusual to look at, but it's the way forward.
            Meet Jasmine.
            flickr.com/photos/pace3000



            • #7
              I think Clawhammer has a small pinless area underneath the die, but Sledgehammer seems to have the whole base covered, with something in excess of 900 pins. I guess the additional HT buses are the main reason for the higher pin count. Any comments, Wombat?



              • #8
                Actually, HT is a low pin-count protocol; even the widest possible HT connection I'm aware of requires about 40 pins, so it's unlikely to account for the difference you're talking about. I'm scanning through this PDF now, but I certainly haven't read all of it yet. The document is targeted towards vendors wishing to supply the sockets.

                Looking through the PDF, the socket itself does not have pin holes directly underneath the die, so what pictures are you looking at, Pace and Novdid?

                The power wires that connect the core to the outside of the package are very thin, so you have to connect many of them in parallel so that the combined resistance is low enough to let enough current in without overheating any single wire.
                Yes, they have to supply a significant amount of current, but the large number of power pins is more about controlling power plane degradation than about the overall current needs of the chip. The RLC characteristics of the circuits can cause some serious signal and power degradation as your distance from a clean power source increases. Think of it as a town having a thousand faucets around the neighborhood, rather than everyone trying to drink from the same well.
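
                Here is a minimal sketch of that point. Beyond raw current capacity, each pin path has some parasitic inductance, and paralleling many pins shrinks the L·di/dt droop the die sees during a fast load step. The inductance and current-slew numbers below are illustrative assumptions, not Socket 754 specifications.

                ```python
                # Sketch of the "many faucets" point: N power pins in parallel cut
                # the effective supply inductance to roughly L_pin / N, which shrinks
                # the voltage droop V = L * di/dt when current demand jumps.
                # All numbers are illustrative assumptions, not Socket 754 specs.

                l_per_pin = 5e-9    # ~5 nH parasitic inductance per pin path (assumption)
                di_dt = 50e6        # 50 A/us current slew during a load step (assumption)

                for n_pins in (1, 10, 100, 226):      # 226 ~= 30% of 754
                    l_effective = l_per_pin / n_pins  # inductances in parallel
                    droop = l_effective * di_dt       # V = L * di/dt
                    print(f"{n_pins:4d} power pins -> L = {l_effective * 1e9:6.3f} nH, "
                          f"droop = {droop * 1000:7.2f} mV")
                ```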
                Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



                • #9
                  Originally posted by Wombat
                  They already pulled the article.

                  You can't tell me you're surprised. Of course there are a large number of pins...or did you think memory controllers and crossbar links wouldn't add much to the pin count?
                  What exactly is a crossbar link and what does it do?



                  • #10
                    Read this, Wombat. It's something I saw at Anandtech a couple of months ago.



                    It says that the additional pins over Clawhammer are for two more HT buses and the dual-channel memory controller, as opposed to the single-channel controller in Clawhammer.



                    • #11
                      Ah, cool. Thanks for that link.

                      Loosely, crossbars are the ways that CPUs in large multi-processor setups talk to each other, usually through bus uplink chips that are designed to handle that kind of traffic. Depending on how you define the term, Sledges might actually be considered "crossbar-less," since they only talk through other Sledges.

                      AMD released a document some months ago that had illustrations of how multi-CPU setups would be accomplished with Sledges. The biggest diagram they had was for an 8-way box. I don't have a link to the diagram, but IIRC, 4 of the processors couldn't even link to anything except other processors, and the other 4 only had one spare link. Also, with the bridges on the die with the processor, AMD hasn't said anything about redundancy or hot maintenance. Losing just one processor has the potential to be disastrous for the whole system. If AMD really had good ways of avoiding such problems with their design, they probably would have been boasting about them for a long time already. The silence is worrisome.
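
                      To make the "one failure can take out the box" worry concrete, here is a toy connectivity check. The topology below is a hypothetical 8-node point-to-point layout invented just for illustration (it is not AMD's actual 8-way diagram, which I don't have a link to); the point is only that when CPUs reach each other solely through other CPUs, a single dead node can strand part of the system.

                      ```python
                      # Toy 8-way point-to-point topology (hypothetical, NOT AMD's real
                      # diagram): a ring of four "inner" CPUs that only talk to other
                      # CPUs, with four "outer" CPUs each hanging off one inner CPU.
                      # A breadth-first search checks whether the survivors can still
                      # all reach each other after one CPU dies.
                      from collections import deque

                      links = {                      # adjacency list (made-up topology)
                          0: [1, 3, 4], 1: [0, 2, 5],
                          2: [1, 3, 6], 3: [0, 2, 7],
                          4: [0], 5: [1], 6: [2], 7: [3],
                      }

                      def reachable(start, dead):
                          seen, queue = {start}, deque([start])
                          while queue:
                              for peer in links[queue.popleft()]:
                                  if peer != dead and peer not in seen:
                                      seen.add(peer)
                                      queue.append(peer)
                          return seen

                      for dead in links:
                          survivors = [n for n in links if n != dead]
                          ok = len(reachable(survivors[0], dead)) == len(survivors)
                          print(f"CPU {dead} fails -> "
                                f"{'still connected' if ok else 'PARTITIONED'}")
                      ```

                      With this layout, losing any of the four inner CPUs strands at least one other processor, which is exactly the kind of scenario that would need the redundancy story AMD has been quiet about.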
                      Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



                      • #12
                        Originally posted by Wombat
                        AMD released a document some months ago that had illustrations of how multi-CPU setups would be accomplished with Sledges. The biggest diagram they had was for an 8-way box. I don't have a link to the diagram, but IIRC, 4 of the processors couldn't even link to anything except other processors, and the other 4 only had one spare link. Also, with the bridges on the die with the processor, AMD hasn't said anything about redundancy or hot maintenance. Losing just one processor has the potential to be disastrous for the whole system. If AMD really had good ways of avoiding such problems with their design, they probably would have been boasting about them for a long time already. The silence is worrisome.
                        How would you even approach this problem? Is it possible? I can see that there would be some serious issues in keeping it operating if even one CPU fails. Stability and reliability are, after all, the most important qualities of such 4/8-way setups.



                        • #13
                          When you start worrying about processor failure (which is exceptionally rare in any case), you really need to be looking into mainframe country.

                          The CPU redundancy in mainframes is simply amazing. It isn't just a bolted-on system; the CPUs are designed from the ground up to be fault tolerant. Whenever a CPU detects that it has failed (or is failing), it moves its state to a hot spare (or the spare loads it from a checkpoint), and the spare continues processing in its place.

                          The mainframe then calls home and reports the failure, and some time later a replacement CPU arrives and the bewildered administrators (WTF is this guy doing at our door with this big-ass CPU?) simply hot-swap the dead CPU for the new one.

                          This is what I have heard anyway, since I haven't played with mainframes. It sounds right, though.

                          But even if my blurb about mainframes is not 100% accurate, without actually being designed from the ground up to be fault tolerant (which requires extra work and a performance hit to the CPU), I can't see any Intel or AMD x86 multiprocessor system recovering cleanly from a processor failure.
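
                          For what it's worth, here is a toy sketch of that checkpoint-and-hot-spare flow. Real mainframes do this in hardware and firmware; every name here is made up purely to illustrate the idea, not to describe any actual product.

                          ```python
                          # Toy checkpoint / hot-spare failover sketch. Everything here is
                          # invented for illustration; real fault-tolerant CPUs do this in
                          # hardware and firmware.

                          class CPU:
                              def __init__(self, name):
                                  self.name = name
                                  self.healthy = True
                                  self.state = {}              # pretend architectural state

                              def checkpoint(self):
                                  return dict(self.state)      # snapshot the state

                              def load_checkpoint(self, snapshot):
                                  self.state = dict(snapshot)

                          def run_with_failover(active, spare, jobs):
                              last_good = active.checkpoint()
                              for job in jobs:
                                  if not active.healthy:
                                      # Fault detected: move the saved state to the hot
                                      # spare and keep going instead of taking the box down.
                                      print(f"{active.name} failed; failing over to {spare.name}")
                                      spare.load_checkpoint(last_good)
                                      active, spare = spare, None
                                  active.state["last_job"] = job
                                  last_good = active.checkpoint()   # checkpoint each job
                                  print(f"{active.name} finished {job}")

                          main_cpu, hot_spare = CPU("cpu0"), CPU("spare")

                          def job_stream():
                              for i, job in enumerate(["job-1", "job-2", "job-3", "job-4"]):
                                  if i == 2:
                                      main_cpu.healthy = False  # simulate a fault before job-3
                                  yield job

                          run_with_failover(main_cpu, hot_spare, job_stream())
                          ```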
                          80% of people think I should be in a Mental Institute



                          • #14
                            Yep, that's pretty much right. It's one of the reasons big boxes still stick with SCSI as well: the drives are so easily hot-swapped.

                            CPU failure is fairly common in large systems. The CPU may not literally blow up, but it may have to shut down due to overheating; things that *cause* CPUs to shut down or lock up are fairly common.
                            Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



                            • #15
                              Gee,
                              I wonder if they have the Compaq view on heatsinks.
                              If there's artificial intelligence, there's bound to be some artificial stupidity.

                              Jeremy Clarkson "806 brake horsepower..and that on that limp wrist faerie liquid the Americans call petrol, if you run it on the more explosive jungle juice we have in Europe you'd be getting 850 brake horsepower..."

