0mz = FAST (but how do you overclock?)


  • #1
    Emerging technology news & insights | AI, Climate Change, BioTech, and more

    Too cool.
    Chuck
    秋音的爸爸

  • #2
    That is very cool! I like the idea that it is a better, smarter way of doing it, not just brute force. Too bad it will take so long to implement.
    AMD XP2100+, 512megs DDR333, ATI Radeon 8500, some other stuff.



    • #3
      Hmm... it might still have a clock governing the speed at which the individual elements operate, so overclocking may still work.



      • #4
        I found the article a little lightweight, and sometimes questionable in both facts and purpose. Asynchronous design *is* very cool, but it's a royal bitch to pull off in any kind of large-scale design. I'm surprised there wasn't more information from Sun, since last I heard they were the "closest" to having an async GP CPU. But please keep in mind that's like being the "closest" to having a power-efficient fusion reactor.
        Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



        • #5
          Originally posted by Wombat
          I found the article a little lightweight, and sometimes questionable in both facts and purpose. Asynchronous design *is* very cool, but it's a royal bitch to pull off in any kind of large-scale design. I'm surprised there wasn't more information from Sun, since last I heard they were the "closest" to having an async GP CPU. But please keep in mind that's like being the "closest" to having a power-efficient fusion reactor.
          Oh, what do you know, Whipper Snapper.
          Happy belated Birthday.
          Chuck
          秋音的爸爸



          • #6
            TdB

            But if there is no clock, then how can they make one async chip faster than another? Smaller die sizes?

            Seems like we NEED PR ratings after all.

            It is a shame that Pentium chip wasn't released; it would probably be faster than the Pentium II, and maybe even the slower Pentium IIIs.

            BTW, have you heard anything about async FSBs? My computer architecture teacher said it would be easier to implement than an async CPU, and that it would be a really good idea. Is this being researched as well? Would this require a new PCI standard?
            This sig is a shameless attempt to make my post look bigger.



            • #7
              I think async FSBs might actually be *less* likely. The current prevailing approach in async design is to have a handshaking signal that travels along with the data and determines when calculations are complete.
              On a PCB, that would mean both more traces (bad) and having to protect these new signals from noise in more effective ways.
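The request/acknowledge handshake described above can be sketched in software. Below is a minimal, illustrative model of a four-phase (return-to-zero) handshake, with Python threads standing in for the sender and receiver circuits; all names are made up for the sketch, and real hardware does this with wires and latches, not threads:

```python
import threading
import queue
import time

data_wire = queue.Queue(maxsize=1)  # models the data bundle between stages
req = threading.Event()             # sender's "request" wire
ack = threading.Event()             # receiver's "acknowledge" wire
received = []

def sender(values):
    for v in values:
        data_wire.put(v)     # 1. drive data onto the bundle
        req.set()            # 2. raise request
        ack.wait()           # 3. wait for acknowledge
        req.clear()          # 4. drop request (return to zero)
        while ack.is_set():  # 5. wait for acknowledge to drop
            time.sleep(0)

def receiver(n):
    for _ in range(n):
        req.wait()                        # wait for a request
        received.append(data_wire.get())  # latch the data
        ack.set()                         # raise acknowledge
        while req.is_set():               # wait for request to drop
            time.sleep(0)
        ack.clear()                       # drop acknowledge

rx = threading.Thread(target=receiver, args=(3,))
rx.start()
sender([1, 2, 3])
rx.join()
print(received)  # [1, 2, 3]
```

Note there is no clock anywhere: each transfer completes exactly when both sides have seen each other's signal, which is why every data path needs its own handshake wiring, and why running it across a noisy PCB is harder than keeping it on-die.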

              There's nothing that says the async Pentium was ever verified. Five years ago, the tools were even less capable than they are now.
              They probably still aren't good enough for anything over a few million transistors or so. That async Pentium would still be highly useful (and profitable) today in embedded or portable devices, so there must be other reasons it wasn't released.
              Gigabyte P35-DS3L with a Q6600, 2GB Kingston HyperX (after *3* bad pairs of Crucial Ballistix 1066), Galaxy 8800GT 512MB, SB X-Fi, some drives, and a Dell 2005fpw. Running WinXP.



              • #8
                I'd say it was squashed because it would be hard to market, and at that time Intel OWNED the PC processor market; they had no reason to rock the boat. Marketing a processor whose speed isn't easily quantified would be hard. People like to be able to one-up each other easily.

                just my 2 clock cycles of thought on the matter.
                AMD XP2100+, 512megs DDR333, ATI Radeon 8500, some other stuff.



                • #9
                  Well, it also might have cost $10,000 per chip to make.
                  Fine for research, useless as a product.
                  Chuck
                  秋音的爸爸
