
View Full Version : Polish Parhelia confirmation



mabraham
22nd April 2002, 08:49
http://www.enter.pl/technologie/archiwum/archiwum.asp?id=1236

Translates to something like:
"The news about the coming new chip from Matrox is not fiction. We can't say any more at this time. The chip is better than what you know from previous internet information. A preview of Parhelia is coming in issue 6/2002 of Enter magazine, and 6/2002 is the release date of the chip."

Evildead666
22nd April 2002, 09:02
MWAHAHAHAHAHAHAAAAAAAAAAAAAAAAAAAAAAAAAAA!!!! (big Evil Laugh)

That sounds like some good 'safe' info,

Announced in May(mid to late) and on the shelves by June.

Sounds real good to me.

Just have to get the prices now....:D

Thankee mabraham

thop
22nd April 2002, 09:49
so after all they will announce it on ant's birthday! :D

if you look closer at the last two sentences there is more information in there than it seems (i happen to know polish).

"but the sensational parameters, of which we informed you a while ago, prove not very [low/not a lot] sensational [=fictional here] in combination with reality, which exceeds everything we have seen so far.
A detailed description [not preview, like said above] of Parhelia can be found in issue 06/2002 of ENTER, which [the magazine] will be in stores the same day the new Matrox GPU officially gets premiered"

so it really has to be some freakin' new technology! another thing is issue 06/2002, which will be out somewhere in the middle of May (i guess, couldn't find any info on that). their issue numbers don't match the months (like most magazines). 05/2002 is already out. so another hint for ant's birthday :D
and "premiered" looks more like actual benchmarks or something like that rather than just a specs announcement.

so what might this crazy new technology be? what can be so good that it makes anand smile? i doubt the parhelia is just fast. maybe it is 64bit rendering? will this be useful for the gamer? for the pros i guess yes.
it is only getting better :)

guillenv
22nd April 2002, 09:55
posted April 22, 2002 12:08 M
I don't think that date is true unless he is referring to the date that he will publish something in his magazine.

I've seen the leaks coming from every corner of the world and I have to say, don't believe everything some of these sites are publishing because 1 item in particular is not true.

That just confirms what I told you folks when all these leaks started

------------------
Haig
Matrox Graphics
Technical Support Manager

thop
22nd April 2002, 09:58
actually haig could be right, it seems like these guys also try hard to sell their magazine :D
who knows.
but if this information is true, they will probably never see an NDA again :)

Wombat
22nd April 2002, 10:06
actually haig could be right

LOL! Gee, I wonder why Haig would be qualified to think he knows what is going on! :)

thop
22nd April 2002, 10:10
of course he does know whats going on :)
what i mean is that even if some sites post "facts", he will very likely say they are not true (for obvious reasons), even if they are true, until matrox officially presents the card.

Greebe
22nd April 2002, 10:11
While I couldn't care less what makes Anand smile, we've been trying to tell you guys that it's sweeter than sweet for quite some time now.

Yes new technology, yes fast, but there is a whole lot more... but 64 bit rendering, now come on, how many cards do you know that even support that?... even on the Pro scale not many.

BTW, gamers would never see any advantage to 64 bit color depth anywho.

Haig is 100% correct... it's just a move to have more page hits/whatnot ($$)

TdB
22nd April 2002, 10:14
so what might this crazy new technology be? what can be so good that it makes anand smile? i doubt the parhelia is just fast. maybe it is 64bit rendering? will this be useful for the gamer? for the pros i guess yes.

NOW, im curious!

it is probably something other than displacement mapping, and matrox has a reputation for doing wacky innovative stuff like headcasting and dualhead. i wonder what it is this time (and i bet anand wouldn't smile over something silly like headcasting, i bet it is something cool). :D

Michel
22nd April 2002, 10:34
i know what makes anand smile:

The followup of headcasting: Bodycasting :)

I wonder if the announce and in store same day is true..
You'll get a lot of leaks from the stores before the announcement.

anyway, I think I'll mail my creditcardnumber to the Matrox Sales department to be sure i'm in the first batch....

mmp121
22nd April 2002, 10:40
First came DualHead
Second came HeadCasting

Put em together and what have you got?

DualHeadCasting :D

Gotta Love Matrox for it!

impact
22nd April 2002, 10:50
Originally posted by Michel
i know what makes anand smile:

The followup of headcasting: Bodycasting :)

I wonder if the announce and in store same day is true..
You'll get a lot of leaks from the stores before the announcement.

anyway, I think I'll mail my creditcardnumber to the Matrox Sales department to be sure i'm in the first batch....

What? The rumoured breastcasting and arsecasting has finally made it to the feature set of the latest matrox' offering? I sure hope RSN will be present, too.

Amiga Blitter
22nd April 2002, 11:25
"A third capable manufacturer returns to the scene? Since Matrox has never really been gone or tried their hand at being competitive in 3D, and Bitboys has never been to the scene in the first place, it brings one company to mind. A certain gfx company that has a joint venture with one Taiwanese chipset maker, and a rumored DX9 part thats codenamed a certain South American country. "


Maybe it's the S3 Columbia
:(

Greebe
22nd April 2002, 11:36
Clueless sites do vex us :rolleyes:

Wombat
22nd April 2002, 11:49
AFAIK, Haig hasn't lied yet. I wouldn't expect him to either. Just a lot of "can neither confirm nor deny."

Jammrock
22nd April 2002, 12:00
If Bodycasting = 4xRSN, I'm all for it :)

Jammrock

az
22nd April 2002, 12:45
Haig said "1 item in particular is not true", so this basically means the rest is true - so I think it is NOT true that the card will be introduced when the site claims, BUT the preview, review, whatever will indeed appear in that magazine at that time :)

AZ

Indiana
22nd April 2002, 12:55
Originally posted by Greebe
Haig is 100% correct... it's just a move to have more page hits/whatnot ($$)

Yes, and as we all know Ant has been doing this as well not so long ago....;)


(...taking cover... no, please don't beat me, it's a joke....:D:D)

impact
22nd April 2002, 13:27
It's all one big conspiracy to get more ads displayed... right...

jwb
22nd April 2002, 13:50
Originally posted by mmp121
First came DualHead
Second came HeadCasting

Put em together and what have you got?

DualHeadCasting :D

Gotta Love Matrox for it!

Hyp-X
22nd April 2002, 13:51
Hmm, why is this a "confirmation"?

It seems to be an exact translation of a news item posted here at www.murc.ws ...

jwb
22nd April 2002, 13:56
Arrrghh... I guess I waited too long to post the answer...

In response to the previous quote I stated...

The logical answer is not Dual-Headcasting but... DUAL-CORE

What do all Parheliae have in common...

Yes, left and right of the Sun (Bright Shader) there will be two Pixelmeisters... :D

Or in other words: As Rampage would have been, only 3 times more powerful.. or what Spectre would be now....

Be patient... I always told you so ;)

Wombat
22nd April 2002, 14:03
Why would you <I>start</I> with a dual core? If you need 2 processors to start with, then your design is in trouble.

jwb
22nd April 2002, 14:17
Why do you think the design is flawed...

Why not share the grunt work?...

Some advantages:
Easier to design
Easier to handle
Modular architecture (1, 2, 4 chips...)

The same as what two CPUs (in SMP) do, or what dual-channel DDR or RDRAM buses do, or what dual heads do, or what two humans do err.... ähmm, wrong thread....

Greebe
22nd April 2002, 14:21
... expensive to implement and a foolish design strategy.

Besides we already know what Parhelia is (a few of us here like Wombat :)

Wombat
22nd April 2002, 14:26
Speaking from personal experience, having two processors work together is much, much harder to design.

It's more costly to manufacture, more costly to test, lowers yield, makes drivers much more difficult to write....

Besides, if you're going to go dual-core, you design fast single-core with dual capabilities designed inside. That way you can ship the first design (it had better be fast), and you have the time to get the multi-chip solution out later, as a market boost. Of course, it's usually better to just port & shrink the single-chip design instead.

Also, graphics is one of those things that doesn't parallelize very well.

Also, comparing a dual-<B>processor</B> to dual-channel <B>memory</B> is ludicrous.

jwb
22nd April 2002, 14:26
It may be a bit more expensive at the start... it will pay off in the end.

BTW. If you know that Wombat knows that who knows it anyway...

He didn't deny it. He just said that it may not be wise to start with...

Wombat
22nd April 2002, 15:20
No, listen, it WON'T pay in the end. It's just the wrong way to do this kind of thing. It's not more expensive "to start," it's more expensive all-around, and performance would suck.

jwb, both Greebe and I are under NDAs.

Jon P. Inghram
22nd April 2002, 15:34
Perfect gift for the Betelgeuseian member of the family that's always hard to please!


Originally posted by mmp121
First came DualHead
Second came HeadCasting

Put em together and what have you got?

DualHeadCasting :D

Gotta Love Matrox for it!

superfly
22nd April 2002, 15:55
Originally posted by Wombat
No, listen, it WON'T pay in the end. It's just the wrong way to do this kind of thing. It's not more expensive "to start," it's more expensive all-around, and performance would suck.

jwb, both Greebe and I are under NDAs.


While i agree with you that it's always more difficult to implement multichip solutions on a single board, the fact is that sooner or later there won't be any other choice really...


it's taking longer than ever to introduce new fab processes that allow chip makers to keep on adding new features to their chips while keeping their size reasonable...

Wombat
22nd April 2002, 16:27
I completely disagree. You just have a very short memory. People have been predicting the imminent failure of Moore's Law for over a decade. Actually, it hasn't failed yet.

We just:
-keep finding smaller lambdas
-improve the fab process, redefining "reasonable size"
-repairable circuits
-find better dielectrics
-start working with more accurate models than previously used

Et cetera, et cetera. I can see what's coming down the pipe for the next 5 years or so, and there's nothing for iconoclasts such as yourself to be yelling about, but that hasn't stopped your kind yet, even after decades of being wrong.

Here's a quick review of the last decade: http://www.icknowledge.com/history/1990s.html

Oh, McKinley is 464mm^2, 3.3x the P3 size, and since die errors are roughly O(n^2), McKinley yield should be about 1/11 of the P3's, if we operate in your world where the fab will kill us. I can assure you that such a calculation is foolish.


For the most part, video display doesn't parallelize very well. That means that any multi-chip solution would have to have a whole lot of communication between the different cores. Now, would you care to guess how many orders of magnitude slower it is to talk to another chip than it is to communicate with another part of the die?
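For what it's worth, the back-of-envelope math in the post above can be sketched in a few lines. This is purely illustrative, using only the die sizes quoted in the thread (P3 at 140mm^2, McKinley at 464mm^2) and the rough "errors scale as O(n^2)" assumption; real yield models (Poisson, Murphy) behave differently, which is exactly the point being made about such calculations being foolish.

```python
# Back-of-envelope from the post above: if die errors scale roughly
# as O(n^2) with die size, relative yield falls with the square of
# the area ratio. Die sizes are the ones quoted in the thread.
p3_area = 140.0        # mm^2, P3 die size quoted earlier in the thread
mckinley_area = 464.0  # mm^2

ratio = mckinley_area / p3_area
relative_yield = 1.0 / ratio ** 2

print(f"area ratio: {ratio:.1f}x")                  # ~3.3x
print(f"relative yield: about 1/{ratio ** 2:.0f}")  # ~1/11
```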

flee
22nd April 2002, 16:29
You guys have it the wrong way round....

It's not multichip, it's multicore!!!!!!!

Now that wouldn't be too expensive or hard to implement....

Wombat
22nd April 2002, 16:47
I sure hope you're joking.

Multi-core is nice, in some places, but it costs you. It's MORE expensive to design, test, yield, and debug than multi-chip (at least you can get a logic analyzer between two chips). But when you get it working, yeah, it's sweet. Fast, expensive, but sweet.

Ali
22nd April 2002, 17:00
Just to be anal, but there are a few mistakes in that history.

CutnPaste:
1997 - Intel Pentium IITM

The Pentium II introduced single in-line cartridge housing the processor chip and standard cache chips running at ½ the processor speed. The Pentium II was manufactured in a silicon gate CMOS process with 0.35µm linewidths, required 16 mask layers and had 1 polysilicon layer and 4 metal layers, the Pentium II had 7.5 million transistors, a 233 to 300MHz clock speed and a 209mm2 die size.




The P2 had a clock speed of 233 to 450MHz, not just to 300.

CutnPaste:
1999 - Intel Pentium IIITM

The Pentium III returned to a more standard PGA package and integrated the cache on chip. The Pentium III was manufactured in a silicon gate CMOS process with 0.18µm linewidths, required 21 mask layers and had 1 polysilicon layer and 6 metal layers, the Pentium III had 28 million transistors, a 500 to 733MHz clock speed and a 140mm2 die size.




this gets harder, as Intel changed the P3 at 600MHz: the original P3 went from 450MHz to 600MHz, then the newer one went from 450MHz (again) up to 1.13GHz (although I think that got recalled), and now there is an even newer P3 that goes from (I think) 933MHz to over 1.2GHz

Ali (the anal retentive) ;)

flee
22nd April 2002, 17:01
Opppps I forgot to mention

Not only is it multicore, it is multi-GPU :D

Although not to get you guys way tooo excited..... I could be imagining this:eek:

superfly
22nd April 2002, 17:13
Originally posted by Wombat
I completely disagree. You just have a very short memory. People have been predicting the imminent failure of Moore's Law for over a decade. Actually, it hasn't failed yet.

We just:
-keep finding smaller lambdas
-improve the fab process, redefining "reasonable size"
-repairable circuits
-find better dielectrics
-start working with more accurate models than previously used

Et cetera, et cetera. I can see what's coming down the pipe for the next 5 years or so, and there's nothing for iconoclasts such as yourself to be yelling about, but that hasn't stopped your kind yet, even after decades of being wrong.

Here's a quick review of the last decade: http://www.icknowledge.com/history/1990s.html

Oh, McKinley is 464mm^2, 3.3x the P3 size, and since die errors are roughly O(n^2), McKinley yield should be about 1/11 of the P3's, if we operate in your world where the fab will kill us. I can assure you that such a calculation is foolish.


For the most part, video display doesn't parallelize very well. That means that any multi-chip solution would have to have a whole lot of communication between the different cores. Now, would you care to guess how many orders of magnitude slower it is to talk to another chip than it is to communicate with another part of the die?


While the progress accomplished so far is nothing short of impressive, there's no denying that sooner or later fab processes are bound to hit a barrier... i'd say no more than 10 years tops.

lithography technology is being pushed to the point where pretty soon they'll be operating very close to x-ray frequencies.

Then there's the problem of how transistors will react to operating at extremely high speeds (think 10GHz+) when they're fast approaching the atomic scale as far as gate length/height goes....

Wombat
22nd April 2002, 18:35
Ali:

Their P2 history isn't that far off. I think they're just sticking to initial offerings. The initial core (Klamath?) stopped at 300, maybe 333. Everything over that was a different fab job (Deschutes?).

JF_Aidan_Pryde
22nd April 2002, 20:48
Wombat,

Graphics parallelizes better than anything else in the PC. Graphics has always benefited immensely from parallel processing, right from the days of the first dual-texturing Voodoo2 to the SLI architecture. Surely multi-chip configurations are hard to design in many respects and also cost a heap, but they give near-linear returns in speed for each extra chip added. CPUs cannot come close in this respect. In graphics, the speed you get is directly proportional to the amount of extra transistors you put in, since we are accelerating a particular task (3D) in hardware. Extra transistors in CPUs are used mostly as overhead, solving problems for the CPU rather than dealing with any problem directly.

The reason why scalable graphics has been rather unsuccessful recently is mainly architectural. Current implementations of SLI and AFR are too inefficient (texture duplication) and lack geometry scaling. A deferred renderer is much easier to implement in a multi-chip configuration. When eDRAM matures, it should also make designing scalable IMRs much easier.

Wombat
22nd April 2002, 21:06
That's the thing. The SLI setup worked because a single Voodoo2 was fillrate limited. That problem, although it exists today, is not nearly what it was. The ATI card, with AFR, suffered its own miserable fate.

The returns from multi-core graphics setups are diminishing quickly, as the communication costs of a multi-chip solution continually rise to counter the benefit of the second processor. eDRAM may very well make this problem more difficult, as the graphics community would have to tackle issues such as cache coherency. Not to mention the lost real estate (and power) needed for all of the extra pins for IPC.

In CPU land, the return from multiple processors <I>can</I> be linear, it all depends on the job you're doing. If it's not something that can be done in parallel very well, then the return isn't very big. I can't think of any reasons that video processors don't fit the same profile.

I don't understand what you're trying to say about transistors in CPUs.

Lemmin
23rd April 2002, 02:23
I think that Moore's law, in its more generalised form, is likely to stay in force pretty much indefinitely - at least as long as people need more processing power. AFAIK the technologies for the next 10 years of silicon manufacturing are already mapped and planned out, and that's without touching things like focused X-ray (lobster eye lens) and electron beam etching. And even when silicon stops being useful, there are plenty of other technologies that could carry on the spirit, if not the letter, of Moore's law.

BTW PowerVR have a highly successful multi-chip solution in use right now - their Naomi arcade box is a 5-chip solution (1+4). In fact their arcade chipsets have always been designed to be scalable.

LEM

knirfie
23rd April 2002, 03:13
Wasn't the 'original' G800 supposed to get multi-chip support? I thought i had read something about a G800 "Max" coming out with two chips instead of one. of course we never got to see the G800 in all its glory, let alone the dual G800, but it's not such a bad idea: two Parhelia GPUs on one card :classic: or FOUR :laugh:

Novdid
23rd April 2002, 03:21
Well I believe that people heard the "fusion", and thought "that must be a multichip solution!!!".

Evildead666
23rd April 2002, 03:35
Everyone saw what happened to 3DFX when they decided to go multi chip.....

Too much power drain, and the cards were huge to say the least.

Single core, single chip, but with a new set of tricks up their sleeve seems to be the best bet yet.

Core architecture is the key.

As i understand it, just for an example, the P4 core performs worse than a PIII (latest model) at the same clock speed.
The PIII core was brilliant; the P4 core is there to get higher clock speeds (and integrate new functions etc..). there is prob more to it than that, but that is what i have come to hear on the many forums and web pages i have read...

64bit rendering is a tad overkill.
40bit internal rendering sounded good, and then outputting @32bit, or even 40bit.
But there will be more to it than that, and those in the know can't tell us what.
Even if we have speculated correctly, they (the NDA's) cannot even hint at a good guess.....

Long live the bunny.:bunny:

jwb
23rd April 2002, 07:13
Sorry for the lack of answer. Yesterday evening from my place the forum was not accessible anymore.


Originally posted by Wombat
No, listen, it WON'T pay in the end. It's just the wrong way to do this kind of thing. It's not more expensive "to start," it's more expensive all-around, and performance would suck.


Why on earth would somebody then buy a 3dfx Alchemy (8 or 16 times VSA100 if my memory serves correctly) if performance sucked?

Why on earth was Voodoo2 (and SLI) so successful... because its performance sucked at that time?...

NOOO...

It may NOT pay for YOU in the end. But it will for anybody else trust me...


Originally posted by Wombat

jwb, both Greebe and I are under NDAs.


And... What should it tell me....

!! Stop replying if you are under NDA then !!

And BTW, if somebody replies so harshly to my personal opinion and has to tell me, as if I'm interested, that he/she is under NDA, then something is deeply hidden here.

I KNOW... (that i'm not under NDA :) )

jwb
23rd April 2002, 07:25
And who said it has to be distinct multi-core(s) on the same PCB?

Remember the PPro:
Same cartridge, two dies: one for the core, one for the mem (cache).

Why not 3 dies: 1 core, 2 memory?
If one die dies ;) you get a 128bit bus instead of a 256bit bus;
you don't have to throw the whole thing away. Still fine today.

Much shorter traces, tighter integration, more bandwidth in return

Who said this can't be a multi-core solution...

Be patient is the key...

Evildead666
23rd April 2002, 07:35
Hang on...

You are now going on about a single core with multiple memory buses (dual bus).

The first PII and PIII cartridges had one core, and some really fast cache memory on them.
That cache memory eventually ended up being included in the core itself.

Multi-core or Multi-bus is asking for more traces, and more and more complex boards, hence a big hike in price.

Evildead666
23rd April 2002, 07:36
...and why should one die die?

TdB
23rd April 2002, 11:09
isn't the g450mms a multi-chip vga btw?

jwb
23rd April 2002, 11:34
Originally posted by Evildead666
...and why should one die die?

One die could die in the manufacturing process. If all dies are needed to function correctly, you can throw the entire chip into the bin. If, for example, you employ an -independent- dual-bus with two different memory chips, then you can still sell it as a half-baked solution.

IMHO:
A bit like GeForce4MX which is a GeForce4 Ti with broken programmable T&L and possibly one or two broken Pixel pipelines. Just cut a few traces to open the bypass.... and you're done.

So instead of selling a Ti for 300$ you can sell a broken (MX) chip for 150$ :eek:

Except this is done on-chip and not on separate dies.

You would think that the same can be done with multiple cores...

Be patient.... even the light itself takes more than 8 minutes from the sun to shine on us.

Wombat
23rd April 2002, 12:11
TDB,
The MMS video cards are two/four independent video systems, each running their own display. They aren't working together

jwb
23rd April 2002, 13:13
Originally posted by Wombat
TDB,
The MMS video cards are two/four independent video systems, each running their own display. They aren't working together

The MS and Intels are two/four independent companies, each running their own business. They aren't working together...

Sorry, just couldn't keep my mouth shut...

pvsavola
23rd April 2002, 13:28
Originally posted by Wombat
TDB,
The MMS video cards are two/four independent video systems, each running their own display. They aren't working together

So how hard would it be to hijack a horizontal retrace and start outputting some other memory region (even output the other chip's z-buffer)? If this could be tuned, then... A card could (theoretically) share texture memory, and each chip would have its own z-buffer, but only for a limited screen space (1280x512 instead of 1024x1024). And then you could have hw multiplayer (a la PS1 games) a bit easier, and also horsepower enhancement by just dumping more chips on the PCB ;).

Well, I guess it would already be done if it were viable. I don't exactly remember what 3dfx SLI did - wasn't it that they rendered every other scanline?

thop
23rd April 2002, 13:56
Originally posted by pvsavola

Well, I guess it would already be done if it were viable. I don't exactly remember what 3dfx SLI did - wasn't it that they rendered every other scanline?

yes.
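For anyone curious, the scanline split thop confirms can be sketched in a few lines. This is purely illustrative (the function name is made up for the example), not actual 3dfx hardware logic:

```python
# Illustrative sketch of 3dfx-style SLI: with two chips, chip 0 renders
# the even scanlines and chip 1 the odd ones; the output stage merges
# the interleaved lines back into a single frame.

def assign_scanlines(height, num_chips=2):
    """Map each scanline index to the chip that renders it."""
    return [y % num_chips for y in range(height)]

print(assign_scanlines(8))  # [0, 1, 0, 1, 0, 1, 0, 1]
```

Each chip only touches half the lines, so fillrate effectively doubles - but both chips still need their own full copy of the textures, which is the duplication problem raised later in the thread.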

Nappe1
23rd April 2002, 14:34
Originally posted by jwb


IMHO:
A bit like GeForce4MX which is a GeForce4 Ti with broken programmable T&L and possibly one or two broken Pixel pipelines. Just cut a few traces to open the bypass.... and you're done.


umm... NV17 is based on the NV15/NV10 core and NV25 is based on the NV20 core, so the GF4MX isn't a broken GF4Ti... they are different chips. even John Carmack said that naming them like they are named sucks.

So nVidia just picked their old GF2GTS, added some features (DVD decoding, nView, GF3-style memory controller), re-spun it on a smaller process and slapped a new name on it. And voila! we got a GF4MX that isn't GF4Ti compatible, but is more like a new mainstream DX7 chip.

they made it to compete against the Radeon 7500, over half a year after the Radeon 7500 was introduced. And I think this is the only reason why ATI hasn't released the RV250 chip yet. There hasn't been any reason for it.

Maybe the GF4Ti4200 release will make ATI launch the RV250. Or maybe they are just renaming it to RV300 and it will be released as a mainstream DX8.1 chip at the same time ATI releases the R300 as a high-end DX9 chip.

Greebe
23rd April 2002, 16:07
jwb, I'm sure glad you're not an EE and are only speculating

Wombat
23rd April 2002, 16:36
Sorry, just couldn't keep my mouth shut...

We know, but we'd really, <I>really</I> appreciate it if you would - at least until you can start making sense and not spouting erroneous ****.

jwb
23rd April 2002, 22:45
Originally posted by Greebe
jwb, I sure glad your not an EE and only speculating

What's an EE?

jwb
23rd April 2002, 22:47
Originally posted by Wombat
We know, but we'd really, <I>really</I> appreciate it if you would - at least until you can start making sense and not spouting erroneous ****.

Remember what this forum is named after?

jwb
23rd April 2002, 22:54
Originally posted by Wombat
We know, but we'd really, <I>really</I> appreciate it if you would - at least until you can start making sense and not spouting erroneous ****.

See, this is exactly what I mean. I'm in this forum because of fantasy, hopes, etc...

But you sound like I'm a complete idiot and you're the man who keeps everyone well informed except me.

If you know more, then your crystal ball is washed out and dirty, and you actually have no business being in this forum. Got me?

BTW This is not meant as a personal offense.

jwb
23rd April 2002, 22:57
Originally posted by Nappe1


umm... NV17 is based on the NV15/NV10 core and NV25 is based on the NV20 core, so the GF4MX isn't a broken GF4Ti... they are different chips. even John Carmack said that naming them like they are named sucks.

So nVidia just picked their old GF2GTS, added some features (DVD decoding, nView, GF3-style memory controller), re-spun it on a smaller process and slapped a new name on it. And voila! we got a GF4MX that isn't GF4Ti compatible, but is more like a new mainstream DX7 chip.


Yes, you are right... in the sense that this is the "official" version.

Wombat
24th April 2002, 00:54
Yes, this forum is here so you guys can speculate about future products. But when you *cough*authoritatively explain how the GF4MX is chosen, when everything you say contradicts KNOWN FACTS, well.


You hear that whooshing sound? That's your credibility plummeting.

WT<B>F</B> does "official version" mean? You can crack open the packaging on a GF4MX and see that it most certainly is not a GF4Ti.

I can't tell if I want you to pull your head out of your ass, or keep stuffing it up there until you asphyxiate yourself.

Evildead666
24th April 2002, 01:57
Nice one Wombat.

Jwb, what are you on?

Pluto???

EE is for Electrical Engineer.
I have done 5 years of computer studies @ University and i currently work with computers (programming and such).

I also did Microelectronics Systems engineering...

Make up whatever you want, but don't go saying that you know more than anyone else... ok?

Sheesh, somebody just woke up after being asleep for a hundred years.:alien:

Go on Bunnies, attack!!!:bunny: :bunny: :bunny: :bunny: :bunny:

Damien
24th April 2002, 02:46
Girls, try to keep your skirts on, it's just his opinion………

knirfie
24th April 2002, 03:28
:classic: :classic: :classic: GIRLS :classic: :classic: :classic:

Evildead666
24th April 2002, 04:02
Hmmmmm........:rolleyes:
:bunny:
lookit the bunny...yeah.....lookit.:clown:

Pace
24th April 2002, 04:43
Originally posted by jwb
Why on earth would somebody then buy a 3dfx Alchemy (8 or 16 times VSA100 if my memory serves correctly) if performance sucked?

Why on earth was Voodoo2 (and SLI) so successful... because its performance sucked at that time?...

NOOO...

SLI was to improve fillrate (at the time), as Wombat said. Now the limits are with memory bandwidth - SLI won't help with this, it just doubles the requirements.


!! Stop replying if you are under NDA then !!

I KNOW... (that i'm not under NDA :) )

NDA means he can't disclose the secrets he knows. Your errors are not a secret :) Just accept that Wombat knows more about CPU design, or show some knowledge to the contrary.

P.

PS: Not meant to be a direct attack, but it's just on my mind at the moment... how many people on this (damn ;)) Internet speak absolute rubbish as if it's the truth? If I'm not sure about something, then I ask the question, "Is SLI better?" - I don't say "SLI is better, always has been, always will be." Och aye the noo...

cbman
24th April 2002, 06:16
Hey Wombat,

This is unrelated to the thread but I was reading an article in the paper about the HP, Compaq courtcase that is going on at the moment...

Is that s*** affecting you guys in the CPU division at all?

Oh and if you think Core Architecture is exciting, you should have met the guy I was talking to this morning over coffee... he designs Stills for breweries for a living.. how cool is that. :D

Joe DeFuria
24th April 2002, 06:46
SLI was to improve fillrate (at the time), as Wombat said. Now the limits are with memory bandwidth - SLI won't help with this, it just doubles the requirements.

Actually, the beauty of SLI was that it increased bandwidth in lockstep with fill-rate.

With every chip you added to increase fill-rate, you also by definition added the proportional bandwidth required to utilize that fill-rate.

Pace
24th April 2002, 07:00
Yeah...but it doubles the need for bandwidth with the increase, does it not?

Please tell me my statement wasn't a perfect example of a POS ;)

Say 1Gb/s texture transfer...two SLI chips deliver 2Gb/s, but need twice the textures anyway...filling 2Gb/s?

P.

Novdid
24th April 2002, 07:10
Yes that is correct Pace.

jehwuk
24th April 2002, 07:38
oops double post

Greebe
24th April 2002, 07:39
With several of the 3dfx 6000 series cards having now been benchmarked, it is noted that in order to saturate its fillrate, it would be necessary to have a 4GHz CPU... that's what is bad about their multichip designs.

Substantial proof that what we say is true.

jehwuk
24th April 2002, 07:39
Originally posted by Pace
Yeah...but it doubles the need for bandwidth with the increase, does it not?

Please tell me my statement wasn't a perfect example of a POS ;)

Say 1Gb/s texture transfer...two SLI chips deliver 2Gb/s, but need twice the textures anyway...filling 2Gb/s?

P.

The need for the extra bandwidth is also handled with a separate memory bus connected to the extra chip.
You're right that the need for bandwidth would be doubled with a two-chip SLI config, but the two-chip SLI config itself delivers double the fillrate and double the bandwidth.
I think this is what Joe DeFuria is trying to say: the "needed" extra bandwidth would be no problem.
But having no problem did create a problem :), because an equal amount of RAM had to be dedicated to each chip (different amounts would be inefficient and would make no use of the extra RAM dedicated to the chip with more RAM), and thus the actual video RAM being utilized in each scene is divided by the number of chips.
The V5 5500 consisted of 2 VSA-100s and had 64MB of memory (did I get that right?), but could be considered a single-chip 32MB card with double the fillrate and double the bandwidth.

Personally, I am very much against such technology, since there is still plenty of room for innovation! (Parhelia, I say?!? I hope?!?) :)

edit: I just saw Greebe's post and thought I should add this: if the chip (that could do SLI) was already suffering from a bandwidth bottleneck when used alone, then that same shortfall would bottleneck each chip in the SLI configuration, and thus be very stupid (a wrong match of fillrate and bandwidth would be stupid in the first place, just like the budget video cards on the market that limit memory bandwidth to 64 bits)
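The V5 5500 point can be sketched the same way. A rough, illustrative model (the 2-chip, 32MB-per-chip figures follow the example above; `sli_memory` is a hypothetical helper, not anything real): textures must be duplicated into each chip's local RAM, so physical memory scales with chip count while unique, usable memory does not.

```python
# Rough sketch of the SLI memory-duplication problem: each chip needs its
# own full copy of the textures, so physical RAM scales with chip count
# while usable (unique) RAM does not. Figures are illustrative.

def sli_memory(chips, ram_per_chip_mb):
    physical = chips * ram_per_chip_mb  # what's soldered on the board
    usable = ram_per_chip_mb            # textures duplicated per chip
    return physical, usable

physical, usable = sli_memory(chips=2, ram_per_chip_mb=32)
print(f"V5 5500-style board: {physical} MB on board, ~{usable} MB usable")
```

This is why a 64MB dual-chip board behaves, memory-wise, like a 32MB single-chip card.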

Pace
24th April 2002, 07:54
But Greeby, we could put a dual-core HyperThreaded Quad-SMP 2GHz Xeon in a beowulf cluster and fill it that way man! Get a grip dumb BBz. gEeZ! ;)

Pace
24th April 2002, 08:00
Originally posted by jehwuk
The need for the extra bandwidth is also handled with a separate memory bus connected to the extra chip. You're right that the need for bandwidth would be doubled with a two-chip SLI config, but the two-chip SLI config itself delivers double fillrate, double bandwidth.

As I said? :rolleyes:

And as we've covered, SLI is useless unless we are fillrate limited. We're now bandwidth bandits baby!

Wombat
24th April 2002, 10:33
cb,
Well, let's just say that opinions of our CEO are, uhhh, varied and colorful.

My particular lab will probably actually grow if the merger goes through, since Intel bought the Alpha team, we have no Compaq counterpart.


Mmmm, beer.

jwb
24th April 2002, 11:36
Originally posted by Wombat
Yes, this forum is here so you guys can speculate about future products. But when you *cough*authoritatively explain how the GF4MX is chosen, when everything you say is a contradiction to KNOWN FACTS, well.


You hear that whooshing sound? That's your credibility plummeting.

WTF does "official version" mean? You can crack open the packaging on a GF4MX and see that it most certainly is not a GF4Ti.

I can't tell if I want you to pull your head out of your ass, or keep stuffing it up there until you asphyxiate yourself.

Good one... I had to go read a lexicon to know what "asphyxiate myself" meant. English is not my mother tongue.

Did I ever have credibility? Do I need it?

Did you crack open both, or is the "opinion" of the masses known facts?

I didn't explain it "authoritatively". If it felt that way to you....

Believe what you will and be patient.

Wombat
24th April 2002, 13:03
Did I ever have credibility? Do I need it?

Yeah, everybody starts with being given at least a base amount of respect. You burned through that. Yes, you need it, especially here.

No, I haven't ripped them open, but the die photos are available. Also, they'd probably be going after Anand and other websites for publishing fraudulent information.

You're a crack baby. Go away.

Greebe
24th April 2002, 13:04
Did I ever have credibility? Do I need it?

Did you crack open both, or is the "opinion" of the masses known facts?

I didn't explain it "authoritatively". If it felt that way to you....

Believe what you will and be patient.

When debating or informing others of how things work in the manner which you have done, the first rule is to actually know what you're talking about in the first place.

Cobos
24th April 2002, 14:00
Sure it is, but PLEASE people ease off jwb... I don't care if he says wrong things, tell him so, convince me with better arguments that he is wrong, no need to bite a newbie's head off, especially when he is not even a native speaker !

Cobos

Novdid
24th April 2002, 14:14
I don't care if he says wrong things, tell him so, convince me with better arguments that he is wrong, no need to bite a newbie's head off, especially when he is not even a native speaker !

The fact that he touts that the GF4 MX is a true Ti with disabled die blocks is just ridiculous. AND that the "official" version is that they are two different cores, but the truth is what he claims, while everything goes against him. Wombat tells him that's not the case, just like other people have done, and he still says "believe what you want to believe, but that's the way it is."

This is the first time I have ever criticized anyone in the forum for what they post (OK, maybe I haven't agreed with Gurm at times either, but who has???;)) and in my opinion he should go through the survival guide before posting again.

thop
24th April 2002, 14:29
http://www.theunholytrinity.org/cracks_smileys/contrib/blackeye/Eyecrazy.gifhttp://www.plauder-smilies.de/knuddel.gifhttp://www.theunholytrinity.org/cracks_smileys/contrib/blackeye/Eyecrazy.gif

jwb
24th April 2002, 23:56
Originally posted by Wombat
Yeah, everybody starts with being given at least a base amount of respect. You burned through that. Yes, you need it, especially here.

I always burn through it. Afterwards you will "see" rearing their u... faces...


No, I haven't ripped them open, but the die photos are available. Also, they'd probably be going after Anand and other websites for publishing fraudulent information.

If Anand posted die photos, from whom did he get them? Yes, from the same source you think would send lawyers after him for publishing fraudulent information. Folks need "panem et circenses".

Greebe
25th April 2002, 00:09
:rolleyes:

Wombat
25th April 2002, 09:05
Hmmm, time to move along, I suppose. jwb must just be some troll. There's no way anyone could actually be that stupid.

Nappe1
25th April 2002, 14:07
:D
I have been reading this thread the whole way since I posted the "Official" version. :D

Anyways, based on jwb's theory, Ferraris and donkeys are based on the same core; donkeys just have a few parts broken.

Indiana
25th April 2002, 15:31
Originally posted by Nappe1

anyways, based on jwb's theory, Ferraris and donkeys are based on same core. donkeys just have few parts broken.

Err, you do mean "Ferraris just have a few parts broken"..... :D:D:eek:

SCNR

Wombat
25th April 2002, 15:55
How would you know that? Have you ever opened up a donkey to see that it's not a Ferrari? After all, the people that put out anatomy books are probably lying.

Damien
25th April 2002, 22:58
Remember when you guys didn't know shit, or have you always been super intellectuals?
Mobbing is not what this forum is about. You guys are starting to sound like real ****oles...

Wombat
25th April 2002, 23:18
Hey, I still don't know shit. But I admit it, and I don't insist on conspiracy theories when everyone else has solid proof countering my position. I don't run the whole thing into the ground.

Evildead666
26th April 2002, 02:06
I don't think everyone is collectively mobbing...
it just seems to be a collective surprise...

Does anyone have any idea what process size the die is going to be made on for Parhelia?
I know the most likely is 0.13µ, but could it even be possible they bring out some versions next year on 0.09µ?

Higher clock speeds etc...

Novdid
26th April 2002, 06:00
I at least tried to be gentle...:tired:

cbman
26th April 2002, 07:16
My wife always says I'm an a$$hole... so I am just trying to fit the part. :D

Jammrock
26th April 2002, 08:38
<spoken>
Folks, I'd like to sing a song about the American Dream. About me. About you. The way our American hearts beat down in the bottom of our chests. About the special feeling we get in the cockles of our hearts, maybe below the cockles, maybe in the sub-cockle area. Maybe in the liver. Maybe in the kidneys. Maybe even in the colon, we don't know.

<singing>
I'm just a regular Joe with a regular job
I'm your average white suburbanite slob
I like football and porno and books about war
I've got an average house with a nice hardwood floor
My wife and my job, my kids and my car
My feet on my table, and a cuban cigar

But sometimes that just ain't enough to keep a man like me interested
(Oh no) No Way (Uh-uh)
No, I've gotta go out and have fun
At someone else's expense
(Oh yeah) Yeah yeah yeah yeah yeah yeah yeah

I drive really slow in the ultrafast lane
While people behind me are going insane

I'm an ****ole (He's an ****ole, what an ****ole)
I'm an ****ole (He's an ****ole, such an ****ole)

I use public toilets and piss on the seat
I walk around in the summertime saying, "How about this heat?"

I'm an ****ole (He's an ****ole, what an ****ole)
I'm an ****ole (He's the world's biggest ****ole)

Sometimes I park in handicapped spaces
While handicapped people make handicapped faces

I'm an ****ole (He's an ****ole, what an ****ole)
I'm an ****ole (He's a real ****ing ****ole)

Maybe I shouldn't be singing this song
Ranting and raving and carrying on
Maybe they're right when they tell me I'm wrong

Naaaah!

I'm an ****ole (He's an ****ole, what an ****ole)
I'm an ****ole (He's the world's biggest ****ole)

<Spoken>
You know what I'm gonna do? I'm gonna get myself a 1967 Cadillac El Dorado convertible, hot pink with whaleskin hub caps and all leather cow interior and big brown baby seal eyes for headlights, yeah! And I'm gonna drive around in that baby at 115mph getting one mile per gallon, sucking down quarter pounder cheese burgers from McDonald's in the old-fashioned non-biodegradable styrofoam containers and when I'm done sucking down those grease ball burgers, I'm gonna wipe my mouth with the American flag and then I'm gonna toss the styrofoam container right out the side and there ain't a God damned thing anybody can do about it. You know why? Because we got the bombs, that's why.

<Spoken>
Two words. Nuclear ****ing weapons, okay?! Russia, Germany, Romania - they can have all the Democracy they want. They can have a big democracy cake-walk right through the middle of Tiananmen square and it won't make a lick of difference because we've got the bombs, okay?! John Wayne's not dead - he's frozen. And as soon as we find the cure for cancer we're gonna thaw out the Duke and he's gonna be pretty pissed off. You know why? Have you ever taken a cold shower? Well multiply that by 15 million times, that's how pissed off the Duke's gonna be. I'm gonna get the Duke and John Cassavetes...
(Hey)
and Lee Marvin
(Hey)
and Sam Pekinpah
(Hey)
And a case of Whiskey and drive down to Texas...
(Hey, you know you really are an ****ole)
Why don't you just shut-up and sing the song pal!

<singing>
I'm an ****ole (He's an ****ole, what an ****ole)
I'm an ****ole (He's the world's biggest ****ole)

A-S-S-H-O-L-E Everybody! A-S-S-H-O-L-E

<Barking>
Arf Arf Arf Arf Arf Arf Arf
Fung achng tum a fung tum a fling chum
Oooh Oooh

<Spoken>
I'm an ****ole and proud of it!


Jammrock :p

Lemmin
26th April 2002, 08:47
Wow. That makes me so proud to be English... :cool:

LEM

K6-III
26th April 2002, 11:39
hehe

lewdvig
30th April 2002, 15:35
you are an idiot.
-everyone knows the GF4 MX is an NV17
-SLI or something like it makes no sense anymore unless you are talking about a multi-core chip (2 years away).

today's fastest GPUs are handicapped by memory bandwidth.

faster memory is exotic and expensive.

to take advantage of a dual architecture, each GPU would need its own dedicated memory (64-128MB each)

you just doubled the cost for no reason.

a wider memory bus enables the use of slightly slower memory. reducing costs.

an advanced GPU comparable in spec to the NV25 but with a few new tricks and a better memory bus would be a winner.

do not expect Matrox to leap over anyone. just as they did with the 400 when it came out, they will catch up or briefly take the lead. they lack the R&D to really compete in the performance market.
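The wider-bus argument above rests on simple arithmetic: peak memory bandwidth is roughly bus width times effective memory clock, so doubling the bus width lets you halve the memory clock for the same bandwidth and use cheaper RAM. A minimal sketch with hypothetical clocks (the function and the numbers are illustrative, not any real card's specs):

```python
# Hedged sketch: peak bandwidth ~ bus width (bytes) * effective clock.
# A 256-bit bus can match a 128-bit bus using memory clocked half as fast.
# Clocks below are hypothetical examples, not real card specs.

def peak_bandwidth_gbs(bus_bits, effective_mhz):
    # bytes per transfer * transfers per second -> GB/s (1 GB = 1e9 bytes)
    return (bus_bits / 8) * (effective_mhz * 1e6) / 1e9

narrow_fast = peak_bandwidth_gbs(128, 600)  # e.g. 300 MHz DDR
wide_slow = peak_bandwidth_gbs(256, 300)    # same bandwidth, slower RAM
print(narrow_fast, wide_slow)  # both 9.6 GB/s
```

Slower memory chips are cheaper, which is the cost argument for a wide bus over exotic high-clock RAM or a second GPU with its own duplicated memory pool.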

thop
30th April 2002, 15:58
Originally posted by lewdvig

do not expect Matrox to leap over anyone. just as they did with the 400 when it came out, they will catch up or briefly take the lead. they lack the R&D to really compete in the performance market.

while i would be more than happy with a card with overall Matrox quality that is fast enough to play current games and the games of the next year at 1024x768x32 at 60fps+ (actually i only care about UT2, U2 and D3), i still hope you are not right on that one :(