PDA

View Full Version : Rumors !!!!!!!!!!!!!!!!!!!!!!!!



Lukappaseidue
25th November 2001, 07:18
Surfing the web, I found this news item at:

http://www.rivastation.com/index_e.htm


11/24/01 Rumors

There are new rumors regarding upcoming graphic chips floating around. nV News posted some speculations on the next big NVIDIA chip: The NV25:

Rumoured 6 Pixel pipelines
Core freq: 300 MHz.
Memory: 660 MHz (eff) ~ 10.5 GB/sec BW, assuming they stay with 128-bit data paths.
Supports TwinView
Supports (finally) Hardware iDCT
More powerful T&L unit, to include a second Vertex Shader
Can't find the link, but there's a rumour stating that we can expect a Voodoo5 5500-esque anti-aliasing feature. The presumption is that the NV25 will bring a Rotated-Grid AA implementation to the table.
0.13-micron manufacturing process

N O T E

But NVIDIA is not the only one. There are also strong rumors about Matrox: a project called Parhelia, with 4 vertex and 4 pixel shader pipes and 19.2 GB/s memory bandwidth using 256-bit DDR (if it's true...).
:eek:
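
For what it's worth, the rumored bandwidth figures hang together. A quick back-of-the-envelope check in Python (just a sketch; the 600 MHz effective / 300 MHz DDR clock for Parhelia is only what the rumored 19.2 GB/s over a 256-bit bus would imply, not something stated in the article):

# Bandwidth (GB/s) = bus width in bytes * effective memory clock in Hz / 1e9
def bandwidth_gb_s(bus_bits, eff_clock_mhz):
    return (bus_bits / 8) * (eff_clock_mhz * 1e6) / 1e9

# NV25 rumour: 128-bit bus at 660 MHz effective
print(bandwidth_gb_s(128, 660))   # ~10.56 -> matches the "~10.5 GB/sec" figure

# Parhelia rumour: working backwards, 19.2 GB/s on a 256-bit DDR bus
# would imply a 600 MHz effective (i.e. 300 MHz DDR) memory clock
print(19.2e9 / (256 / 8) / 1e6)   # 600.0 MHz effective

So the NV25 number checks out exactly, and the Parhelia figure would need a 300 MHz DDR memory clock to be real.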

prr
25th November 2001, 07:52
And this is connected to Matrox hardware ... how?

Venturer
25th November 2001, 10:19
Originally posted by Lukappaseidue
But NVIDIA is not the only one. There are also strong rumors about Matrox: a project called Parhelia, with 4 vertex and 4 pixel shader pipes and 19.2 GB/s memory bandwidth using 256-bit DDR (if it's true...).
:eek:

Ok, that's cool, but remember it's only a rumor! And rumors about Matrox are never verified! :p :p :p

Kruzin
25th November 2001, 10:44
Relocated to The Crystal Ball forum, home of rumors...

mdhome
25th November 2001, 11:49
Although these are rumours, the bandwidth specs for Parhelia are pretty impressive, to say the least. Now let the fantasizing begin. :)

Kastuvas
25th November 2001, 12:29
Rumors don't come from nowhere, so there's got to be some truth to them. ;)

superfly
25th November 2001, 15:53
Wait a minute there: 4 vertex units AND 4 pixel shader pipes? In a single chip? That really doesn't make much sense unless we're talking about a multiple-chip architecture with each chip having at most 2 of each, which would make more sense and most likely be easier to implement as well.


But I will admit that those bandwidth specs are pretty impressive. I don't really agree, though, that having on-die embedded DRAM running at 150 MHz with a 512-bit internal bus would be all that useful, since there's only so much you could fit in there. The second obvious drawback is that it would take up die space that could be better used, IMHO, to make the chip as feature-complete as possible.

For instance, if you used embedded DRAM, most likely the vertex engine would still be present but would have its transistor count reduced so that the chip wouldn't be too big and expensive to build... things like that.

RedRed
25th November 2001, 17:20
Superfly

A question.
Could this eDRAM malarkey be used as a 'programmable' GPU?

I don't know much about the process...

However, and this is pure speculation (no basis in fact, I am just asking the question, and it might be stupid!!! :) ): if you had an eDRAM-enabled GPU, would it not be possible to configure it on a Von Neumann model, with the eDRAM being used both as a store for data AND as a programmable 3D engine, dynamically drawing on its own memory resource as it needs to from RAM? Could it store the 'kernel' code in a non-volatile section of the GPU, calling in the additional engines as required from ROMs on the graphics card, or from resource files in the drivers themselves?

That way the application using the 3D components could load up the correct bits of microcode, leaving the rest of the eDRAM for 3D processing... the performance would vary, with more resources being available for apps with less complex resource requirements (or using only a couple of 3D features), while more complex engines would use more, relying more on the board's main RAM...

Obviously, there would be complexity in the eDRAM, but the advantage of having drivers that could almost totally rewrite the card would be amazing...

Tell me if I am talking Pooh!!!!!


RedRed

superfly
25th November 2001, 18:01
That would be nice to see... but I doubt that graphics chip makers would even consider it, since it would be way too complex to implement and would likely be slower than a more conventional, programmable (up to a point) ASIC.


eDRAM would most likely be used both for final frame buffer storage (just before the image is sent to the monitor) and, if possible, depending on how much memory it has, as a means of supplying the graphics core with vertex and texture data as quickly as possible, much like the embedded memory built into the PS2's graphics chip (4 MB worth).


It works very nicely on the console, since most TVs support a max of 640*480, which fits quite nicely in those 4 MB.

But with PC monitors that support 1600*1200 32-bit being quite affordable these days, and the current top-end video cards able to play quite a few games at 1600*1200 32-bit, that requires nearly 30 MB just for frame buffer storage. And that's not even counting the fact that the current trend is for game makers to use larger textures as well, up from 256*256 to 512*512 and even greater than that.


Something that I very much doubt can be made to fit in a graphics chip with embedded memory using the current 0.15 or even the upcoming 0.13 micron fab processes.
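
To put rough numbers on that (a sketch only; the 32-bit Z/stencil buffer and the double- vs. triple-buffering counts are my assumptions, not figures from the specs):

# Frame buffer memory = width * height * bytes per pixel * number of surfaces
def framebuffer_mb(width, height, colour_buffers, z_buffers=1, bytes_per_pixel=4):
    surfaces = colour_buffers + z_buffers
    return width * height * bytes_per_pixel * surfaces / (1024 ** 2)

# Console case: 640*480, front + back + Z
print(framebuffer_mb(640, 480, colour_buffers=2))    # ~3.5 MB -> fits in the PS2's 4 MB eDRAM

# PC case: 1600*1200 32-bit, triple buffered + Z
print(framebuffer_mb(1600, 1200, colour_buffers=3))  # ~29.3 MB -> the "nearly 30 megs" above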

KeiFront
26th November 2001, 09:26
http://www.xbitlabs.com/news/story.html?id=1006599702

mdhome
26th November 2001, 09:51
It will be Winner IV, supporting up to 4 monitors at the same time. As we know, all the latest graphics cards from ELSA are based on NVIDIA chips. Therefore, we dare suppose that NVIDIA may have one more special solution planned for the market sector, which they haven’t conquered yet.

KeiFront
26th November 2001, 10:24
Yep, serious competition for Matrox.