Been reading some RGBA articles… August 27, 2005Posted by winden in coding, demoscene.
I was reading some articles yesterday which touched on some very interesting topics, like mesh-storage optimization for intros. They took two basic approaches:
1. Store a basic quad-mesh and refine it with bicubic surfaces: what the articles didn’t say was whether the meshes were then stored pre-surfaced in memory, or re-surfaced in realtime with vertex shaders while rendering. Of course this approach makes meshes much easier to pack, since you don’t really need all the extra detail that is readily available through the interpolation.
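The refinement idea can be sketched like this: store only a coarse grid of control points and generate the dense vertices at load time (or per frame) by evaluating a surface. The articles didn’t name the exact scheme, so this sketch assumes a plain bicubic Bezier patch over a hypothetical 4x4 control grid; `tessellate` and its subdivision level are mine, not from the articles.

```python
# Hypothetical sketch: refine a coarse 4x4 control grid into a dense
# vertex grid via a bicubic Bezier patch. Only the 16 control points
# need to be stored/packed; the detail is recreated by interpolation.

def bernstein3(t):
    """Cubic Bernstein basis weights at parameter t in [0, 1]."""
    s = 1.0 - t
    return (s * s * s, 3 * s * s * t, 3 * s * t * t, t * t * t)

def eval_patch(ctrl, u, v):
    """Evaluate a bicubic Bezier patch; ctrl is a 4x4 grid of (x, y, z)."""
    bu, bv = bernstein3(u), bernstein3(v)
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bu[i] * bv[j]
            px, py, pz = ctrl[i][j]
            x += w * px
            y += w * py
            z += w * pz
    return (x, y, z)

def tessellate(ctrl, n):
    """Expand 16 control points into an (n+1) x (n+1) grid of vertices."""
    return [[eval_patch(ctrl, i / n, j / n) for j in range(n + 1)]
            for i in range(n + 1)]
```

Whether you run `tessellate` once at load time (pre-surfaced in memory) or re-evaluate per frame in a vertex shader is exactly the tradeoff the articles left open.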
2. These basic meshes were then delta-encoded, most probably byte-per-coord, so that they could then be packed better. This has been standard procedure for sound data (“The Player” mod-player, for instance, could pack its 8bit samples with 8bit deltas, or even 4bit deltas for lossy compression), but it’s not so common for meshes… or is it?
But my main point in commenting on the articles is to point out how much PC coding is being affected by PS2 coding, even if indirectly, with the same tradeoffs being made for completely different purposes.
Take the delta coding I just wrote about: it’s in fact a hardware feature on the PS2 to take a DMA’d chunk of coordinates, packed as byte-per-coord deltas, and unpack them to 32 bits before they reach the vector units that drive the graphics chip… but there it’s done to save bus bandwidth, because the PS2 can draw many more polys than it can receive over its busses!
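A rough software model of that unpack step: signed bytes come over the bus three per vertex and get widened to full-width values on the far side, so the bus carries 3 bytes where the vector units see 12. This only mimics the sign-extension part; it’s an assumption-laden sketch, not a reproduction of the actual PS2 VIF unpack modes.

```python
# Hypothetical model of hardware unpack: packed signed bytes (3 per
# vertex) are sign-extended to wide values before the vector units see
# them. The bus moves 1 byte per coord instead of 4 -- a 4x saving.
import struct

def unpack_v3_8(packed):
    """Sign-extend groups of three packed bytes into (x, y, z) vertices."""
    verts = []
    for i in range(0, len(packed), 3):
        x, y, z = struct.unpack_from("<3b", packed, i)  # "b" = signed byte
        verts.append((x, y, z))
    return verts
```

Feeding the unpacked deltas through a running sum (as in the decoder above) on the vector unit side would complete the decompression without the bus ever seeing a full-width coordinate.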
One of the first games that looked really good on PS2 was SSX, a snowboarding game which used realtime surfaces to render the tracks with dynamic level-of-detail, also because of that speed mismatch… in fact, surfacing on the fly produced more vertexes and more detail without forcing any slowdown!
So there you have it: what may we expect when the new heavily-multiprocessing PS3 hits? I sure wish we all get cell-like processors to play with! ;)