nVidia shows the other side of Fermi

I’ve discussed Fermi before, when nVidia introduced us to its GPGPU computing capabilities. Back then, a lot of people seemed to think it wouldn’t be very good at graphics, since nVidia didn’t say anything about that side (that’s the state of logic these days… present an architecture with many new computing capabilities, and somehow people conclude this is BAD for graphics rather than good). But at CES, nVidia spoke about the graphics side of Fermi, or GF100, as the chip behind the consumer graphics cards is codenamed.

I’m not going to explain the entire architecture in detail here, because there are plenty of great articles on the internet already, such as this one on Anandtech. I’d just like to focus on what I think is the main new architectural feature.

That feature is the new vertex/triangle processing unit, which nVidia calls the PolyMorph Engine. Up to now, tessellation hasn’t been much of a success. The DX10 geometry shader never really took off: firstly, it was too limited in functionality, and secondly, it seemed to be held back by the rest of the pipeline. So nVidia decided to completely redesign this part of the pipeline and allow tessellation and triangle setup to happen in parallel. Now that there are 16 of these units tessellating and setting up triangles, triangle throughput shouldn’t become a bottleneck as quickly, and high degrees of dynamic tessellation will actually start to make sense.
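
To give a rough idea of what “dynamic” tessellation means in practice, here is a minimal C++ sketch of a distance-based tessellation factor, the kind of per-patch calculation a DX11 hull shader would feed to the tessellator. The function names and constants are my own, purely for illustration; this is not GF100-specific code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical distance-based LOD heuristic: the kind of per-patch
// calculation a DX11 hull shader would perform to drive the tessellator.
// All names and constants here are illustrative, not from any real engine.
struct Vec3 { float x, y, z; };

static float Distance(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Map the distance from the camera to a tessellation factor in [1, 64]
// (64 is the maximum factor the DX11 tessellator supports). Patches close
// to the camera get subdivided finely, distant ones stay coarse.
static float TessFactorForPatch(const Vec3& patchCenter, const Vec3& cameraPos,
                                float nearDist, float farDist, float maxFactor)
{
    const float d = Distance(patchCenter, cameraPos);
    // 0 at nearDist (or closer), 1 at farDist (or further).
    const float t = std::clamp((d - nearDist) / (farDist - nearDist), 0.0f, 1.0f);
    // Interpolate from maxFactor down to 1.
    return 1.0f + (1.0f - t) * (maxFactor - 1.0f);
}

int main()
{
    const Vec3 camera{0.0f, 2.0f, 0.0f};
    const Vec3 patches[] = {{0, 0, 5}, {0, 0, 50}, {0, 0, 200}};

    for (const Vec3& p : patches)
    {
        const float factor = TessFactorForPatch(p, camera, 10.0f, 150.0f, 64.0f);
        std::printf("patch at z=%5.1f -> tess factor %5.1f\n", p.z, factor);
    }
    return 0;
}
```

The point of doing this per patch, every frame, is that the geometric detail follows the camera: you only pay for dense geometry where it is actually visible up close, which is exactly the workload that benefits from having 16 units handling it in parallel.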

I have mixed feelings about this myself. On the one hand, I think it’s great that nVidia is pushing polycount further. After all, even the most detailed games available at this point, such as Crysis, are still a far cry from actual CG movies. With high levels of dynamic tessellation, we can finally move towards Pixar RenderMan territory. Perhaps it will mean the end of the current in-between solutions like bump/parallax/occlusion mapping.
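
For what it’s worth, the “true geometry” alternative to those tricks is displacement mapping on top of tessellation. Here is a small C++ sketch of the idea, roughly what a DX11 domain shader does for each generated vertex; the height function and constants are made up for the example.

```cpp
#include <cmath>
#include <cstdio>

// CPU-side analogue of what a DX11 domain shader does with a height map:
// instead of faking relief in the pixel shader (bump/parallax mapping),
// each tessellated vertex is pushed outward along its normal by a sampled
// height. The procedural height function below is purely illustrative.
struct Vec3 { float x, y, z; };

static Vec3 Add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 Scale(const Vec3& v, float s)     { return {v.x * s, v.y * s, v.z * s}; }

// Stand-in for a height-map texture fetch.
static float SampleHeight(float u, float v)
{
    return 0.5f + 0.5f * std::sin(u * 20.0f) * std::cos(v * 20.0f);
}

// Displace a tessellated vertex along its (unit) normal.
static Vec3 DisplaceVertex(const Vec3& position, const Vec3& normal,
                           float u, float v, float displacementScale)
{
    const float height = SampleHeight(u, v);
    return Add(position, Scale(normal, height * displacementScale));
}

int main()
{
    // A flat patch facing up the y-axis; the tessellator would generate many
    // such (u, v) sample points across it, each becoming a real vertex.
    const Vec3 normal{0.0f, 1.0f, 0.0f};
    for (float u = 0.0f; u <= 1.0f; u += 0.25f)
    {
        const Vec3 p = DisplaceVertex({u, 0.0f, 0.0f}, normal, u, 0.5f, 0.1f);
        std::printf("u=%.2f -> displaced y=%.3f\n", u, p.y);
    }
    return 0;
}
```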

On the other hand… Ever since hardware T&L arrived, I’ve been arguing in favour of pushing polycount. The GPUs could handle far higher polycount, but developers didn’t really embrace the new opportunities. Polycount in games still increased very gradually (probably at least partly to continue supporting low-end hardware with slow or no hardware T&L). Outside of Crysis and 3DMark, not many games seemed to explore the possibilities of multiple millions of polygons on screen.

So I wonder how it will go this time. Will developers finally embrace the technology and bump up the detail? Or will they keep the detail down, aiming for the lowest common denominator of hardware with slow or no tessellation support?
