So, AMD has released a new Catalyst hotfix driver:
· The Catalyst Control Center includes an early prototype of some new tessellation controls. Our goal is to give users full control over the tessellation levels used in applications. The default “AMD Optimized” setting allows AMD, on a per-application basis, to set the best level of tessellation. The intention is to help users get the maximum visual benefit of tessellation, while minimizing the impact on performance. Currently no applications have been profiled, so the “AMD Optimized” setting will be non-operational until further notice.
· The “Use Application Settings” option gives applications full control over the tessellation level. Users can also manually set the maximum tessellation level used by applications with the slider control.
· The long term goal for the “AMD Optimized” setting is to use the Catalyst Application Profile mechanism to control the AMD recommended level of tessellation on a per application basis. AMD’s intention is to set the tessellation level such that we will not be reducing image quality in any way that would negatively impact the gaming experience.
It’s official: AMD is going for the driver hack to alleviate their tessellation deficiencies. Basically, what they’ve said above is that the driver can artificially limit the tessellation factors. This classifies as a driver cheat for the simple reason that it breaks the DirectX 11 and OpenGL 4.0 APIs: the application explicitly passes tessellation factors to the tessellation units via the shaders. What AMD is doing here is short-changing the application. The application shader will say: “I want tessellation factor 15”, and the driver says: “Sure… but I’m just going to use tessellation factor 5”.

The developer already has full control over the tessellation factors through the API, so technically there’s no need for this slider. The developer can build it into the game, just like any other quality slider commonly found in games (texture detail, shader detail, shadowmap size, etc.). And historically, developers have been quite good at giving games plenty of controls, so their games can be tuned to perform well on a wide variety of hardware (even if you disregard AMD hardware for a moment, there is quite a bit of performance disparity between nVidia’s slowest DX11 card and their fastest, so having control over tessellation detail is desirable even in an nVidia-only world). But now AMD allows the user (or effectively AMD themselves, when using the Optimized setting) to override any choices the developer made, and use lower detail.
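For concreteness: in a D3D11 hull shader, the patch-constant function writes these factors to the SV_TessFactor and SV_InsideTessFactor outputs, and the tessellator is supposed to use exactly those values. Below is a minimal C++ sketch of the difference between a conforming driver and the clamp described above; the function and variable names are mine, purely for illustration:

```cpp
// A minimal sketch, in plain C++, of the contract break described above.
// All names here are made up for illustration; they are not AMD's or Microsoft's.
#include <algorithm>
#include <cstdio>

// What the D3D11/OpenGL 4.0 spec requires: the factor the shader wrote is
// the factor the tessellator uses.
float conformingTessFactor(float requested)
{
    return requested;
}

// What the new slider amounts to: the driver silently clamps the
// application's request to its own limit.
float overriddenTessFactor(float requested, float driverMaxFactor)
{
    return std::min(requested, driverMaxFactor);
}

int main()
{
    float requested = 15.0f; // "I want tessellation factor 15"
    float slider    = 5.0f;  // "Sure... but I'm just going to use factor 5"
    std::printf("app asks %.0f, spec says %.0f, driver uses %.0f\n",
                requested,
                conformingTessFactor(requested),
                overriddenTessFactor(requested, slider));
    return 0;
}
```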
It’s pretty much equivalent to silently reducing texture quality, for example by using 16-bit textures instead of 32-bit ones, or by sampling lower-resolution textures than the application actually supplied (e.g. mipmap biasing). So basically it’s driver cheating: trading image quality for performance.
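For reference, the mipmap-biasing knob is real: D3D11 exposes it through the sampler state. Here is a sketch of where that lives, assuming a standard trilinear sampler (the helper function and the example bias value are mine, for illustration):

```cpp
// Sketch: a standard D3D11 trilinear sampler. The helper name is made up.
#include <d3d11.h>

HRESULT CreateTrilinearSampler(ID3D11Device* device, ID3D11SamplerState** out)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter     = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
    desc.AddressU   = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV   = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW   = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MipLODBias = 0.0f;  // neutral: sample the mip levels the app intended
    // If a driver behaved as though the app had set, say, 1.0f here, every
    // sample would come from one mip level lower in resolution: less memory
    // bandwidth, blurrier textures. That is the same kind of silent
    // quality-for-performance trade as the tessellation clamp.
    desc.MinLOD     = 0.0f;
    desc.MaxLOD     = D3D11_FLOAT32_MAX;
    return device->CreateSamplerState(&desc, out);
}
```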
I can only hope that AMD won’t enable this by default (like their Catalyst AI feature, which does some of the above texture-reduction tricks), since it would make any kind of comparison with competing cards even less apples-to-apples than it already is.
But I’m glad that AMD has now admitted that their hardware needs this option to lower tessellation quality, because they can’t do tessellation as well as nVidia’s hardware. So now you can pretend that you can run games at full tessellation detail, just like on nVidia cards. Except that, unlike an nVidia card, your AMD card won’t actually render at full detail; it cheats.
Update: I see that certain people refer to this as an ‘optimization’. It is NOT an optimization. To quote Wikipedia:
In computer science, program optimization or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. In general, a computer program may be optimized so that it executes more rapidly, or is capable of operating with less memory storage or other resources, or draw less power.
Making something work more efficiently or use fewer resources assumes that this ‘something’ stays the same. That is, you have the same input and the same output for a given function. What AMD is doing does NOT qualify as an optimization, because given the same input, you get less output. They are cheating by reducing the workload and producing incorrect results: results which do not match the specifications of DirectX and OpenGL.
Optimizing is about producing better, more efficient code. AMD doesn’t do anything to improve the code at all.
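To spell out the distinction with a toy example (entirely made up, in C++): an optimization produces the same answer with less work, while a cheat produces a different answer. The hidden cap below plays the role of the driver’s tessellation limit:

```cpp
// Toy illustration only: all names and the cap value are made up.
#include <cassert>
#include <cstdint>

// Reference implementation: sum of 1..n by looping.
std::uint64_t sumNaive(std::uint64_t n)
{
    std::uint64_t s = 0;
    for (std::uint64_t i = 1; i <= n; ++i) s += i;
    return s;
}

// A genuine optimization: the closed form gives the same answer for every
// input with far fewer operations. Same input, same output, less work.
std::uint64_t sumOptimized(std::uint64_t n)
{
    return n * (n + 1) / 2;
}

// A "cheat" in the sense used above: faster only because it silently does
// less work. The answer for the requested n is replaced by the answer for
// a smaller n, so the output no longer matches the specification.
std::uint64_t sumCheat(std::uint64_t n)
{
    const std::uint64_t cap = 1000;  // the hidden, driver-style limit
    return sumOptimized(n < cap ? n : cap);
}

int main()
{
    assert(sumNaive(10000) == sumOptimized(10000)); // optimization: identical output
    assert(sumCheat(10000) != sumOptimized(10000)); // cheat: different output
    return 0;
}
```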
Besides, you should ask yourself: why did AMD introduce this setting, while nVidia apparently doesn’t need it? Without any kind of cheat or hack, nVidia’s hardware ploughs through the tessellation workload. If all these tessellation games and benchmarks were so suboptimal, wouldn’t nVidia need a cheat as well?
Another side-effect of this feature is that AMD can no longer hide behind their excuse of ‘over-tessellation’. Namely, you can put the limit on any value you like. You will find that even in those ‘over-tessellated’ scenarios, setting the limit to 16x instead of the regular 64x does little, if anything, for performance. This indicates that although very high tessellation factors may sometimes occur in these scenes, the applications are not just tessellating the heck out of everything; rather, they are working in a nicely adaptive fashion. It’s much like adaptive anisotropic filtering: you can set the maximum to 16x, but it will barely be slower than 8x, or perhaps even 4x, because the adaptive algorithm rarely uses values higher than 8x under normal circumstances. Likewise, the tessellation factor doesn’t exceed 16x often enough here to impact performance. AMD’s problem is more structural than these sporadic high tessellation factors (as I already said earlier).
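To see why capping an adaptive scheme barely matters, here is a toy C++ model of a distance-adaptive tessellation factor, in the spirit of what such games compute in their hull shaders. All constants are invented for illustration; no real game uses these exact numbers:

```cpp
// Toy model: distance-adaptive tessellation factors. Constants are invented.
#include <algorithm>
#include <cstdio>

// Pick a tessellation factor that falls off with distance to the camera,
// clamped to [1, maxFactor], roughly what an adaptive hull shader does.
float adaptiveFactor(float distance, float maxFactor)
{
    const float referenceDist = 10.0f;           // factor 64 at 10 units (made up)
    float f = 64.0f * referenceDist / distance;  // closer surface -> finer tessellation
    return std::max(1.0f, std::min(f, maxFactor));
}

int main()
{
    // Compare the factors actually used under a 64x cap vs a 16x cap:
    for (float d : {10.0f, 50.0f, 100.0f, 500.0f, 1000.0f})
        std::printf("distance %6.0f: cap 64x -> %5.1f, cap 16x -> %5.1f\n",
                    d, adaptiveFactor(d, 64.0f), adaptiveFactor(d, 16.0f));
    // Only the very nearest patches ever exceed 16x, so the total workload,
    // and hence the frame time, is nearly identical under either cap.
    return 0;
}
```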