I read this interview recently. And really… what the heck is he on about? Make the API go away? Only an idiot would make such a request (newbies who haven’t programmed in the DOS era and don’t understand what DirectX is for). The API is a hardware abstraction layer. Without hardware abstraction, a developer has to implement separate code paths for all common hardware. The biggest reason why you wouldn’t want that is that it forces hardware to become backwards compatible. That brings us back to the lowest-common-denominator culture we had in the DOS era, with sound cards for example.
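To make that concrete, here is a minimal C++ sketch of the idea (the names are hypothetical, not any real driver interface): the game codes against one interface, and each vendor's driver implements it for its own hardware.

```cpp
// Conceptual sketch only: what "API as hardware abstraction layer" means.
#include <cstdio>

struct GpuDevice {                        // the abstraction layer (the "API")
    virtual void draw(int vertexCount) = 0;
    virtual ~GpuDevice() = default;
};

struct VendorADevice : GpuDevice {        // vendor A's driver
    void draw(int n) override { std::printf("A-specific path: %d verts\n", n); }
};

struct VendorBDevice : GpuDevice {        // vendor B's driver
    void draw(int n) override { std::printf("B-specific path: %d verts\n", n); }
};

// The game only ever sees GpuDevice. Without this layer it would need a
// hand-written path per GPU family, and new hardware would have to mimic
// old hardware to keep existing binaries working.
void renderFrame(GpuDevice& gpu) { gpu.draw(3); }
```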
Sound cards had to be compatible with the SoundBlaster, since that was the most popular card. You could buy an alternative card, but most games would simply use it as a SoundBlaster, ignoring the extra features your card might offer. This made the hardware more expensive as well, since every card had to include a Yamaha FM synthesizer chip for SoundBlaster compatibility (a chip that also generated noise: I always muted my FM chip whenever I wasn’t using it, because it was the sole source of hiss. The wavetable synth and DACs had over 90 dB SNR, so they were about as quiet as a CD player).
Thanks to DirectX, SoundBlaster compatibility was no longer required, and both the quality and the feature sets of sound cards improved greatly once they were freed from hardware-level compatibility.
The x86 CPU suffers from the same problem: every new CPU has to support all the previous x86 instructions and modes, no matter how irrelevant they might be to modern software. As a result, every x86 CPU comes with a very complex x86 decoder, which translates the legacy code to an internal format (as I also explained in an earlier blog).
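As a rough illustration (conceptual only; real micro-op formats are undocumented and vendor-specific), this little C++ model shows how a decoder might split one legacy read-modify-write instruction into simpler internal operations:

```cpp
// Conceptual model of x86 decoding, not any actual microarchitecture.
#include <cstdio>
#include <string>
#include <vector>

struct MicroOp { std::string op; std::string dst; std::string src; };

// A CISC-style instruction like "add [rbx], rax" (memory operand) cannot
// be executed directly by a load/store core; the decoder emits a sequence
// of simpler internal ops instead. The rest of the core never sees x86.
std::vector<MicroOp> decodeAddMemReg() {
    return {
        { "load",  "tmp",   "[rbx]" },   // read the memory operand
        { "add",   "tmp",   "rax"   },   // perform the ALU operation
        { "store", "[rbx]", "tmp"   },   // write the result back
    };
}

int main() {
    for (const auto& u : decodeAddMemReg())
        std::printf("%-5s %-6s %s\n", u.op.c_str(), u.dst.c_str(), u.src.c_str());
}
```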
Currently we are FINALLY at the point where we could live without any kind of VGA compatibility on our videocards, so the last bit of legacy crud can be removed. And then people want to go back to hardcoding software to the hardware? Bad idea, VERY bad! I especially don’t understand why something like this would come from a hardware manufacturer. GPU designers, of all people, should realize that the freedom they have in designing their architecture comes from the fact that the hardware is only ever used through hardware abstraction layers, which lets them completely overhaul their architecture and instruction set every few years. For example, although a DX10 card can run DX8 and DX9 code, the architecture is entirely different: a DX10 card has unified shaders which are fully floating-point. The driver’s compiler translates the legacy DX8/9 shaders to floating-point unified shaders, without the actual hardware having to support any specific instructions from DX8/9. There is no need for complex decoding and translation hardware to ensure that legacy software still works; the driver takes care of the translation at compile time.
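As a small sketch of what such translation covers (assuming an already-initialized IDirect3DDevice9 pointer; error handling omitted): a classic fixed-function texture stage from the DX8/9 days, for which DX10-class hardware has no dedicated circuitry at all.

```cpp
// DX8/9-era fixed-function setup. On a DX10 card there is no silicon
// for this; the driver compiles such state into an equivalent
// floating-point shader running on the unified units.
#include <d3d9.h>

void setModulateStage(IDirect3DDevice9* device)
{
    // "Multiply the texture color by the vertex diffuse color."
    device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);

    // Conceptually, the driver lowers this to a pixel shader along the
    // lines of:  return tex2D(s0, uv) * diffuse;
    // No DX8/9-specific instructions need to exist in the hardware.
}
```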
Sure, it’s not perfect, but neither is the console solution. Although the Xbox 360 and PS3 may be more efficient than a PC, their hardware cannot be updated. As a result, consoles have a relatively long life with no hardware developments in between. With a PC, you don’t have to wait 5-6 years for a new generation of hardware: newer, more powerful CPUs and GPUs are released all the time, and you can use them as soon as they are released. And new consoles are not even guaranteed to run older console games.
I also don’t understand the complaints about API inefficiency in general. Microsoft fundamentally changed the driver model with DirectX 10, making state management a lot more efficient. In DX10+, you also need to program nearly everything in shaders, where in DirectX 9 and earlier it was available through renderstates/fixed function. So DX10+ already feels a lot like writing everything yourself (although unlike e.g. OpenCL, you can still make use of specific fixed-function hardware in the pipeline, for maximum efficiency). If anything, these complaints are a few years too late: they apply more to DirectX 9 (and OpenGL) than to DX10+. Ironically though, the DX9 drivers are so optimized that DX9 code tends to run better than code on any other API. Perhaps AMD should spend some more time optimizing their DX10+ drivers first?
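To illustrate the DX10 state model (a sketch; device creation and error handling omitted): state is baked into an immutable object, validated once at creation time, and bound with a single cheap call afterwards, instead of DX9's many individual SetRenderState calls that the driver had to re-validate at draw time.

```cpp
// DX10 state objects: create once (validated here), bind cheaply per frame.
#include <d3d10.h>

ID3D10RasterizerState* createWireframeState(ID3D10Device* device)
{
    D3D10_RASTERIZER_DESC desc = {};
    desc.FillMode        = D3D10_FILL_WIREFRAME;
    desc.CullMode        = D3D10_CULL_BACK;
    desc.DepthClipEnable = TRUE;

    ID3D10RasterizerState* state = nullptr;
    device->CreateRasterizerState(&desc, &state);  // validated once, here
    return state;
}

// Per frame: a single call, no re-validation of dozens of renderstates:
//   device->RSSetState(state);
```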