I just stumbled upon this blog by Alex St. John, one of the creators of DirectX at Microsoft:
http://www.alexstjohn.com/WP/2013/07/22/the-evolution-of-direct3d/
It gives a very interesting view of how OpenGL and Direct3D related to each other within Microsoft in the early days.
Microsoft certainly was not anti-OpenGL. On the contrary, one could argue that Microsoft’s commitment to OpenGL is what made gaming a priority for OpenGL and the ARB, and made it a direct competitor to Direct3D. Be sure to also read the comments, as Alex St. John (TheSaint) posts some interesting background there.
It also shows that Direct3D managed to fulfill the goals it set out to achieve.
And although Direct3D did not quite evolve the way St. John had originally imagined after he left, the chosen direction is not necessarily a bad one.
He also has a nice archive of old DirectX resources for you to download, including some very early SDKs and documentation: http://www.alexstjohn.com/WP/2013/02/08/direct-downloads/
One of the (many) interesting points of that blog entry was about Intel's (supposedly) deliberate attempt to hold back the development of PCI bus speed, to keep people from using auxiliary cards to offload computations from their CPUs, out of fear that it would make their CPUs irrelevant (at that time).
Well, thanks to that we now have exactly the opposite of what Intel had wanted: GPGPUs. =b
Overall, a fascinating read, Scali.
Thanks for pointing that blog out.
Yea, that PCI-bus story is one that I'm not really sure about. After all, even if you're going for software rendering, you still need a lot of bandwidth to render at decent resolutions and framerates.
One thing that Intel may have done deliberately is to keep the bus a one-way street. Writing to video memory was always quite fast, but reading from video memory remained very slow until the PCI-e bus came along.
So that meant that you could not perform calculations on the GPU and read the results back with the CPU.
Then again, it may not have been deliberate. It could just be that the technology of the time did not allow for efficient communication in both directions. I don't have the expertise to judge that.
At any rate, it resulted in offloading as much of the graphics workload as possible to the GPU, making it nearly self-sufficient with its own video memory.
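To give an idea of what that one-way street looked like in practice, here is a minimal sketch (my own illustration, not something from St. John's post) of the classic readback path in OpenGL: rendering writes to video memory through the fast direction, but getting the results back to the CPU means a glReadPixels that stalls the pipeline and crawls back over the bus, which is exactly why CPU readback of GPU results was impractical before PCI-e.

```c
/* Minimal sketch, for illustration only: reading the framebuffer back
 * into system memory. On PCI/AGP-era hardware this was the slow
 * direction of the bus. Assumes a current GL context rendering into a
 * framebuffer of width x height. */
#include <stdlib.h>
#include <GL/gl.h>

unsigned char *read_back_framebuffer(int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * height * 4);
    if (!pixels)
        return NULL;

    glFinish();                           /* wait until the GPU has finished rendering */
    glReadPixels(0, 0, width, height,     /* copy video memory -> system memory over the bus */
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return pixels;
}
```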