A while ago, I wrote down a brief summary of the evolution of (consumer) 3D graphics hardware for someone. I concentrate mainly on PC hardware here, because it would be impossible to cover every custom graphics solution out there (such as the Amiga, which I covered earlier, and which could do hardware-accelerated line drawing and flood filling, allowing you to build your own polygon routines). I have updated it slightly, and I publish it here in the hope it may interest some of you:
- 3D rendering was completely CPU-based. The CPU would perform all lighting, transforming, rasterizing and finally the actual pixel drawing. Although there were some videocards in the early 90s which could render lines or even polygons in hardware, these were sold as ‘Windows accelerator’ cards, and were not really used for rendering 3D graphics/games. The videocards were mainly used as ‘dumb’ framebuffer devices. Early graphics card standards were set by IBM, with the exception of Hercules monochrome graphics. (Pre-accelerator era, CGA/EGA/VGA)
- The inner loop of the triangle filling routine was accelerated by the videocard. A triangle is rendered as two scanline-oriented quads (upper and lower half). The CPU could pass these quads to the videocard, and the scanlines were filled automatically. Basic texturing and shading could be applied as well, but the CPU still had to do the setup to calculate the gradients for the quads. (e.g. early VooDoo cards, pre-Direct3D to early Direct3D, proprietary 3D APIs and MiniGL)
- Rasterizing and triangle gradient setup were accelerated by the videocard. The CPU could now feed triangles in screenspace directly to the videocard. (Roughly Direct3D5-era)
- The dawn of the GPU: Transforming and lighting were accelerated by the videocard. The CPU could now pass triangles in object space (which could be stored in videomemory, since they would be static throughout the lifetime of the object), transform matrices and light parameters to the GPU, and the GPU would completely accelerate the drawing process from start to finish. The term ‘GPU’ (Graphics Processing Unit) was coined by nVidia, to imply that the videocard was now a complete processor in its own right. ATi instead tried to market the term VPU (Visual Processing Unit), but that term did not stick. (Direct3D7-era)
- The dawn of programmable shaders: Up to now, lighting and shading were fixed-function, and operated as a state machine. The CPU would set a few states to control how the GPU would perform shading. This state machine had become so complex (and, because of multitexturing, already worked in multiple stages) that it started to make sense to model these states as simple instructions with input and output registers. The fixed-function T&L and shading operations could now be ‘scripted’ in an assembly-like language. Purists could argue that the term GPU was not appropriate for the earlier non-programmable chips. (Direct3D8-era)
- Unified shaders and GPGPU: Up to now, vertex processing and pixel processing were two separate types of operations, requiring separate types of execution units. The GPU would have a small set of vertex units, which had high-precision floating point and a relatively powerful instruction set. Then it would have a larger set of pixel units, which were more aimed at texturing, and had lower-precision arithmetic and a simpler, less powerful instruction set. You basically had to use two languages when programming: vertex shader language and pixel shader language. But now, all shaders were unified, so you could use the same high-precision, powerful instructions for pixel shaders as for vertex shaders. The hardware now also used a single large array of shader units, which could dynamically be allocated to whatever shaders were running (effectively an automatic load balancing system between vertex processing and pixel processing). At this time, nVidia also introduced the first real GPGPU: the GeForce 8-series. Its unified shaders were linked to a large shared cache, and could be used outside the graphics pipeline, which had been hardwired up to now (if you wanted to do any calculations, you’d always have to set up geometry and render actual triangles, in order to make pixel shaders execute and output data to a buffer). (Direct3D10-era)
While the above listing is not complete, and the PC platform was generally not the first to receive certain technology, it should at least give you a bit of insight into how graphics hardware evolved, and how more and more parts were offloaded from the CPU to the videocard. I pick out a few major changes in how processing of 3D was handled in general (‘inflection points’), skipping over the smaller evolutions in graphics hardware, such as multitexturing, stencil buffering, higher-precision pixel processing and whatnot. As such, the whole Direct3D9 generation is skipped. While this generation of hardware was far more widespread than the earlier Direct3D8 hardware, from a technical point of view it mainly took the existing D3D8 technology one step further, rather than changing the way graphics were rendered altogether.
Perhaps it is also good to mention Silicon Graphics (SGI) here. This was a company dedicated to graphics computing from an early stage. In the early 80s they started developing their own graphics terminals and UNIX workstations, with the hardware mainly designed for graphics processing. They called their product line IRIS (Integrated Raster Imaging System). They developed a graphics API under the name of IRIS GL (Graphics Library). Their own flavour of the UNIX OS would go by the name of IRIX.
SGI designed some custom chips to accelerate certain 3D graphics tasks and incorporate them in IRIS GL. Initially they were mainly math co-processors for efficient geometry processing. This evolved into fully accelerated 3D. In 1992, SGI decided to open up IRIS GL to third party licensees, and renamed it OpenGL.
SGI’s custom graphics chips were eventually eclipsed by standard consumer-grade ‘gaming’ graphics cards (in fact, some of the last SGI workstations actually used ATi chips derived from their Radeon line). This led to SGI’s demise in 2009. However, OpenGL was not under the control of SGI, but of an independent Architecture Review Board (ARB), and was transferred to the Khronos Group in 2006. OpenGL still lives on today, and is still actively being updated by the Khronos Group.
So SGI played a significant role in the early development of hardware acceleration and OpenGL. In the early years, PC graphics cards were mainly trying to play catch-up with SGI’s hardware. The big change came around the time of the first ‘GPU’, nVidia’s GeForce 256, in 1999: a card which could accelerate pretty much the entire OpenGL featureset (and actually had mature OpenGL drivers to do so), and which rivalled SGI’s much more expensive workstations in both performance and image quality.
Another honourable mention is for the company 3DFX, founded by former SGI employees in 1994. They were one of the first to bring a 3D accelerator card to the PC: the 3DFX VooDoo in 1996, the first truly successful 3D accelerator for PC. This was one of the most dramatic revolutions in the history of the PC platform. Before 3DFX, there were quite a few different companies making graphics chips for PCs. The younger generation will probably never have heard of them at all (such as Tseng Labs, Number Nine, Matrox, Cirrus Logic, Western Digital/Paradise, S3, Trident). The reason for this is one company: 3DFX.
The first VooDoo card was nothing less than a bombshell. It caught most graphics chip companies off guard completely. Many of them never even managed to release a 3D accelerator card at all, before they had to retreat from the graphics market. It just went THAT quickly. There were a few who actually did manage to release a 3D accelerator (e.g. Matrox, ATi, Trident, S3, Paradise), but in most cases it was nowhere near good enough to compete with 3DFX’ offerings. And 3DFX just kept churning out more powerful VooDoo cards with their SGI expertise.
Of these companies, only a few still survive today. S3 more or less lives on through VIA these days. Matrox and ATi were the only two ‘old’ companies that managed to survive 3DFX’ onslaught. Matrox managed to compete for a short while, and at one time actually had the fastest video card on the market (the Millennium G200). However, things went downhill from there (more on that later), and they had to retreat from the consumer market. They are still around, and still produce graphics hardware, but they aim at niche markets now, not high-performance 3D acceleration.
ATi did not get off to a very good start in 3D acceleration. Their early chips (the Rage series) were notoriously slow and buggy. However, since ATi was a popular choice for OEMs, they managed to survive even though their products were not too competitive. ATi’s products improved steadily however, and when they released their Radeon line, they were starting to compete for the performance crown.
Another honourable mention is PowerVR (technically that is not the company name; the company name changed many times, while the PowerVR brand name was kept). During the early VooDoo days, a few PowerVR chips were released, with moderate success. A PowerVR chip also powered the Sega Dreamcast console. However, PowerVR eventually had to withdraw from the PC market as well. Like Matrox, they managed to find a niche: mobile and embedded devices. Today, PowerVR chips power many smartphones and tablets, including the most important ones: the iPhones and iPads.
In a twist of irony, 3DFX went out of business almost as quickly as all those established graphics companies that they themselves had pushed out of business. And even more irony lies in the fact that yet another newcomer played a big part in this. That newcomer was nVidia, founded in 1993; their first proper PC video card was the Riva128, released in 1997 (after the ill-fated STG-2000 in 1995).
nVidia quickly started to compete with 3DFX head-on for the performance crown, and delivered more features and better image quality with the TNT and TNT2 series. 3DFX quickly found themselves unable to compete, and by 2000, nVidia bought out 3DFX.
Around this time, Matrox also started to slip in terms of performance and features. Then nVidia came out with the GeForce series and ATi came out with the Radeon series, and Matrox’ new chips were not even remotely competitive anymore, so Matrox retreated, leaving only nVidia and ATi to compete.
Technically nVidia won that battle as well, because after a few years of stiff competition, ATi was acquired by AMD in 2006. AMD kept the ATi brand name alive for a few more years, but in 2010, the ATi brand was dropped, and graphics products are now marketed as “AMD Radeon”.
All in all, SGI, 3DFX and nVidia have been the most significant companies in the history of 3D graphics hardware.