A brief overview of the history of 3D graphics hardware

A while ago, I wrote down a brief summary of the evolution of (consumer) 3D graphics hardware for someone. I concentrate mainly on PC hardware here, because it would be impossible to try and cover every custom graphics solution out there (such as the Amiga, which I covered earlier, and which could do hardware-accelerated line drawing and flood filling, allowing you to create your own polygon routine). I have updated it slightly, and I am publishing it here in the hope it may interest some of you:

  1. 3D rendering was completely CPU-based. The CPU would perform all lighting, transforming, rasterizing and finally the actual pixel drawing. Although there were some videocards in the early 90s which could render lines or even polygons in hardware, these were sold as ‘Windows accelerator’ cards, and were not really used for rendering 3D graphics/games. The videocards were mainly used as ‘dumb’ framebuffer devices. Early graphics card standards were set by IBM, with the exception of Hercules monochrome graphics. (Pre-accelerator era, CGA/EGA/VGA)
  2. The inner loop of the triangle-filling routine was accelerated by the videocard. A triangle is rendered as two scanline-oriented quads (upper and lower half). The CPU could pass these quads to the videocard, and the scanlines were filled automatically. Basic texturing and shading could be applied as well, but the CPU still had to do the setup to calculate the gradients for the quads (see the sketch after this list). (e.g. early VooDoo cards, pre-Direct3D to early Direct3D, proprietary 3D APIs and MiniGL)
  3. Rasterizing and triangle gradient setup were accelerated by the videocard. The CPU could now feed triangles in screenspace directly to the videocard. (Roughly Direct3D5-era)
  4. The dawn of the GPU: Transforming and lighting were accelerated by the videocard. The CPU could now pass triangles in object space (which could be stored in videomemory, since they would be static throughout the lifetime of the object), transform matrices and light parameters to the GPU, and the GPU would completely accelerate the drawing process from start to finish. The term ‘GPU’ (Graphics Processing Unit) was popularized by nVidia with the GeForce 256, to imply that the videocard was now a complete processor in its own right. ATi instead tried to market the term VPU (Visual Processing Unit), but that term did not stick. (Direct3D7-era)
  5. The dawn of programmable shaders: Up to now, lighting and shading were fixed-function, and operated as a state machine. The CPU would set a few states to control how the GPU would perform shading. This state machine had become so complex (and, because of multitexturing, already worked in multiple stages) that it started to make sense to model these states as simple instructions with input and output registers. The fixed-function T&L and shading operations could now be ‘scripted’ in an assembly-like language (a toy illustration of the idea follows after this list). Purists could argue that the term GPU was not appropriate for the earlier non-programmable chips. (Direct3D8-era)
  6. Unified shaders and GPGPU: Up to now, vertex processing and pixel processing were two separate types of operations, requiring separate types of execution units. The GPU would have a small set of vertex units, which had high-precision floating point and a relatively powerful instruction set, and a larger set of pixel units, which were more aimed at texturing, with lower-precision arithmetic and a simpler, less powerful instruction set. You basically had to use two languages when programming: vertex shader language and pixel shader language. But now all shaders were made unified, so you could use the same high-precision, powerful instructions for pixel shaders as for vertex shaders. The hardware also used a single large array of shader units, which could dynamically be allocated to whatever shaders were running (effectively an automatic load-balancing system between vertex processing and pixel processing). At this time, nVidia also introduced the first real GPGPU: the GeForce 8-series. Its unified shaders were linked to a large shared cache, and could be used outside the graphics pipeline, which had been hardwired up to then (if you wanted to do any calculations, you’d always have to set up geometry and render actual triangles, in order to make pixel shaders execute and output data to a buffer). (Direct3D10-era)
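
To make steps 2 and 3 a bit more concrete, here is a minimal CPU-side rasterizer sketch in C: the triangle is sorted by Y, split at the middle vertex into an upper and a lower half, and for each half the per-scanline increments (the ‘gradients’) of the edges are computed, after which the inner loop simply steps and fills. All names are made up for illustration; this is roughly the work that step-2 hardware (the inner loop) and step-3 hardware (the setup) took off the CPU’s hands, not any real API of the era.

#include <stdint.h>

/* Hypothetical 8-bit framebuffer; in the era described, this would live in the videocard's memory. */
#define WIDTH  320
#define HEIGHT 200
static uint8_t framebuffer[WIDTH * HEIGHT];

typedef struct { float x, y; } Vertex;

/* Fill one half ('quad') of a triangle: scanlines y0..y1, between two edges.
 * xa/xb are the edge x positions at y0, dxa/dxb their per-scanline increments (gradients). */
static void fill_half(int y0, int y1, float xa, float dxa, float xb, float dxb, uint8_t color)
{
    for (int y = y0; y < y1; ++y, xa += dxa, xb += dxb) {
        if (y < 0 || y >= HEIGHT) continue;              /* crude clipping */
        int left  = (int)(xa < xb ? xa : xb);
        int right = (int)(xa < xb ? xb : xa);
        if (left < 0)      left  = 0;
        if (right > WIDTH) right = WIDTH;
        for (int x = left; x < right; ++x)               /* the inner loop of step 2 */
            framebuffer[y * WIDTH + x] = color;
    }
}

/* CPU-side setup: sort the vertices by y, split at the middle vertex, and compute the
 * edge gradients.  This is the 'triangle setup' that step-3 hardware also absorbed. */
static void draw_triangle(Vertex a, Vertex b, Vertex c, uint8_t color)
{
    Vertex t;
    if (a.y > b.y) { t = a; a = b; b = t; }
    if (b.y > c.y) { t = b; b = c; c = t; }
    if (a.y > b.y) { t = a; a = b; b = t; }
    if (c.y <= a.y) return;                              /* zero-height triangle */

    float dac = (c.x - a.x) / (c.y - a.y);               /* gradient of the long edge a-c */

    if (b.y > a.y) {                                     /* upper half: edges a-b and a-c */
        float dab = (b.x - a.x) / (b.y - a.y);
        fill_half((int)a.y, (int)b.y, a.x, dab, a.x, dac, color);
    }
    if (c.y > b.y) {                                     /* lower half: edges b-c and a-c */
        float dbc = (c.x - b.x) / (c.y - b.y);
        float x_ac = a.x + dac * (b.y - a.y);            /* where the long edge is at b.y */
        fill_half((int)b.y, (int)c.y, b.x, dbc, x_ac, dac, color);
    }
}

int main(void)
{
    draw_triangle((Vertex){ 50, 20 }, (Vertex){ 250, 80 }, (Vertex){ 120, 180 }, 255);
    return 0;
}

Texturing and shading work the same way: u, v and colour values get their own per-scanline gradients and are stepped along inside the inner loop, which is exactly the setup work mentioned in step 2 (sub-pixel precision is ignored here for brevity).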
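
To illustrate the shift described in step 5: instead of setting a handful of fixed-function states, the application supplies a small program of instructions that read and write registers. The following is only a toy model in C of that idea; the opcodes and register layout are invented for illustration and are not real vs/ps 1.x shader assembly.

#include <stddef.h>
#include <stdio.h>

typedef struct { float x, y, z, w; } Vec4;

/* Toy register file: input registers (v), constants (c) and temporaries (r). */
typedef struct { Vec4 v[2]; Vec4 c[2]; Vec4 r[2]; } Registers;

typedef enum { OP_MUL, OP_ADD } Opcode;

/* One 'instruction': dst = src0 (op) src1.  Operands point into the register file. */
typedef struct { Opcode op; Vec4 *dst; const Vec4 *src0, *src1; } Instr;

static Vec4 vmul(Vec4 a, Vec4 b) { return (Vec4){ a.x*b.x, a.y*b.y, a.z*b.z, a.w*b.w }; }
static Vec4 vadd(Vec4 a, Vec4 b) { return (Vec4){ a.x+b.x, a.y+b.y, a.z+b.z, a.w+b.w }; }

/* 'Execute' the shader: run the instruction list once (per vertex or per pixel). */
static void run_shader(const Instr *prog, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        *prog[i].dst = (prog[i].op == OP_MUL) ? vmul(*prog[i].src0, *prog[i].src1)
                                              : vadd(*prog[i].src0, *prog[i].src1);
}

int main(void)
{
    Registers reg = {
        .v = { { 1.0f, 0.5f, 0.25f, 1.0f }, { 0.5f, 0.5f, 0.5f, 1.0f } },  /* e.g. texture sample, vertex colour */
        .c = { { 0.1f, 0.1f, 0.1f, 0.0f } },                               /* e.g. an ambient term */
    };
    Instr program[] = {                                  /* r0 = v0 * v1; r0 = r0 + c0 */
        { OP_MUL, &reg.r[0], &reg.v[0], &reg.v[1] },
        { OP_ADD, &reg.r[0], &reg.r[0], &reg.c[0] },
    };
    run_shader(program, 2);
    printf("result: %.2f %.2f %.2f %.2f\n", reg.r[0].x, reg.r[0].y, reg.r[0].z, reg.r[0].w);
    return 0;
}

What used to be a stack of fixed states (‘modulate this, then add that’) becomes a short instruction sequence, and a new combination no longer requires a new hardware state, just a different program.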

While the above is not a complete listing, and the PC platform was generally not the first to receive certain technology, it should at least give you a bit of insight into how graphics hardware evolved, and how more and more parts were offloaded from the CPU to the videocard. I pick out a few major changes in how 3D processing was handled in general (‘inflection points’), skipping over the smaller evolutions in graphics hardware, such as multitexturing, stencil buffering, higher-precision pixel processing and whatnot. As such, the whole Direct3D9 generation is skipped. While this generation of hardware was far more widespread than the earlier Direct3D8 hardware, from a technical point of view it mainly took the existing D3D8 technology one step further, rather than changing the way graphics were rendered altogether.

Silicon Graphics

Perhaps it is also good to mention Silicon Graphics (SGI) here. This was a company dedicated to graphics computing from an early stage. In the early 80s they started developing their own graphics terminals and UNIX workstations, with the hardware mainly designed for graphics processing. They called their product line IRIS (Integrated Raster Imaging System). They developed a graphics API under the name of IRIS GL (Graphics Language). Their own flavour of the UNIX OS would go by the name of IRIX.

SGI designed some custom chips to accelerate certain 3D graphics tasks, and exposed them through IRIS GL. Initially these were mainly math co-processors for efficient geometry processing; this evolved into fully accelerated 3D. In 1992, SGI decided to open up IRIS GL to third-party licensees, and reworked it into OpenGL.

SGI’s custom graphics chips were eventually eclipsed by standard consumer-grade ‘gaming’ graphics cards (in fact, some of the last SGI workstations actually used ATi chips derived from their Radeon line). This led to SGI’s demise in 2009. However, OpenGL was not under the control of SGI, but of an independent Architecture Review Board (ARB), and was transferred to the Khronos Group in 2006. OpenGL still lives on today, and is still actively being updated by the Khronos Group.

So SGI played a significant role in the early development of hardware acceleration and OpenGL. In the early years, PC graphics cards were mainly trying to play catch-up with SGI’s hardware. The big change came around the time of the first ‘GPU’, nVidia’s GeForce 256, in 1999: a card which could accelerate pretty much the entire OpenGL featureset (and actually had mature OpenGL drivers to do so), and could rival SGI’s much more expensive workstations in both performance and image quality.

3DFX

Another honourable mention goes to 3DFX, a company founded by former SGI employees in 1994. They were one of the first to bring a 3D accelerator card to the PC: the 3DFX VooDoo in 1996, the first truly successful 3D accelerator for the PC. This was one of the most dramatic revolutions in the history of the PC platform. Before 3DFX, there were quite a few different companies making graphics chips for PCs. The younger generation will probably never have heard of them at all (such as Tseng Labs, Number Nine, Matrox, Cirrus Logic, Western Digital/Paradise, S3, Trident). The reason for this is largely that one company: 3DFX.

The first VooDoo card was nothing less than a bombshell. It caught most graphics chip companies off guard completely. Many of them never even managed to release a 3D accelerator card at all, before they had to retreat from the graphics market. It just went THAT quickly. There were a few who actually did manage to release a 3D accelerator (e.g. Matrox, ATi, Trident, S3, Paradise), but in most cases it was nowhere near good enough to compete with 3DFX’ offerings. And 3DFX just kept churning out more powerful VooDoo cards with their SGI expertise.

Of these companies, only a few still survive today. S3 more or less lives on through VIA these days. Matrox and ATi were the only two ‘old’ companies that managed to survive 3DFX’ onslaught. Matrox managed to compete for a short while, and at one time actually had the fastest video card on the market (the Millennium G400 MAX). However, things went downhill from there (more on that later), and they had to retreat from the consumer market. They are still around, and still produce graphics hardware, but they aim at niche markets now, not high-performance 3D acceleration.

ATi did not get off to a very good start in 3D acceleration. Their early chips (the Rage series) were notoriously slow and buggy. However, since ATi was a popular choice for OEMs, they managed to survive even though their products were not too competitive. ATi’s products improved steadily however, and when they released their Radeon line, they were starting to compete for the performance crown.

Another honourable mention is PowerVR (technically that is the brand name rather than the company name; the company name changed many times, while the PowerVR brand was kept). During the early VooDoo days, a few PowerVR chips were released, with moderate success. A PowerVR chip also powered the Sega Dreamcast console. However, PowerVR eventually had to withdraw from the PC market as well. Like Matrox, they managed to find a niche. Their niche was mobile and embedded devices. Today, PowerVR chips power many smartphones and tablets, including the most important ones: the iPhones and iPads.

nVidia

In a twist of irony, 3DFX went out of business almost as quickly as all those established graphics companies that they themselves had pushed out of business. Even more ironic is that yet another newcomer played a big part in this. That newcomer was nVidia, founded in 1993; their first proper PC video card was released in 1997 (the Riva 128, after the ill-fated STG-2000 in 1995).

nVidia quickly started to compete with 3DFX head-on for the performance crown, and delivered more features and better image quality with the TNT and TNT2 series. 3DFX soon found themselves unable to compete, and in late 2000, nVidia bought out 3DFX.

Around this time, Matrox also started to slip in terms of performance and features. Then nVidia came out with the GeForce series and ATi came out with the Radeon series, and Matrox’ new chips were not even remotely competitive anymore, so Matrox retreated, leaving only nVidia and ATi to compete.

Technically nVidia won that battle as well, because after a few years of stiff competition, ATi was acquired by AMD in 2006. AMD kept the ATi brand name alive for a few more years, but in 2010, the ATi brand was dropped, and graphics products are now marketed as “AMD Radeon”.

All in all, SGI, 3DFX and nVidia have been the most significant companies in the history of 3D graphics hardware.


13 Responses to A brief overview of the history of 3D graphics hardware

  1. MacOS9 says:

    Thanks for another informative article Scali, this time on the history of 3D graphics. I feel old reading this since I remember names like “Matrox” (also there was “Diamond 3D” if I remember correctly, from the late 90s).

    A couple of questions for you:

    (1) When looking for a new computer, should one steer clear of AMD (previously ATi) graphics chips and look for nVidia instead, or is this irrelevant in 2012? (I remember there being various blogs on the internet a couple of yrs. ago suggesting that games take a serious frame-rate hit with ATi [or maybe this was only a problem on Macs?].)

    (2) I’ve always been confused by the term “2.5D games,” a term often thrown around in the early 2000s to differentiate between games like Quake (fully 3D supposedly), and games built around the BUILD and Marathon engines (e.g., Duke Nukem 3D, Damage Incorporated, Shadow Warrior). What exactly is 2.5D, considering that 2D refers to classic side-scrollers like the first two Prince of Persia titles?

    • Scali says:

      Yes, Diamond was a big name, and pretty much a guarantee for good performance videocards back then. But they did not design their own chips. They used various chips, including Tseng Labs, Cirrus Logic, S3 and 3DFX. I’m not sure why they went out of business exactly, perhaps because they stuck to 3DFX rather than going for nVidia chips? I don’t recall Diamond ever using nVidia chips at least. Orchid is another company from that era that suddenly disappeared (they were actually the first to sell VooDoo cards). Diamond was restarted a few years ago, and they make videocards again: http://www.diamondmm.com/ Although I have never seen any for sale in my area.

      Matrox built their own cards (as did ATi back then), but funny enough their first 3D card, the m3D, was the first and only time that Matrox ever used a third-party chip. They chose a PowerVR PCX2. Probably a good indication of the blow that 3DFX dealt. And it may have been the reason why Matrox was one of the few who could survive the initial blow, and continue to compete for a few years.

      As for your questions:
      (1) I don’t know about Mac performance specifically, but on Windows, AMD cards are generally well-matched to nVidia cards in terms of price/performance. Whichever you prefer depends on the whole package deal (things like power consumption, noise, extra features like Eyefinity, 3D Vision, PhysX and whatnot). However, AMD generally has poorer OpenGL support than nVidia. That may hurt Mac somewhat. On Windows it is not a problem, since hardly any software uses OpenGL anymore, it’s all Direct3D. The only time I had a problem with my Radeon HD5770 was when I wanted to run Rage. It took a while for AMD to release drivers that would run Rage properly. But once those drivers were out, Rage worked just fine. So it’s not that big of a deal really. Bit of an inconvenience, but enough to “steer clear” of AMD? Well, I don’t think so. Not for Windows anyway.

      (2) The term 2.5D means that things look 3D, in the sense that there is perspective, but you can’t really move around in 3D. Also, these games generally still used scaled sprites for characters, rather than actual 3D models. These sprites would not change orientation when you moved around, but always faced you the same way.
      For example, Wolfenstein 3D could only draw walls, not ceilings and floors. And you could only rotate left and right, not up and down. Later games, like Doom and Duke Nukem 3D, would be able to draw ceilings and floors, but they were still using a hack. They were limited to horizontal or vertical planar surfaces, and you could still not look up and down. Some of these games, such as Hexen, had a hack that allowed you to look up and down, but it resulted in clearly distorted visuals, because they were just tweaking some precalculated tables, rather than actually rendering the planes at an angle.

      Descent was one of the first games that had a full textured 3D world which allowed full motion in all directions. But I believe it still used sprites for characters/objects. Quake used real polygonal models for characters/objects, so that these too had proper perspective. So it was finally ‘full 3D’ as we know it today (there were plenty of other full 3d games before that, mainly flight sims or racing games, but they generally did not use textures).
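
      To illustrate the ‘scaled sprites’ idea with a small sketch in C (all names and parameters are made up for illustration, not taken from any particular engine): the object is just a flat image, scaled by the inverse of its distance to the camera and always drawn facing the viewer, so it never shows a side or a back.

      #include <math.h>

      /* A 2.5D 'object': a position on the map plus a flat image (sprite), no 3D model. */
      typedef struct { float x, y; int sprite_w, sprite_h; } SpriteObject;

      /* Project a sprite for a camera at (cam_x, cam_y) looking along cam_angle.
       * The sprite is scaled by 1/depth and always faces the viewer.
       * Returns 0 if the object is behind the camera. */
      int project_sprite(const SpriteObject *obj,
                         float cam_x, float cam_y, float cam_angle,
                         int screen_w, float fov_scale,
                         int *out_x, int *out_w, int *out_h)
      {
          float dx = obj->x - cam_x;
          float dy = obj->y - cam_y;
          float depth = dx * cosf(cam_angle) + dy * sinf(cam_angle);   /* distance along the view */
          if (depth <= 0.1f) return 0;
          float side = -dx * sinf(cam_angle) + dy * cosf(cam_angle);   /* lateral offset */

          *out_x = (int)(screen_w / 2 + fov_scale * side / depth);     /* screen position */
          *out_w = (int)(fov_scale * obj->sprite_w / depth);           /* size scales with 1/depth */
          *out_h = (int)(fov_scale * obj->sprite_h / depth);
          return 1;
      }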

      • snemarch says:

        Duke Nukem 3D’s BUILD engine was kinda special – iirc it did allow a limited amount of looking up/down (but yes, with funky perspective distortion, probably why it was only a limited amount), and it supported a single level of “room above other room” – which was somewhat of an engine hack, and horrible to deal with in the editor – but it supported it nevertheless :)

      • Scali says:

        Most of these old engines have been rewritten over the years, with OpenGL and/or D3D renderers.
        I played Duke Nukem 3D not too long ago, with full 3D acceleration (with proper perspective, yay), high-res textures, texture filtering and all that :)
        Same with Wolf3D and Doom I/II.

      • wfw311 says:

        Of course Diamond used nVidia chips, starting with the Diamond Edge 3D (NV1) and continuing with the Viper V330 (Riva 128), Viper V550 (TNT), Viper V770 (TNT2).
        Then they merged with S3 which was the beginning of their problems.

      • Scali says:

        I’d say the NV1 was the beginning of their problems then :)

    • k1net1cs says:

      Ah, Diamond Multimedia.
      I still remember the Monster 3D add-on card I had back then.
      I got it to supplant my Stealth3D 2000 card, IIRC.
      Quite funny with the external pass-through setup now that I’m remembering it again. =b

      • Scali says:

        Yea, I had quite a few Diamond cards myself, in the old days. Then I switched to Matrox. Never had a VooDoo card myself. My first 3d accelerator was a Matrox Mystique. Never had the Matrox m3D, but I do have a Videologic Apocalypse 3Dx (the m3D is basically a rebrand of that). I put together some code to render a donut on that recently: http://www.youtube.com/watch?v=1BWbuUg8yvA
        As far as I know, that is the only add-on 3D card that did not require an external pass-through cable. A very elegant solution.

        Also had a G200 and G450. Then I discovered just how great those nVidia GeForce cards really were, so I got a GeForce2 GTS. Legendary stuff.

      • k1net1cs says:

        I can’t really remember whether I had some other cards after that Monster 3D, but I still remember my GeForce 256; still have the retail packaging box stored somewhere.
        I skipped GeForce 2, and went on to get GeForce 3; the vanilla, 1st generation one, not the Ti.
        Then I skipped 4 and FX 5xxx, settled on a 6600, skipped 7xxx, got an 8800GT, went AMD for awhile with a 5770 (still being used on my C2D E6550@3.01GHz system, with an 80GB PATA HDD, heh), then settled on a 560Ti (non-448, paired with an i5-2500K@4.4GHz).

        As for the laptop-side, I only had two; both are ATi/AMD.
        The first one is a Vaio with a Mobility 9200 32MB, and the second one is another Vaio with Mobility 5650.

      • Scali says:

        I started with Plantronics CGA. Then two Paradise VGA cards (ISA), then a Diamond SpeedStar Pro VLB card (dang that was fast, Cirrus Logic chip)…
        I wasn’t very loyal to nVidia… after the GeForce2 I went for a Radeon 8500, and then a 9600XT. Had to give up the 9600XT card because I upgraded my motherboard, and needed a PCI-e card instead of AGP. The 9600XT still had some life left in it. Brought me back to nVidia though, as I got a 7600GT to hold me over until the 8800 was released. Then I got an 8800GTS320. The rest is documented on this blog: 8800GTS died, got a Radeon 5770, then a GTX460, which I’m still using. Normally I’d probably upgrade again around this time (a 660Ti looks nice), but I have more pressing things to invest in at this point.
        Funny enough I started using the 7600GT again for work a few days ago (had to repair the fan too). It’s a single-slot card, and it does not require external power, which has its advantages. It’s fast enough to power two outputs with our software. I put a crappy video up, with a machine running our software on the 7600GT: http://www.youtube.com/watch?v=UhmU0TC4X2w

        Most videocards still work, and I actually have various boxes in working order, which I still play with from time to time.
        The 9600XT is in an old Athlon box, which I use to test our software on. The Radeon 8500 is in a Pentium II… I actually managed to get our software running on that as well (had to fix some code to make it SM1.x-compatible again, and had to manually copy D3D runtime files over because the newer installers fail if you don’t have SSE). The Apocalypse 3Dx is in my old Pentium Pro box, as you could see in the video I posted earlier (the 2D card is a Matrox Millennium btw). And the SpeedStar Pro is in my 486DX2-80, which I used to capture the Crystal Dream demo.

  2. MacOS9 says:

    Scali, what’s a good amount of Video RAM these days if someone’s interested in running an intensive flight sim like Rise of Flight (one of my favorites)? I see on the Mac side that some of the top-models carry 1GB GPUs from AMD, like the 27inch iMac, although I’ve also seen some 3GB video cards already floating around in the market, on the PC side. I’m thinking that a custom-built PC is the way to go for something like Rise of Flight?? And by the way, is the amount of video ram on the GPU more important than the type and speed of CPU, for an intensive flight sim? (I’m thinking that a Core2Duo won’t cut it, even with the fanciest GPU.)

    • Scali says:

      That’s not an easy question. On the one hand, it depends on what you do with it. On the other hand, sometimes the difference in memory is not the only difference between two cards. Eg, the GTX460 with 768 MB had a smaller bus than the GTX460 with 1 GB. So the 1 GB model was always faster, even if you never needed more than 768 MB anyway.
      There have also been various cards where the card with more memory had slower memory, so it was generally slower, except in situations where the other card would run out of memory.

      The memory requirements depend a lot on what settings you use in the game. Higher quality settings for textures, shadows and post-processing will require more memory, as will high resolutions and the use of higher levels of AA.
      Indirectly, this means that high-end cards will need more memory than slower cards. The slower cards won’t be able to run at the very highest settings anyway, so you’ll use lower settings, requiring less memory.

      Having said that, I have seen very few games that need more than 1 GB at all, even on the highest settings (even Crysis 2 seems to be fine with a 1 GB card at 1080p with 4xAA and all detail at maximum). And quite a lot of videocards have 1 GB these days.
      There are some high-end cards with 1.5 GB or 2 GB, and that should be fine. I think those 3 GB ones are a bit overkill… unless perhaps you are going for some kind of super-extreme setup with multiple videocards in SLI/CrossFire, and perhaps some surround-gaming setup.

      GPU-wise, flight simulators aren’t all that demanding usually. You just have a few planes and a lot of sky, and some ground with not a lot of detail. CPU-wise, I wouldn’t really know, I haven’t played a flight sim in decades. It all depends on how accurately things are being simulated, one would expect. Then again, we already had very good flight simulators 20 years ago, with games like Falcon 3.0, running on a simple 386 or 486.
      Apparently they recommend a Core2 Quad or better here: http://riseofflight.com/en
      Which I assume implies that it actually makes use of multiple cores properly. Because if it can only make use of 1 or 2 cores efficiently, a Core2 Duo is just as fast as a Core2 Quad. And even if it can’t, a Core2 Duo may compensate somewhat with higher clockspeeds, so they could have recommended eg a Core2 Duo at 3+ GHz.
      Apparently they also recommend 1 GB videomemory, and as expected, the actual GPU recommendation is relatively low, with GTX260 or HD5850.

  3. MacOS9 says:

    As always, thanks for the thorough info.: will most likely pick up a 1 or 1.5GB GPU then if I decide to run Rise of Flight since I’m on a Core2Duo at 3.3GHz; will also tinker with it in WINE first, curious to see if it will run in it at all.
