Or well, to be more exact, AMD will actually enter the ring and start its marketing campaign. nVidia will have to defend itself against AMD’s DirectX 11 hardware until it gets DX11 hardware of its own. It will quite literally be a marketing war at first: DirectX 11 isn’t officially out on Vista yet, Windows 7 hasn’t reached the consumer market at all, and there are no games that make use of DirectX 11 yet.
So initially, AMD will just have to try to score with improved image quality and performance in existing games, and sell DirectX 11 with technology previews. Regarding DirectX 11, there are some nasty details… Take this advertisement for the upcoming HD5870, for example:
They mention the following features: Tessellation, HDR compression, multithreading and DirectCompute.
While these are all new features in DirectX 11, not all of them require DirectX 11-level hardware. Unlike DirectX 10, DirectX 11 also supports DirectX 9 and DirectX 10 level hardware. All hardware can take advantage of the multithreading features of DirectX 11, and most DirectX 10 hardware (ironically, all nVidia hardware, but only the HD4000 series from AMD) can take advantage of DirectCompute (although DirectX 11-level hardware supports Shader Model 5, which also allows extra functionality in Compute Shaders). Only tessellation and HDR compression actually require dedicated hardware support.
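This is visible in the API itself: a DirectX 11 application passes a list of acceptable feature levels at device creation, and compute shader support on DX10-class parts is an optional capability that has to be queried. A minimal, Windows-only sketch (error handling trimmed, assumes the Windows SDK with d3d11.h and d3d11.lib):

```cpp
#include <d3d11.h>

// Create a Direct3D 11 device on whatever hardware is present, then check
// whether DirectCompute is available. On DX11-class hardware (feature level
// 11_0) compute shaders are mandatory; on DX10-class hardware cs_4_x support
// is optional and must be queried via CheckFeatureSupport.
bool SupportsDirectCompute()
{
    // DX11 accepts a list of feature levels, so the same API runs on
    // DX9-, DX10- and DX11-class hardware alike.
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3,
    };
    ID3D11Device* device = nullptr;
    D3D_FEATURE_LEVEL got;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                 0, levels, 4, D3D11_SDK_VERSION,
                                 &device, &got, nullptr)))
        return false;

    bool ok = false;
    if (got >= D3D_FEATURE_LEVEL_11_0) {
        ok = true;  // Shader Model 5 compute shaders are guaranteed here
    } else {
        // On DX10-class parts, cs_4_x compute shaders are an optional cap.
        D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
        device->CheckFeatureSupport(D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
                                    &opts, sizeof(opts));
        ok = opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x
             != FALSE;
    }
    device->Release();
    return ok;
}
```

This is exactly why nVidia’s entire DX10 line can report DirectCompute support while only AMD’s HD4000 series does: the capability bit is per-GPU, not per-API-version.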
nVidia has to try and communicate this to the consumer audience in some way, which is not going to be easy. Many people might expect that they need new hardware to take advantage of a new version of DirectX. They might even think that Windows 7 and DirectX 11 go together. In reality, nVidia’s drivers already support DirectX 11, including DirectCompute and multithreading, on both Vista and Windows 7.
nVidia still has an ace up its sleeve though: PhysX. Unlike DirectX 11, PhysX is already being used in current games. It delivers lots of eyecandy, so it’s not that hard to make some impressive demos. It is going to be harder to show off things like tessellation and HDR compression to the general public. HDR compression mainly improves performance… Tessellation can improve detail with less performance impact, but it’s not going to be that obvious, since techniques like per-pixel parallax mapping with self-shadowing already deliver an incredible amount of detail. In fact, people had trouble seeing the difference between Crysis in DirectX 9 and DirectX 10, even though Crysis used higher-detail meshes and more advanced shading in DirectX 10.
I don’t know how they did it, but AMD has already created a lot of buzz on forums and review sites. Many users there seem to be very pro-AMD and anti-PhysX… to the point where it becomes completely irrational. The future AMD cards and their performance are hyped like crazy, and you hear the most unbelievable horror stories about nVidia and how it will take them ages to come up with a DirectX 11 competitor. In fact, there’s even FUD going around about Intel and its Larrabee. Allegedly they have the chip ready, but it would draw so much power (“more than an entire PC”) that board partners aren’t interested in building cards based on it. Uh, right… and I suppose that also stopped board partners from building all those other cards that draw “more than an entire PC” over the past few years? I doubt that this single Intel chip will draw more than the current dual-GPU cards from nVidia and AMD, which approach 300W.
In fact, the AMD love isn’t limited to their videocards. The recent release of Intel’s Lynnfield platform was also met with a lot of hate and irrationality. People were complaining that it wasn’t fair that reviewers left the turbo mode of these CPUs enabled, since that was overclocking. Well, that depends on how you look at it. It is a stock feature that Intel has been using in its latest generation of CPUs. It simply clocks the CPU higher when not all cores are used (e.g., when running single-threaded software on a CPU with 4 physical cores). However, this system is designed so that it always stays within the specified power dissipation range. So it never runs the CPU out of spec, unlike conventional overclocking, and it won’t impact stability or compatibility either. It’s just a clever way of maximizing the performance of the CPU in certain scenarios. It’s a logical extension of the overheating protection (also pioneered by Intel and later copied by AMD), which would clock down (‘throttle’) the CPU when it detected excessive temperatures. Now that Intel has managed to make idle cores use nearly no power at all, and generate almost no heat, it’s only logical that the remaining cores can be clocked up when they are being stressed, since there is plenty of headroom.
It seems that the AMD fans are just very bitter. It’s humiliating enough that Intel’s CPUs are considerably faster while using nearly identical technology… but Intel didn’t stop there: they also added HyperThreading and this turbo mode, to make their CPUs perform even better. With videocards, they must be very bitter as well… nVidia was way earlier to market with DirectX 10 hardware than AMD, and AMD never quite managed to catch up. And then nVidia stole AMD’s thunder by buying Ageia and making PhysX work on their GPUs, while AMD had been touting GPU-accelerated physics since the Radeon X1800 series. To this day, AMD has never managed to come up with a final product. They did manage to make their fans excited about Havok though (which, ironically, is owned by Intel, their other big competitor, and one that is about to enter the GPU market as well). It’s amazing how these fans prefer vapourware like Havok over actual released games with PhysX. They just seem unable to give nVidia credit for actually putting the concept of GPU-accelerated physics into practice. Now they demand that nVidia’s cards not only accelerate PhysX, but also deliver 60+ fps at the highest possible resolutions and settings while doing so. Suddenly people pretend that videocards have always been able to run the latest games with all the eyecandy enabled at the highest settings, while obviously this has never been the case.
We’ll just have to see what happens. I would like the claims of the HD5870’s performance being nearly twice that of current high-end cards like the GTX285, GTX295 and HD4890 to be true, but if it sounds too good to be true, it usually is. I have no doubt that it’s going to be the fastest card on the market, but I just don’t think such a leap in performance is possible at this point in time. The stories about nVidia, meanwhile, seem too bad to be true. I certainly don’t expect nVidia to have an answer to AMD right away, but I don’t think they’ll be more than 6 months behind. They have known for a long time when DirectX 11 and Windows 7 would arrive on the market, and during that time it has mostly been smooth sailing for them business-wise, with their DirectX 10 line being very successful. So nVidia should have had plenty of time and resources to spend on this DirectX 11 generation. I expect to see nVidia’s DirectX 11 hardware somewhere at the end of the year, or perhaps early 2010. I also expect nVidia to outperform AMD, especially in the area of DirectCompute/OpenCL, where nVidia had a considerable lead over AMD in the DirectX 10 era with CUDA, which laid the groundwork for these new GPGPU APIs.

And where does Intel fit in with its Larrabee? I have absolutely no idea. I sincerely hope that Larrabee is going to be competitive, and I more or less expect it from Intel; they owe it to their status as the largest chip manufacturer in the world… But I also know that it’s going to be hard to get both the hardware and the software right the first time.
Going to be an interesting few months!