nVidia makes good on their promise: DX12 support for Fermi

You might remember that in the months leading up to Mantle, DX12 and Vulkan, I mentioned that all of nVidia’s cards from Fermi and up would support DX12. This was also officially confirmed by nVidia on this page, and also here. However, you can see the small print there already:

Fermi will receive DX12 support later this year (expected around the first wave of DX12 content).

And indeed, nVidia’s initial release of DX12 drivers had support for all GPUs except Fermi. However, the promised Fermi drivers never appeared later that year.

nVidia later made a statement that they would not support Vulkan on Fermi. People extrapolated from this that the elusive DX12 drivers for Fermi would never materialize either.

But nVidia has silently made good on their promise. I still have an old Core2 Duo machine around, with my old GTX460 in it. I put Windows 10 on it to have another DX12 test box, and I ran into exactly this problem: no DX12 drivers.

However, I just upgraded it to the Windows 10 Fall Creators Update, and while I was at it, I also installed the latest GeForce drivers, namely 388.00. And lo and behold:

[Screenshot: GTX460_DX12]

There it is! Direct3D DDI version 12! And driver model WDDM 2.3! These are fully up-to-date drivers, exposing the DX12 driver interface to applications. I don’t know how long this has been in nVidia drivers, but it can’t be more than a few driver releases since I last checked (previous drivers reported only DDI 11).
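
For anyone who wants to verify this from code rather than from a tool like dxdiag, here is a minimal sketch (purely illustrative, not the exact test I ran; link against d3d12.lib and dxgi.lib): simply try to create a D3D12 device on the first adapter and see whether the driver accepts it. Feature level 11_0 is the minimum that D3D12 accepts, which is what a DX11-class part like Fermi reports.

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    // Illustrative check: does the installed driver expose D3D12 on the first adapter?
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    if (factory->EnumAdapters1(0, &adapter) == DXGI_ERROR_NOT_FOUND)
        return 1;

    DXGI_ADAPTER_DESC1 desc;
    adapter->GetDesc1(&desc);
    wprintf(L"Adapter: %s\n", desc.Description);

    // Feature level 11_0 is the D3D12 minimum; Fermi is a DX11-class GPU.
    ComPtr<ID3D12Device> device;
    HRESULT hr = D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                   IID_PPV_ARGS(&device));
    wprintf(SUCCEEDED(hr) ? L"D3D12 hardware device created\n"
                          : L"No D3D12 support exposed by this driver\n");
    return SUCCEEDED(hr) ? 0 : 1;
}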

If I were to hazard a guess, I would think that the 384.76 drivers were the first. Previous release notes say this for DirectX 12 support:

DirectX 12 (Windows 10, for Kepler, Maxwell, and Pascal GPUs)

But now there is no mention of specific GPUs anymore, implying that Fermi is also supported.

Of course I wanted to make absolutely sure, so I ran one of the DirectX 12 samples on it. And indeed, it works fine (and it’s not running the WARP software rasterizer: the samples mention that in the title when they use WARP, and this one was compiled with WARP set to ‘false’):

[Screenshot: GTX460_DX12Sample]
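
As an aside: how do you know a sample is not quietly falling back to WARP? Because WARP is never substituted automatically; an application has to ask for it explicitly via IDXGIFactory4::EnumWarpAdapter. Roughly like this (a simplified sketch of the pattern the DX12 samples use, not their exact code; the useWarp flag stands in for the samples’ WARP switch):

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Create either a WARP (software) device or a hardware device.
// 'useWarp' is an illustrative name standing in for the samples' WARP switch.
ComPtr<ID3D12Device> CreateDevice(IDXGIFactory4* factory, bool useWarp)
{
    ComPtr<ID3D12Device> device;

    if (useWarp)
    {
        // The software rasterizer must be requested explicitly.
        ComPtr<IDXGIAdapter> warpAdapter;
        factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter));
        D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_11_0,
                          IID_PPV_ARGS(&device));
    }
    else
    {
        // Hardware path: take the first adapter that accepts a D3D12 device.
        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0;
             factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        {
            if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                            D3D_FEATURE_LEVEL_11_0,
                                            IID_PPV_ARGS(&device))))
                break;
        }
    }
    return device;
}

So with WARP compiled to ‘false’ and no WARP marker in the title, it really is the GTX460 doing the rendering.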

I also tried the only DX12 demoscene production I know of so far, Farbrausch’s FR-087: Snake Charmer. This also works:

[Screenshot: GTX460_SnakeCharmer]

So there we have it, DX12 on Fermi is finally a thing! Kudos to nVidia for delivering on their promise at last.

Update: Apparently this was already discovered on Guru3D, which confirms the 384.76 driver release as the first: http://www.guru3d.com/news-story/nvidia-fermi-cards-get-d3d-12-support.html

I found that link while looking at the Wikipedia page for nVidia GPUs, which someone had already updated to show DirectX 12 support back in July.

 

This entry was posted in Direct3D, Hardware news, Software development, Software news. Bookmark the permalink.

17 Responses to nVidia makes good on their promise: DX12 support for Fermi

  1. Redneckerz says:

    Not going to lie, that kind of support is impressive. Not entirely sure if these old cards get a performance advantage in pure DX12 titles, but the fact that Nvidia still supports that generation of cards is a great thing.

    Completely unrelated to this, I recently found out about a YT channel that tests old cards with new(ish) games. The Radeon HD2900 XT came by (sadly no Paladins test), but also one of your old faves, the 8800 GTX.

    It really amazed me how that more than 10-year-old card is still pulling off playable framerates, in the case of Paladins even at 1080p60. Granted, that game is DX9-based and focused on low-end PCs, but still…

    Yeah, it’s me, that argumentative little guy from time to time, but this time I felt a post like this should be here 🙂

    • Scali says:

      What makes the 8800 so interesting is that its architecture was the first ‘modern’ GPU architecture: it moved to a scalar SIMD approach, which all GPUs still use today. You could say that it quite literally turned GPU design around 90 degrees (its scalar approach means that shaders are run ‘vertically’, where earlier approaches would try ‘horizontal’ parallelism by processing 4d vectors or even 5d vectors in parallel in a single instruction, rather than just 1d scalars).
      It also introduced GPGPU as we know it today (this is the oldest card to support OpenCL and DirectCompute, both APIs being released years after the card, much like how Fermi is the oldest card to support DX12).
      You could argue that modern GPUs are like an 8800 with some extra features and tweaks for better efficiency. The 8800 was way ahead of its time.

      • Redneckerz says:

        In terms of ”modern” compatibility, the G80 was indeed the first. When it comes to a unified shader architecture (I know, you aren’t discussing this), ATI’s Xenos GPU was by far the first one. It does not have much in common with modern GPUs today when it comes to stream processors, but for 2005 hardware it was very top end. Of course, in the PC space the G80 and its successors were a big leap.

        Even today an 8800 GTX can still run most modern-ish titles, sans DX11. Heck, Paladins runs on it. Which means even the console-like HP Firebird (dual 9800S GPUs) is still able to run modern titles at an acceptable pace, even today. It just goes to show how incredibly forward-thinking the Tesla architecture was.

        Its longevity in games, providing acceptable resolutions and frame rates, is still unmatched; only the Radeon HD5770/5870/5970 series is slowly getting there.

      • Scali says:

        Not sure why you have to bring up unified shaders though. They are more of a logical evolution… as in, early cards had ‘advanced’ shader units for vertex processing, and ‘simple’ shader units for pixelshading. As technology evolved, the gap between the two was closed, so pixelshaders were as ‘advanced’ as vertex shaders, ergo you could re-use the same execution units for both tasks.

        It is nothing like the 8800, which was actually revolutionary by completely turning around how shaders were compiled and executed, as I said above.
        Sure, there was also a certain tradeoff involved… That is, VLIW-style execution units were simpler than the SIMD-style ones, so you could pack more of them in the same die space, which could compensate for the lower efficiency. That is also part of the reason why the 8800 aged so well: as shaders got more advanced (not just processing 3d or 4d XYZW or RGBA vectors, but also 1d and 2d vectors), they would be less efficient on VLIW-style GPUs, but SIMD ones were not affected. So even in the latest games that run on the 8800, the GPU usage is very efficient. The 8800GTX has a lot of raw power, and you are actually using nearly all of it. Other cards of the era may have even more raw power on paper (such as the 2900XT), but they lack the efficiency to even match the 8800 in practice.
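
        To put a rough number on that, here is a toy model (purely illustrative, not based on any real shader or GPU): count how many lanes of a 4-wide VLIW unit do useful work for a mix of 1d–4d operations, versus a scalar SIMD unit that fills every lane with work from other pixels/threads.

        #include <cstdio>
        #include <vector>

        int main()
        {
            // Component counts of the operations in a hypothetical shader:
            // mostly 1d/2d scalar math, with some 3d/4d vector math mixed in.
            std::vector<int> opWidths = { 1, 1, 2, 1, 3, 1, 2, 4, 1, 2 };

            int vliwInstructions = 0, usefulLanes = 0;
            for (int w : opWidths)
            {
                vliwInstructions += 1;  // one VLIW instruction per op, 4 lanes reserved
                usefulLanes      += w;  // only 'w' of those lanes do real work
            }

            double vliwUtil   = 100.0 * usefulLanes / (vliwInstructions * 4);
            double scalarUtil = 100.0; // scalar SIMD: idle lanes are filled by other threads

            printf("4-wide VLIW utilization: %.1f%%\n", vliwUtil);
            printf("Scalar SIMD utilization: %.1f%%\n", scalarUtil);
            return 0;
        }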

        There is a certain elegance to it, which I love as an engineer.

  2. dealwithit says:

    It’s would be nice to AMD to support their DX11 cards too.

    • dealwithit says:

      *It, damn typo

    • Scali says:

      Yea, that was my point all along with the whole AMD Mantle/DX12 claims… If AMD was right about there not being a DX12 back in 2013, and AMD is right about Mantle being the reason that MS developed DX12… Then how is it possible that there are NV cards going way back to early 2010 that support DX12, and NV cards have been supporting DX12_1 since Win10/DX12 was launched, yet AMD hardware doesn’t go back further than 2012, and AMD didn’t support DX12_1 until Vega? To me it seems that in both cases, AMD is about 2 years behind NV in terms of technology (not to mention that although Vega finally has feature-parity with Maxwell v2, the performance-per-watt is still nowhere close to NV’s latest GPUs, whose architecture is already 1.5 years old and probably due for a refresh sooner rather than later).
      Heck, even Intel has supported DX12_1 since Skylake.
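
      (As an aside, you can query this yourself: a quick sketch, assuming you already have an ID3D12Device, which asks the driver for the maximum supported feature level via CheckFeatureSupport.)

      #include <d3d12.h>
      #include <cstdio>

      // Illustrative: report the highest feature level an existing D3D12 device supports.
      void PrintMaxFeatureLevel(ID3D12Device* device)
      {
          const D3D_FEATURE_LEVEL requested[] =
          {
              D3D_FEATURE_LEVEL_12_1, // e.g. Maxwell v2, Skylake, Vega
              D3D_FEATURE_LEVEL_12_0,
              D3D_FEATURE_LEVEL_11_1,
              D3D_FEATURE_LEVEL_11_0, // e.g. Fermi
          };

          D3D12_FEATURE_DATA_FEATURE_LEVELS info = {};
          info.NumFeatureLevels        = static_cast<UINT>(sizeof(requested) / sizeof(requested[0]));
          info.pFeatureLevelsRequested = requested;

          if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                                    &info, sizeof(info))))
          {
              printf("Max supported feature level: 0x%x\n",
                     (unsigned)info.MaxSupportedFeatureLevel);
          }
      }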

      Which brings me back to this old article: https://scalibq.wordpress.com/2011/06/21/amd-follows-nvidias-lead-in-gpu-design/
      AMD quite obviously abandoned their old approach to GPU design, and adopted an architecture very close to NV’s Fermi. That same Fermi that is basically the lowest common denominator for DX12.

      • Redneckerz says:

        ”Then how is it possible that there are NV cards going way back to early 2010 that support DX12, and NV cards have been supporting DX12_1 since Win10/DX12 was launched, yet AMD hardware doesn’t go back further than 2012, and AMD didn’t support DX12_1 until Vega?”

        Because the GCN series was introduced in 2012. Unlike Nvidia, AMD seems to have decided to support one universal architecture that ”evolves” with every generation, like GCN, whereas Nvidia has decided to support multiple different microarchitectures, like Fermi/Kepler/Maxwell/Pascal.

        Whilst the Radeon HD5000 series supports DX11, its underlying universal microarchitecture (Terascale) has its roots in the Radeon HD4000 series, as Terascale 1. Radeon HD5000 was Terascale 2. When you think of it, it’s not really reasonable for AMD to take a microarchitecture that is DX10 at its core (which just got a DX11-capable upgrade) and upgrade that to DX12. The core architecture is simply too old. That’s why all Terascale hardware is out, as its root architecture is ”just” DX10.

        It would be like giving the 8800 GTX DX12 support.

        In this case, Nvidia’s decision for separate architectures actually benefits them, as they have multiple DX11-compatible architectures, starting with Fermi. That’s why they can support DX12 with a card as old as Fermi, and AMD can’t with a card like the HD5000 series. Fermi’s roots lie in DX11, whereas Terascale’s do not.

        That isn’t to say that either one has made a bad decision in regards to compatibility. Where Nvidia can provide longer support for their hardware, due to its non-universal nature it’s also more time-consuming for them to support multiple architectures. This is where AMD has an advantage – it just has one architecture spanning multiple generations.

        So each approach has its pluses and its minuses.

      • Scali says:

        Interesting theory… but not plausible in the least.
        Firstly, as I just said above about the 8800, its architecture is basically the first ‘modern’ GPU, and all current GPU architectures are still very similar.
        The fact that NV chooses unique codenames for each new generation does in no way imply that they are completely different internally. If you study the internals you will clearly see evolutionary steps, very similar to what AMD has been doing.

        Secondly, it helps if you’re a developer… Then you would know that DX11’s roots lay in DX10. DX11 is only a small incremental update to the DX10 API. It adds only a handful of new features, such as programmable tessellation, compute shaders and deferred contexts. Some of these new features could even be backported to DX10 hardware (all DX10 hardware and even the later DX9 hardware is supported by the DX11 API).
        So the argument that a GPU has to be designed for DX11 is even more flawed than the argument that a GPU has to be designed for DX12.
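
        You can see this directly in the API. A minimal sketch (illustrative only) of creating a D3D11 device with a list of downlevel feature levels; feature levels are exactly the mechanism by which DX10-class and even DX9-class hardware is driven through the DX11 API:

        #include <d3d11.h>
        #include <cstdio>

        int main()
        {
            // Ask for the best feature level the hardware supports, down to DX9-class.
            const D3D_FEATURE_LEVEL levels[] =
            {
                D3D_FEATURE_LEVEL_11_0, // 'real' DX11 hardware (e.g. Fermi, HD5000)
                D3D_FEATURE_LEVEL_10_1,
                D3D_FEATURE_LEVEL_10_0, // DX10 hardware (e.g. G80/Tesla)
                D3D_FEATURE_LEVEL_9_3,  // later DX9 hardware
                D3D_FEATURE_LEVEL_9_1,
            };

            ID3D11Device* device = nullptr;
            D3D_FEATURE_LEVEL obtained = {};
            HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                           levels, sizeof(levels) / sizeof(levels[0]),
                                           D3D11_SDK_VERSION, &device, &obtained, nullptr);
            if (SUCCEEDED(hr))
            {
                printf("D3D11 device created at feature level 0x%x\n", (unsigned)obtained);
                device->Release();
            }
            return SUCCEEDED(hr) ? 0 : 1;
        }

        The device you get back is the same ID3D11Device interface regardless of which feature level the hardware ends up at.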

        The real reason is probably two-fold:
        1) Since TeraScale is a completely different architecture, it would require a completely different driver. AMD does not see any use in investing any resources in that (and that is assuming that TeraScale has no physical limitations that would prevent a DX12 implementation).
        2) As you could see in the videos on the YT channel you linked above, TeraScale did not quite age as well as NV’s older architectures such as Tesla and Fermi. So even if AMD did add DX12 support for their older cards, it probably wouldn’t be very useful in practice. Even for Fermi the use is debatable.

      • Redneckerz says:

        EDIT:
        Terascale 1 wasn’t HD4000, but even the HD2000 series. This further strengthens my point that its underlying architecture simply was too old to upgrade to DX12.

  3. Redneckerz says:

    ”Not sure why you have to bring up unified shaders though. They are more of a logical evolution… as in, early cards had ‘advanced’ shader units for vertex processing, and ‘simple’ shader units for pixelshading. As technology evolved, the gap between the two was closed, so pixelshaders were as ‘advanced’ as vertex shaders, ergo you could re-use the same execution units for both tasks.”

    Just wanted to point out that Xenos was the first to use a model that has since become common ground, with the G80 providing the definitive take on it on PC.

    ”Other cards of the era may have even more raw power on paper (such as the 2900XT), but they lack the efficiency to even match the 8800 in practice.”

    Always liked the 2900 XT’s GDDR4 and its silly amount of bandwidth. F2F tested that card as well, and it too provides reasonable and playable framerates in modern-ish games today. Which is interesting, since that card was much less supported in games than the 8800 series.

    ”Interesting theory… but not plausible in the least.
    Firstly, as I just said above about the 8800, its architecture is basically the first ‘modern’ GPU, and all current GPU architectures are still very similar.
    The fact that NV chooses unique codenames for each new generation does in no way imply that they are completely different internally. If you study the internals you will clearly see evolutionary steps, very similar to what AMD has been doing.”

    Just saying that AMD has a more universal approach to its architectures than Nvidia does. Then again Nvidia can support each architecture accordingly.

    ”Secondly, it helps if you’re a developer… Then you would know that DX11’s roots lay in DX10. DX11 is only a small incremental update to the DX10 API. It adds only a handful of new features, such as programmable tessellation, compute shaders and deferred contexts. Some of these new features could even be backported to DX10 hardware (all DX10 hardware and even the later DX9 hardware is supported by the DX11 API).”

    It kind of is obvious, isn’t it?

    ”The real reason is probably two-fold:
    1) Since TeraScale is a completely different architecture, it would require a completely different driver. AMD does not see any use in investing any resources in that (and that is assuming that TeraScale has no physical limitations that would prevent a DX12 implementation).”

    This is basically what I was arguing anyway. Terascale is simply too old and too different to even have something like DX12, since Terascale 2 was just DX11 bolted on top of it.

    ”2) As you could see in the videos on the YT channel you linked above, TeraScale did not quite age as well as NV’s older architectures such as Tesla and Fermi. ”

    It is intriguing, however, that the 5770/5870 cards are still listed as minimum requirements for most games. For 2009 hardware, it’s fairly similar to the longevity of the 2006 8800 series.

    • Scali says:

      Just saying that AMD has a more universal approach to its architectures than Nvidia does.

      And I just pointed out that this isn’t true. But as usual, you ignore anything anyone says, and just repeat the same thing over and over again.

      It kind of is obvious, isn’t it?

      Why do you say that? You just argued the opposite thing one post above.

      This is basically what I was arguing anyway. Terascale is simply too old and too different to even have something like DX12, since Terascale 2 was just DX11 bolted on top of it.

      Then you agree with my original statement that NV was a few years ahead of AMD in terms of architecture with Fermi. NV was already in ‘DX12 territory’, where AMD required 2 more years and a major architecture overhaul.

      It is intriguing, however, that the 5770/5870 cards are still listed as minimum requirements for most games.

      Why is that intriguing? They have to pick some lower bound; apparently those cards are it.
      If you look at the NV cards, you’ll probably have to conclude that the NV architectures from that era aged better over time (which ironically is an argument that people love to make about AMD cards). NV’s move to SIMD with the 8800 was a revolutionary step. AMD didn’t take that step until GCN, many years later.

  4. Fermi says:

    https://www.youtube.com/user/Face2FaceHardware/videos?disable_polymer=1

    Terascale 2 cards have aged extremely poorly, as shown by the videos on the above YouTube channel that test them.

    https://www.anandtech.com/show/9815/amd-moves-pre-gcn-gpus-to-legacy

    Terascale 2 users have severe buyer’s remorse, being abandoned by AMD when it comes to driver support.

    https://www.geforce.com/whats-new/articles/star-wars-battlefront-ii-game-ready-driver

    Meanwhile Fermi has been getting OpenGL 4.6 support and WDDM 2.3 support with the latest 387/388 drivers, though I don’t expect much more, because Nvidia will probably move it off to a legacy driver branch, just like Tesla, after 7/8 years.

    https://www.anandtech.com/show/7857/nvidia-announces-legacy-support-plans-for-d3d10-generation-gpus

  5. Fermi says:

    http://nvidia.custhelp.com/app/answers/detail/a_id/4654
    “Effective April 2018, Game Ready Driver upgrades, including performance enhancements, new features, and bug fixes, will be available only on Kepler, Maxwell, and Pascal series GPUs. Critical security updates will be available on Fermi series GPUs through January 2019.”

    RIP Fermi, still better supported than inferior Terascale garbage.
