There has been quite a bit of speculation on which API and/or which vendor was first… I will just list a number of facts, and then everyone can decide for themselves.
- Microsoft’s first demonstrations of working DX12 software (3DMark and a Forza demo, Forza being a port from the AMD-powered Xbox One), were running on an nVidia GeForce Titan card, not AMD (despite the Xbox One connection and the low-level API work done there).
- For these two applications to be ported to DX12, the API and drivers had to have been reasonably stable for a few months before the demonstration. Turn 10, developers of Forza, claimed that the port to DX12 was done in about 4 man-months.
- nVidia has been working on lowering CPU-overhead with things like bindless resources in OpenGL since 2009 at least.
- AMD has yet to reveal the Mantle API to the general public. Currently only insiders know exactly what the API looks like. So far AMD has only given a rough global overview in some presentations, which were released only a few months ago. And actual beta drivers have only been around since January 30th. Microsoft/nVidia could only have copied its design through corporate espionage and/or reverse engineering in an unrealistically short timeframe.
- AMD was a part of all DX12 development, and was intimately familiar with the API details and requirements.
- DX12 will be supported on all DX11 hardware from nVidia, from Fermi and up. DX12 will only be supported on GCN-based hardware from AMD.
- The GCN-architecture marked a remarkable change of direction for AMD, moving their architecture much closer to nVidia’s Fermi.
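The bindless-resources point above can be illustrated with a toy model. This is Python, not real OpenGL, and the function names are made up for illustration; the idea, in the spirit of extensions like GL_NV_bindless_texture, is that slot-based binding costs the driver one call per texture per draw, while bindless handles live in buffer memory and only the draw call itself crosses into the driver.

```python
# Toy model (NOT real OpenGL) of why bindless resources cut CPU overhead.

def driver_calls_slot_based(textures_per_draw, num_draws):
    """Classic slot binding: bind each texture, then issue the draw."""
    return num_draws * (textures_per_draw + 1)

def driver_calls_bindless(textures_per_draw, num_draws):
    """Bindless handles: the shader reads 64-bit handles directly,
    so only the draw call itself enters the driver."""
    return num_draws * 1

calls_bound = driver_calls_slot_based(4, 1000)     # 5000 driver calls
calls_bindless = driver_calls_bindless(4, 1000)    # 1000 driver calls
```

In this sketch, 1000 draws with 4 textures each drop from 5000 driver entries to 1000, which is the kind of CPU-overhead reduction these extensions were after.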
Update: This article at Tech Report also gives some background on DirectX 12 and Mantle development: http://techreport.com/review/26239/a-closer-look-at-directx-12
– Twitter @AMDGaming: We’re proud to have a live @Microsoft’s DirectX 12 demo run on current GCN hardware at our booth in the South Hall! #AMDGDC
: only Forza 5 was confirmed running on a Titan Black
– http://www.hardware-360.com/directx-12-requires-new-graphics-cards-to-utilize-all-features/
DX12 requires a new card; only DX11 plus lowered CPU overhead works on DX11 cards
DX12 does not require a new card. There are optional features which will only work on yet unreleased DX12 hardware.
There is no “DX11+lowering CPU overhead”, it’s the new DX12 API (+ driver model?).
It has been like that for all versions of DX, except for DX10. See also here: https://scalibq.wordpress.com/2012/12/07/direct3d-versioning-and-compatibility/
Also, the tweet you mention seems to be talking about AMD’s own booth, and was not part of the Microsoft presentation of DX12.
If AMD was a part of all DX12 development, and was intimately familiar with the API details and requirements, then why did they create Mantle? If they knew DX12 was going in the direction they want (i.e. lowering the CPU overhead, which is their main weakness), it makes no sense to create a proprietary API. I’m sure AMD knows the fate of 3dfx Glide.
They are; the Tech Report article I link to says as much.
They would also be familiar with the fact that DX12 wouldn’t be ready until late 2015. Which means they’d have a 2-year gap between the release of the XBox One and PS4, and DX12, to market their own ‘DX12-lite’, and position themselves as the ‘saviours’ of graphics APIs.
I think that also explains why AMD started the rumour that there won’t be a DX12: to prevent developers from waiting for DX12, and persuade them to use Mantle instead. Because, let’s face it… If you are a developer, and AMD tells you there won’t be a DX12… you know that AMD works with Microsoft… would you doubt AMD’s words, and contact Microsoft? I think most developers wouldn’t expect AMD to manipulate them like this.
Here’s what I think: One explanation is that they needed a stopgap solution until late 2015, but weren’t allowed to release the current draft spec of the next DirectX currently in development, because it’s owned by Microsoft, so they invented a new name and made no public statements about it having anything in common with the next DirectX at all for legal reasons (due to NDAs, etc.). Which might also explain why the Mantle API isn’t public yet.
I think you’re also underestimating game developers’ intelligence. Maybe you’re confusing them with the fanboys we usually meet in Internet forums 🙂 If I were a game developer and AMD came to me and said: “Hey, why don’t you start developing for this new proprietary API we just invented. Don’t worry, it’s here to stay. It won’t suffer the fate of Glide. Of course, NVIDIA and Intel will never support it, but who cares about them anyway?”, I would simply laugh. But if they came and said “for legal reasons we’re not allowed to tell you this is probably going to be close to what the next DirectX will look like and you can reap the performance benefits from it now on AMD platforms, and then, two years from now, when DX12 is released, your 3D engine will need only minor changes to support DX12.”, then that’s much more likely to convince me. Of course, I’ll still be calling it Mantle in public statements. 🙂
Well, this is what AMD says:
I think people are too quick to say Mantle == DX12. I don’t think it is, and apparently AMD even says that explicitly. I think we should interpret that statement as: Mantle is more hardware-specific, and is tied more closely to GCN (which AMD also stated earlier… claiming competitors did not have GCN-compatible hardware, so they could not support Mantle at this time… but apparently they CAN support DX12).
I guess the only thing we know for sure is that Mantle gives you the improved resource handling and better command buffer manipulation that will be in DX12, today.
Is that enough to convince developers? I don’t think it is. It seems that all developers using Mantle are in the ‘Gaming Evolved’ program, meaning they get paid by AMD to use Mantle. I don’t know of anyone who picked Mantle purely on its merits.
And as nVidia is trying to demonstrate, there is still life left in D3D11. Perhaps nVidia has been telling developers: “Don’t bother with Mantle. We’ve got some tricks up our sleeve to make the D3D11 drivers perform close enough to Mantle to hold you over until DX12 arrives.”
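The “better command buffer manipulation” mentioned above can be sketched with a toy model. This is Python, not the real Mantle or D3D12 API, and the class and method names are invented for illustration; the core idea is that commands are recorded and validated once, and the resulting buffer can then be submitted many times with almost no per-submission CPU work.

```python
# Toy sketch (NOT real Mantle/D3D12) of the command-buffer concept:
# record once, validate once, submit cheaply many times.

class CommandBuffer:
    def __init__(self):
        self.commands = []
        self.validated = False

    def record(self, *commands):
        self.commands.extend(commands)

    def close(self):
        # The expensive validation happens here, once, at record time...
        self.validated = True

class Queue:
    def __init__(self):
        self.executed = []

    def submit(self, cmdbuf):
        # ...so submission does no per-command validation.
        assert cmdbuf.validated
        self.executed.extend(cmdbuf.commands)

cb = CommandBuffer()
cb.record("set_pipeline", "bind_resources", "draw")
cb.close()

queue = Queue()
for _ in range(3):          # replay the same buffer across three frames
    queue.submit(cb)
# queue.executed now holds 9 commands from a single recording
```

The contrast with classic D3D11 is that there the driver re-validates state on every draw call, which is exactly the CPU overhead these new APIs (and nVidia’s driver tricks) try to eliminate.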
I don’t think I do, actually… I pointed out several things early on in this blog, which were contrary to claims of game developers. Such as the claims that there would not be a DX12, or that Mantle would be supported by all GPU manufacturers, or that Mantle would also be used on XBox One and PS4.
“… why did they create Mantle?”
Because they were asked to create a low-level API. That’s all. Johan Andersson wanted it, and they brought it to him. They knew that Frostbite 3 would support it, and that means nearly 20 games for Mantle. The question was: is it worth it? They said yes, but I think they hoped for more support, and they were right. Now Nitrous, Asura and CryEngine also support Mantle. So I think AMD sees Mantle as a success. For the future they have their own API, which can be upgraded with new features. That’s especially useful if they want to design special hardware.
I doubt it is that simple. As I said earlier:
nVidia already offered many extensions to reduce CPU overhead and have better control over resources and such.
I think a lot of it has to do with the fact that it’s nVidia, not AMD… And Andersson is in the Gaming Evolved program, not The Way It’s Meant To Be Played.
I think it’s a failure. AMD has always had an under-staffed driver team anyway. Their D3D11-drivers are quite poor compared to nVidia’s, and they are almost a year behind on OpenGL support. The focus on Mantle does not seem to be a good thing for overall graphics support… And those few games that may run Mantle won’t be able to compensate for the poor performance in all those D3D and OpenGL games.
Besides, even Mantle games currently don’t run well. Thief is often slower in Mantle mode than in D3D mode. I think AMD bit off more than they can chew.
OpenGL has many ARB and EXT extensions for this. Every vendor supports these. But sometimes a “reset” is more useful.
I think they don’t care how people feel about Mantle. They made it, and developers can get and use it. But modern OGL and D3D12 are good for them too. Mostly D3D12, because the optimizations for the consoles can be ported to PC.
The only thing not good for them is GameWorks, because these effects can’t be modified by the developers.
But D3D has done a ‘reset’ every few years… we don’t need AMD for that. Besides, why didn’t Andersson use OpenGL before Mantle was around? Most of his arguments go for OpenGL as well, yet he chose D3D. Doesn’t make sense. Andersson can not be trusted, and neither can AMD.
No they can’t, except for a handful of selected developers in the Gaming Evolved program. So far AMD has only said that the SDK will come “next year, or possibly later this year”. Which would mean that the Mantle SDK won’t be available for general use before the DX12 SDK is out.
“… we don’t need AMD for that.”
Who is “we”? I, for one, need it, and I will use it before the next D3D reset.
“why didn’t Andersson use OpenGL before Mantle was around?”
I don’t know. Maybe he didn’t want to rewrite the existing HLSL code in GLSL. This is why I don’t use OpenGL.
“Andersson can not be trusted …”
Why? Because he thinks differently? He chose Mantle, that’s all. Feel free to choose anything else.
The Mantle beta program starts this month. If you want it you can get it. Contact Nick Thibieroz. If your project is good, you can earn access to the API.
The industry? Everyone involved with D3D one way or another?
The OpenGL extensions that do what he wants have been around for years. So he could have started out with OGL right away, so there would not be any existing HLSL code in the first place.
So the question is: since nVidia already offered what he wanted with OGL extensions, why did he ignore that and use D3D11 instead? And then why does he use Mantle, if not just because AMD pays a lot of money for it with Gaming Evolved? It can’t be all the reasons he mentions, because then it doesn’t make sense why he chose D3D11 instead of OGL.
No, because of a lack of logic and consistency in his choices and statements.
No thanks, the marketshare of GCN-hardware is far too small. In fact, I don’t even own a GCN-card myself. And the gains on the systems I target are far too small.
Or put it this way. According to the current Steam hardware survey, and discounting Intel and others, NV hits ~63% of the target audience, AMD hits ~37%. So the choice is between vendor-specific API stuff that hits 63% or vendor-specific API stuff that hits 37%.
Obviously the rational person is going to say “neither”, but if you feel that you absolutely *must* choose one, does it make sense to go for the “37% of target audience” option?
That’s why people are calling shenanigans on repi. Now, I don’t doubt that he’s a *good* programmer, and he knows his stuff, but it’s difficult to make sense of his recent actions without ascribing them to some form of ulterior motive. What that ulterior motive *is*, exactly, is of course up for grabs.
It’s worse than that, I’m afraid. AMD has 37% of all DX11-videocards. Only about 12% of those are based on GCN and thus are Mantle-capable.
As opposed to the 63% of DX11 cards from nVidia which are ALL capable of DX11 (including nVidia’s ongoing driver optimizations), DX12, and all the latest OpenGL extensions.
I don’t think we’re just talking about Johan Andersson. He is not the only one who supports Mantle. There are others: Dan Baker, Chris Kingsley, Paul Houx, Forrest Stephan, and more.
The market share (from Steam or JPR or another source) is not the only consideration when choosing a side. NV is now an extremely mobile-focused company. They have a very logical strategy for the future, but many developers simply don’t want GameWorks; they want better hardware with console-class programmability. This is what AMD provides them.
Problem with those lists is that these people are ALL in the Gaming Evolved program, so AMD is paying them to support Mantle.
Independent developers do not support Mantle.
Oh please, what the heck is that even supposed to mean? nVidia still has the fastest GPUs on the market, the largest marketshare on the PC gaming platform, and their roadmap indicates continued support for discrete videocards and future APIs.
nVidia gave developers low-level access and low CPU-overhead years before Mantle, via OpenGL extensions.
Besides, that ‘console class programmability’ is way overhyped. *Some* developers would want it, but they can wait for DX12. Except for a few, who have been paid off by AMD. Who may look like the biggest fools out there if nVidia manages to get their D3D11 drivers to match Mantle performance. After which DX12 will make Mantle nothing but a bad memory.
If anyone thinks that Mantle is about AMD giving developers what they want, they’re only fooling themselves. Mantle is about AMD giving developers what AMD wants, which is a proprietary API with vendor lock-in.
For all AMD’s big talk last year, Mantle remains proprietary. Where are the public specs? Where are the headers and libs? Where is the programming guide? Where are the other vendor’s implementations? Where is the community doing Mantle tech demos and Mantle ports of Quake 3? They don’t exist.
The way things are going, if there ever is a public release of Mantle (and that’s looking like a real big “if” right now) it will be in the D3D12 timeframe. And then developers will get to choose between a proprietary single-vendor API or an API that works on a much much broader range of hardware from multiple vendors. Guess which one they’ll pick? The one that enables them to write their code once and use it on any hardware, perhaps?
Don’t forget – AMD aren’t your friend. They’re in business to make money, just like the other hardware vendors are. If it turns out that they’ve bet the farm on Mantle, and if D3D12 delivers, they could well end up going under as a result of this.
http://www.geforce.com/drivers/results/74636
Mantle killing beta drivers on Monday.
lol hold your horses there.
I personally would wait until real-world results come out before deciding whether it’s really a Mantle-killer or not.
The NDA lift on benchmarks shouldn’t be too long, since the press is usually sent a copy of the drivers 1-2 weeks before the official release date.
http://videocardz.com/50183/nvidias-geforce-337-50-driver-launches-today-expect
http://us.download.nvidia.com/Windows/337.50/337.50-desktop-win8-win7-winvista-64bit-english-beta.exe
http://us.download.nvidia.com/Windows/337.50/337.50-notebook-win8-win7-64bit-international-beta.exe
Enjoy Mantle killing performance.
http://www.geforce.com/whats-new/articles/nvidia-geforce-337-50-beta-performance-driver
This kills the Mantle.
Apparently it can now handle the broken D3D code in StarSwarm efficiently: http://www.tomshardware.com/news/nvidia-geforce-337.50-driver-benchmarks,26473.html
So there’s the answer to StarSwarm’s D3D-mode: if a single driver update can gain THAT much, clearly something was VERY fishy in D3D-mode.
Look at the number of batches per second and compare it to the number of objects visible. For further fun I suggest using the RTS view… (apparently an unfixable mess from the driver’s point of view).
And besides the batches, it makes an absolutely insane number of useless calls, and furthermore its threading seems to be bad. (It appears to scale, but it only uses all 12 threads for about a minute in total out of an IIRC 6-minute run…)
Can I assume you haven’t seen this interview yet?
http://www.heise.de/newsticker/meldung/APU13-Der-Battlefield-Schoepfer-ueber-AMDs-3D-Schnittstelle-Mantle-2045398.html
Johan basically says he started working on Mantle (or at least its ideas) back in 2008 (5 years ago from 2013) and then took it to several vendors. AMD was the one that agreed to help.
And AMD has publicly gone on record saying they shared their work on Mantle with Microsoft from the beginning.
“Its ideas” obviously… Since a) Mantle is an AMD-specific API, so it wasn’t Mantle until Andersson and AMD agreed to work together, and b) Mantle is targeted at AMD’s GCN architecture, which didn’t even exist in 2008.
So they could not have worked on what we now know as Mantle before GCN was available, in late 2011. And 2011 was also around the time that Huddy started making claims about ‘making the API go away’: https://scalibq.wordpress.com/2011/03/20/richard-huddy-talks-nonsense-again/ (which is ironic, since Mantle is… an API, as I also pointed out in that blog, long before anything about Mantle was made public). And then AMD started making claims about no DX12.
In fact, in this old blog I actually quote Andersson making such claims: https://scalibq.wordpress.com/2012/05/03/richard-huddy-comments-on-my-blog/
Back in 2009, AMD/Huddy still thought DX11 was pretty awesome: https://web.archive.org/web/20130116105407/http://blogs.amd.com/play/2009/06/02/why-we-should-get-excited-about-directx-11/
Note also this interesting part:
Indeed, the same also goes for DX12.
Or this interesting part:
Exactly the thing AMD later criticizes DX11 for, offering Mantle as the answer. The real reason is that AMD can’t make it work in their drivers, while nVidia/Intel can.
And those “ideas” are far from new or unique. Consoles have always been programmed through a very thin abstraction layer, or even none at all. And there have also been many OpenGL extensions to give low-level control over hardware/driver features. So I think it’s quite a meaningless statement to make.
Aside from that, all the information is always coming from Andersson and/or AMD themselves. Nobody outside the Mantle project ever confirmed anything they claimed. I don’t put too much value in this kind of statement. “Sharing their work on Mantle” can be interpreted in a number of ways.
But what we know for sure is that AMD is way behind on DX12 (and on DX11: multithreading through multiple contexts simply doesn’t work in their drivers, and is just serialized) compared to nVidia, and Mantle is dead. Not sure why you even bother to reply to such an old article anymore, about a topic that is no longer relevant.