I wasn’t originally even going to waste time on these developments… but over the past few weeks I’ve been noticing a lot of confusion on the net, so I will just give my two cents here.
SteamOS
Well, yet another linux distribution… As John Carmack already said, linux is not the right OS for video games. He would know: he has published a few games for linux in the past, and uses linux for the flight computers at Armadillo Aerospace.
He is not so much talking about technical issues; as he says, it is just a poor business case, mainly because of the low market share. He proposes an emulation layer such as Wine instead, so that Windows games can run as-is. Which I suppose he is saying because Windows is a consistent platform (you know exactly what libraries and APIs to expect), whereas linux is not.
The fact that Valve feels the need to release their own linux distribution seems to mirror that notion. This way Valve can at least make sure that there is one linux distribution that comes with the right libraries, APIs, drivers and whatnot, so that games will run out-of-the-box as they do on Windows. As I said earlier, if they stuck only to third-party linux distributions, they would run the risk that OS updates would introduce incompatibilities with certain games, or even with Steam itself. But the question is: are they supporting linux this way, or just SteamOS?
Google is doing something similar with Android. They use the linux kernel, but other than that, the Android platform is pretty much a parallel universe to the GNU/linux platform. It is quite possible that SteamOS will take a similar route, where it will be difficult, or even impossible, to get Steam games running on other linux-based OSes, because they are just too different.
Things could also go the other way: what if Steam support became a priority for other linux distributions? An organization such as Canonical, the company behind Ubuntu, has more manpower and experience available when it comes to tweaking and fine-tuning a linux distribution. So what if Ubuntu actually ran Steam games better than SteamOS itself?
It will be interesting to see where this is going. The biggest problem however is simply: how do you get gamers interested in running SteamOS (or any other linux) over Windows? Valve is going to launch a SteamBox, but judging by the hardware, it is not going to be cheap. It is not going to be in the price range of other consoles. And it will only be able to play Steam games that have been ported to linux, and there are not a whole lot of those. Yes, they want to stream Windows games from another box, but what’s the point in that? You’d need their expensive console AND another decent gaming PC in order to run the Windows games. Might as well just install Windows on the SteamBox itself then. There are just so many more Windows games that I don’t see any way they can make SteamOS or the SteamBox an attractive deal.
Mantle
As for Mantle, AMD’s vapourware graphics API… Let’s dispel some nasty rumours first… Has Intel expressed interest in supporting Mantle? No, they have not. Andrew Lauritzen, an Intel employee, has expressed personal interest in the API on his personal twitter account. Which is not surprising. I mean, I would like to have a look at the Mantle API as well. That does not mean that I am going to use it; I will decide that once I’ve seen it. However, the same tweet also said that AMD did not provide him with any information. People just pulled that small tweet completely out of context and turned it into Intel officially stating that they were going to support Mantle.
Is it even likely that Intel is going to do that? I don’t think it is. The main redeeming value of Mantle is that it allegedly reduces the CPU overhead. Why would a CPU company like Intel be interested in that? I can see why AMD wants/needs it: their CPUs are much slower than Intel’s. But for Intel, the more CPU overhead, the better, really.
Is Mantle an open API? That depends on how you look at it. CUDA is an open API in the sense that nVidia will allow other vendors to implement support. Nobody ever did, however, since CUDA is obviously designed for nVidia’s architectures, and supporting a completely different architecture would prove quite inefficient. Not to mention that you are at nVidia’s mercy when it comes to API updates and extensions. You never know what nVidia will think of next.
Mantle is a very similar story. The API is tied quite closely to the hardware. So closely that it does not even run on all of AMD’s own DX11-class GPUs, only on their Graphics Core Next architecture. As AMD’s Chris Hook himself said in an interview:
So Mantle is not an open standard? Let’s say some IHVs [Independent Hardware Vendors] want to write a Mantle backend/driver, what would be the requirements for them?
Chris Hook, Head of PR – There aren’t many companies of course… Because of GCN they don’t have Mantle capable hardware today…
So, according to AMD, the competition does not even have Mantle-capable hardware in the first place. So they couldn’t support Mantle if they wanted to. They would have to make a GCN-like architecture first, and then they could implement Mantle. That is not likely to happen. So even though they say “it’s not AMD’s CUDA”, it very much is.
Is Mantle used on consoles? Again, unlikely. Microsoft and Sony already supply their own APIs, which already have the advantage of being a minimal abstraction layer over specific hardware, and thus minimal CPU overhead. They also already expose the full functionality of the GPU, because only one GPU has to be supported.
Besides, as Carmack points out, if Mantle indeed boosts performance on PC hardware, it would make them more competitive with their own consoles, and why would they want that?
Does Mantle make API calls 9x as fast? If it sounds too good to be true, it usually is. What AMD probably meant to say is that the calls are up to 9x as fast. This may well be true, in some isolated cases. However, in practice, there is a lot more to it.
Not all draw calls are equal, so I doubt that they can claim the same 9x figure on every single draw call. Aside from that, what does it really mean to have faster draw calls? That depends on a lot of things. Even if you were to reduce the CPU overhead, the GPU won’t magically get faster. So you will just run into GPU-limited territory sooner (it would basically just mean that the faster draw calls make the CPU wait longer on the GPU or on other CPU threads). So it all depends on just how much of a factor the CPU overhead is in the overall performance. I personally expect Mantle to be something like 10-15% faster than Direct3D on a high-end system in practice. We will see once the Battlefield 4 patch with Mantle support arrives.
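To put some rough numbers on that, here is a back-of-the-envelope Amdahl’s law sketch (the submission-time fractions are made-up, purely for illustration): even if every call really became 9x as fast, the overall gain is bounded by the share of frame time spent submitting calls in the first place.

    #include <cstdio>

    int main()
    {
        // AMD's claimed per-call speedup, taken at face value.
        const double callSpeedup = 9.0;
        // Hypothetical fractions of frame time spent on draw-call submission.
        for (double p : { 0.05, 0.15, 0.30 })
        {
            // Amdahl's law: only the submission part gets faster.
            double overall = 1.0 / ((1.0 - p) + p / callSpeedup);
            std::printf("%2.0f%% submission time -> %4.1f%% faster overall\n",
                        p * 100.0, (overall - 1.0) * 100.0);
        }
        return 0;
    }

Even a generous 30% of frame time spent on submission yields only about 36% overall, and more realistic fractions land in the single digits to low teens.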
And that is giving them the benefit of the doubt. For years we’ve had people claiming that OpenGL calls had less overhead than Direct3D as well. Yes, that may well be true, but it did not result in OpenGL having better performance than Direct3D. The same can be said for Direct3D 10+ vs Direct3D 9. The Direct3D 10+ API works in a much more efficient way, theoretically, since you can update large blocks of state information with just a single call. Since my engine supports both Direct3D 9 and 10+, I have done some testing on that. I came to the conclusion that although my test scene needed 83 calls in Direct3D 9 and only 11 calls in Direct3D 10+, the Direct3D 9 version actually ran at a fractionally higher framerate (and that is on Windows 7, where Direct3D 9 runs slightly slower than on Windows XP anyway).
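To illustrate what that difference in API design looks like (a minimal sketch, not my actual engine code; the state-object variables are hypothetical): in Direct3D 9 every render state is a separate call on every state change, whereas Direct3D 10+ bakes whole blocks of state into immutable objects at load time, which are then bound with a single call each.

    #include <d3d9.h>
    #include <d3d10.h>

    // Direct3D 9: one call per render state, repeated on every state change.
    void SetAlphaBlendingD3D9(IDirect3DDevice9* dev)
    {
        dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
        dev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
        dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
        dev->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
        dev->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
    }

    // Direct3D 10: the same state, baked into objects created once at load
    // time (via CreateBlendState/CreateDepthStencilState) and bound in bulk.
    void SetAlphaBlendingD3D10(ID3D10Device* dev,
                               ID3D10BlendState* alphaBlend,
                               ID3D10DepthStencilState* depthNoWrite)
    {
        const FLOAT blendFactor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
        dev->OMSetBlendState(alphaBlend, blendFactor, 0xffffffff);
        dev->OMSetDepthStencilState(depthNoWrite, 0);
    }

On paper the Direct3D 10 path clearly wins, yet as the framerates show, the driver apparently makes the Direct3D 9 path just as cheap in practice.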
So draw calls, call overhead and all that… It is not as simple as just counting the number of calls, or just trying to measure the overhead of a single call. Direct3D 9 may be less efficient than Direct3D 10+ on paper, but apparently a lot of optimization has gone into the drivers over the years, so all that theoretical overhead just isn’t an issue in practice.
So, let’s just wait and see if this Mantle is really as good as AMD claims it is. I am skeptical… firstly, because it is yet another vendor-specific API, which is not a lot of incentive to support it (especially since even for older AMD hardware you still need a Direct3D or OpenGL fallback path). Secondly, because I don’t expect the performance gains to be all that dramatic in practice. Thirdly, it appears that AMD and DICE are just making the Mantle API up as they go along, so it is still very much vapourware at this point (I suppose the Mantle project is what Richard Huddy was getting at a few years ago, since repi aka Johan Andersson of DICE was the one to back him up on that. So their solution to making the API go away is to introduce Mantle… an API? And I was right in saying we need something like Direct3D, since although Battlefield 4 will support Mantle, it also includes a Direct3D renderer for compatibility with a larger range of hardware). And lastly… if it is indeed tied so closely to the hardware, what will happen with future architectures from AMD? Look at the PowerSGL API, for example: the first few generations of PowerVR hardware supported the API, but the newer Kyro architecture did not, so you could no longer run any PowerVR-specific code on them, only Direct3D and OpenGL. Mantle may suffer a similar fate.
Update:
Apparently AMD got a bit ahead of itself… Microsoft has officially announced that Mantle is NOT available on Xbox One:
The Xbox One graphics API is “Direct3D 11.x” and the Xbox One hardware provides a superset of Direct3D 11.2 functionality. Other graphics APIs such as OpenGL and AMD’s Mantle are not available on Xbox One.
There are also doubts about the availability of Mantle on the PlayStation 4:
While not confirmed, it’s also likely Mantle is not compatible with the PlayStation 4. AMD’s Ritchie Corpus, director of ISV relations, told TechSpot at the GPU14 Tech Day event that “the current Mantle as we launched it was entirely developed on PC”, and although he didn’t confirm whether a version or subset of Mantle was implemented on consoles, it appears the API is specific to PC.
Really like this blog. All I’ve been seeing lately is “Mantle Mantle Mantle!” and “SteamOS is going to dethrone Windows!”
Good to see a voice of reason.
Does seem more like a biased voice than a voice of reason to me 🙂
Please do explain why.
Otherwise, yours is also just another biased voice.
“Bias” can be claimed to go in both directions, you know. If you’re going to make such an accusation can you at least back it up? There’s plenty of skepticism about both of the subjects of this post coming from those who actually know what they’re talking about, but an expression of skepticism implies a willingness to be proven wrong – which is far more credible than unquestioning belief.
In response to the criticisms about SteamOS: there still is a prevailing acceptance of the idea that we must know the answers before the questions arise, or that we must avoid wasting time on mistakes. Life tells me that mistakes are the only way to progress through life, and their avoidance only leads to stagnation and decay.
About time this was said.
I recall seeing a Carmack estimate from back-in-the-day that a Glide version of Quake would get maybe 10% extra performance, which matches nicely with your estimate for Mantle.
For the most part draw calls are a solved problem and have been for years. You batch, you instance, you use texture atlases or arrays to help manage (part of) your state, and the problem goes away (a sketch of the instancing approach follows below). Most games are going to bottleneck elsewhere in the pipeline – and it’s no good trumpeting 9x draw calls if you’re still bottlenecked on pixel shaders or ROPs (which IME is where AMD hardware tends to bottleneck the most – the irony is not lost on me).
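For example, a minimal D3D11 instancing sketch (buffer names and vertex layouts here are made up for illustration): one draw call submits every instance of a mesh, with per-instance data fed from a second vertex buffer.

    #include <d3d11.h>

    // Hypothetical setup: meshVB/meshIB hold the mesh, instanceVB holds one
    // 4x4 world matrix per instance for the vertex shader to read.
    void DrawInstanced(ID3D11DeviceContext* ctx,
                       ID3D11Buffer* meshVB, ID3D11Buffer* meshIB,
                       ID3D11Buffer* instanceVB,
                       UINT indexCount, UINT instanceCount)
    {
        ID3D11Buffer* vbs[2]     = { meshVB, instanceVB };
        UINT          strides[2] = { sizeof(float) * 8,    // pos + normal + uv
                                     sizeof(float) * 16 }; // 4x4 world matrix
        UINT          offsets[2] = { 0, 0 };
        ctx->IASetVertexBuffers(0, 2, vbs, strides, offsets);
        ctx->IASetIndexBuffer(meshIB, DXGI_FORMAT_R32_UINT, 0);
        // One call draws all instances; the input layout marks slot 1 as
        // D3D11_INPUT_PER_INSTANCE_DATA so each instance gets its own matrix.
        ctx->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);
    }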
There’s also the obvious tradeoff of making your own program’s code significantly more complex, from having to do work yourself that the API would otherwise have handled automatically for you. In fairness we don’t yet know for certain if this is going to be the case with Mantle, but it seems at least a reasonable bet that it will be – and that added complexity will make it harder to actually achieve that theoretical extra perf.
So, to pull some figures out of my ass: say 4x the work and 2x the code complexity, in exchange for maybe 15% extra perf, for just over 30% of your target market (significantly less, actually, given the GCN requirement, but let’s be generous and assume that everyone upgrades). It doesn’t even address what is (again, IME) the primary bottleneck on that hardware anyway, and it offers no guarantee that it’s going to remain valid for future hardware… hmmm… is that the aroma of fresh rodent wafting up my nostrils?
Indeed. CPUs in gaming PCs are MUCH faster than those in the PS4 and Xbox One. And obviously sending commands to a GPU is very much a single-threaded affair, so it’s the single-core performance that counts here, not the fact that these consoles will have 8 cores.
You generally don’t get into CPU-limited situations with a decent midrange Core i7 CPU, even with the fastest GPUs. A decent Core i7 will work fine with dual or triple SLI/CrossFire setups. So the CPU really isn’t where you should be looking for big performance gains.
Well, at the very least you will be looking at dual API support, because you don’t want to lock out all the people who do not own an AMD with GCN architecture. So it’s always going to be more complex, even if Mantle itself is no more complex than Direct3D/OpenGL.
So yes, at this point it is looking like Mantle will give a modest performance bonus to people with compatible hardware, in a handful of games where the developers bother to add support. Other than that, probably nothing will change. Games will still support Direct3D, nVidia and Intel are not likely to support Mantle, and in a few years AMD will probably abandon it because it’s not worth the effort.
If Mantle provides a performance gain of 15% then Microsoft and Sony have nothing to fear. 15% is not a game changer relative to consoles.
For Microsoft it is also important that games use Direct3D, so they can be run on all their platforms, on all hardware configurations. Microsoft officially announced that there will be no OpenGL or Mantle support on Xbox One, see the update above.
Consoles in general already use low-level APIs; Mantle is just another low-level API, for the PC. Mantle = bringing “console environment/performance to the PC”, as AMD said.
Yes, that’s what they say *now*, but initially AMD led people to believe that the PS4 and XB1 would use Mantle.
So I pointed out that MS/Sony neither need nor want Mantle on their consoles (although the Xbox uses a D3D-ish API, it is a low-level variation, which already gives you the benefits that Mantle would), and it will be PC-only. Which turned out to be correct, as we now know. Mantle is PC-only and GCN-only, and therefore not all that compelling for developers to support. Having one API for both consoles and Windows, even if GCN-only, would be much more interesting.
Agree with Maxx Kilbride. Good blog and finally someone who isn’t blinded by AMD and Linux.
On all forums, all I see is “Mantle is coming, get an AMD card” or “AMD cards and CPUs are the best bang for the buck”, etc… Just look at this thread: a bunch of AMD fanboys… Every time someone asks for advice on cards or CPUs, they all start talking about AMD products…
https://linustechtips.com/main/topic/64903-budget-gaming-cpu/#entry895414
Just a few things. I don’t think Mantle is like CUDA (that one is higher-level and would likely be easier to implement on non-NVidia hardware). Mantle is more similar to NVAPI. (But finding that one is quite hard – it is not advertised at all. I wonder why… 😉 )
Second: DirectX 9 and DirectX 11 are massively different (I ported my simple code over and it was more like a rewrite), so the comparison is already hard. I guess the resources of the driver teams are insufficient, and thus they try to move the cost onto devs. (And their claim is on par with Valve’s claim about OpenGL performance.)
BTW: There were simpler times, when drivers didn’t have to do much… Fixed-function pipelines. 😀
But Valve’s claims are bogus? http://rootgamer.com/2737/rootgamer/windows-linux-counter-strikesource-benchmark
Right, although I didn’t know somebody had actually tested it; I shouldn’t be surprised though. (We do love measuring, don’t we?)
NVIDIA’s NVAPI and AMD’s AGS Library are different things. They are not complete alternatives to DX or OpenGL; they just extend the existing APIs.
That depends on what the differences are between the public and NDA versions of NVAPI, but you might be right; the closest match in that case is the CUDA driver API: http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#programming-interface
anandtech recently commented that mantle may very well be the xbox one’s graphics API ported onto the PC with some modifications. as such, it is entirely correct for microsoft to say that mantle isn’t supported on the xbox one, because it is not. the IS-A relationship only goes in one direction here. what will be interesting is how much compatibility exists between the xbox one’s graphics API and mantle. it might even be the case (totally guessing here) that any code written for the xbox one will automatically be supported by mantle out of the box.
or in other words, perhaps anything written for the xbox one will automatically support mantle, or can support mantle with very little effort on the developer’s part.
as for hardware architecture, keep in mind that GCN is AMD’s first truly new architecture in a very long time. previous generations of cards have GPUs with lineages that trace all the way back to the venerable radeon 9700pro. so clearly, AMD’s GPU architectures are expected to last a long time. combine that with both nextgen consoles using GCN GPUs, and the fact that the new consoles are expected to last up to eight years, and it stands to reason that all of AMD’s GPUs will be some kind of GCN derivative for the foreseeable future, all of which can support mantle.
If Mantle is the low-level Xbox API, then why would they need DICE to help design and develop Mantle? Then they could just clone Microsoft’s work (which would probably lead to lawsuits anyway).
Besides, why would Microsoft deny that it is the low-level API?
No, unlikely story.
Also, you are sorely mistaken that AMD’s hardware goes back to the 9700Pro.
The Radeon 2900 was a clean break from the 9700Pro-based architecture already. Not sure how you can forget that; the transition to DX10 with unified shaders was quite a big step.
The newer architectures were built on the 2900 architecture… However, since the 2900 was not very successful, there has been quite a bit of redesign on it over the years.
I am mostly surprised at how much people WANT to believe all these things about Mantle.
AMD had a hand in developing the xbox API as well. after all, it’s their hardware. DX11.x appears to be DX11.2 with low-level optimizations added; what’s to stop mantle from being exactly the same? after all, you can’t copyright an API, so it’s unlikely lawsuits will result.
i’m not an expert by any means, but what i do know is that radeon cards from the 9700PRO all the way to the HD6970 implemented something called VLIW (very long instruction word). it was very good for rendering pixels on screen, but as GPGPU took off, it quickly became apparent that VLIW is a real bitch to program for. GCN, on the other hand, transitioned to SIMD, similar to what nvidia did with the 8800GTX.
and my point still stands: the xbox 360 has a GPU that shares many similarities with the HD2900 (it comes with unified shaders, though it is obviously less powerful). that basic architecture stuck around for a very long time. the xbox one and PS4 both have GCN-based GPUs, and we can expect this architecture to stick around as long as the new consoles stay relevant, which can be years. the notion that mantle will be dropped two years down the line due to architecture changes is unfounded.
as for me WANTING to believe these things about mantle: i’m cautiously optimistic, but i have no vested interest in it succeeding or otherwise. if it is successful, great. maybe my next card will be a radeon. if it isn’t successful, who cares? i’ll still end up picking my next card based on price/performance/special-needs anyway.
As I already said: Why would they need DICE to develop the API then? Why is the API still not documented? If it indeed is the DX11.x low-level API, then why didn’t AMD just say so, and why didn’t they just show us the low-level DX11.x API documentation?
I think it’s highly unlikely that they are exactly the same. They are probably similar, since both use the HLSL shader language, and both run on the GCN architecture… and both render DX11-class graphics… so how different can they be, really? But exactly the same? No, I very much doubt that.
VLIW is just a certain approach to encoding instructions. It is just one aspect of a CPU or GPU architecture. Just because the 9700Pro and the HD 2000 through HD 6000 series cards both used VLIW doesn’t mean they use the same architecture.
What point would that be? You originally referenced the 9700Pro, which is a different, much older architecture than the Xbox 360 GPU and the 2900.
So if your point was about how long architectures last… well, not as long as you thought, apparently. Besides, note that GCN was already released in January 2012, so it is close to 2 years old already.
Sure, the consoles may have a shelf-life of 6 years or more… but Mantle is only for Windows. I don’t see GCN sticking around for another 6 years. Especially not if nVidia can help it. You could say that nVidia forced AMD’s hand with CUDA/GPGPU. AMD *had* to come up with GCN to stay in the race with nVidia. Either nVidia or AMD (or even Intel) will likely come up with another revolution in GPU design before that time, and a matching DX-update will follow, rendering Mantle obsolete.
If I were to hazard a guess, I would say that programmable rasterizing may be the next step: https://mediatech.aalto.fi/~samuli/publications/laine2011hpg_paper.pdf
Well, it’s quite obvious from here that you are NOT an expert, yet you are trying to make technical arguments (and are failing to do so because of your limited understanding) to support, nay, defend Mantle. The only reason why you would want to do that is because you desperately want to believe the Mantle marketing blurb.
you mean, that’s the only reason you can come up with. trying to cast me as some kind of fanboy doesn’t exactly make you sound any more convincing (which btw, you’re convincing enough as is). i’ve already said what my stance regarding it is; it’s up to you if you want to believe it or not.
most of what i posted basically mirrored what anandtech (anand lal shimpi) posted a while back. if you have issues with it, you can take it up with him. i’m sure you can have a much better debate with him than with me. i’m just the messenger, so don’t shoot, so to speak.
The thing is, Anandtech published that *a while back*, when AMD’s marketing dept was trying to convince everyone that Mantle was a single API that could be used on Windows, Xbox One and PS4 at the same time. So Anandtech, and many other sites were led to believe that this is the case.
The point of this blog was to point out that this is NOT the case. Which Microsoft had made a lot easier by saying the same thing in their blog.
I’m sure Anandtech has a slightly different stance on Mantle now that Microsoft has officially announced that Mantle is not supported on Xbox.
In fact, in this more recent article, they are saying more or less the same as what I just said: http://anandtech.com/show/7431/amd-blog-post-the-four-core-principles-of-amds-mantle
Apparently they no longer think it *is* the low-level API of the Xbox One, but they do expect it to be similar.
AMD still can’t do tessellation, GCN is PoS garbage.
Tessmark only uses one tessellator on AMD GPUs.
The new Hawaii GPU is a tessellation monster in my own tests. If I go smaller than 4 pixels per triangle, then the R9 290X is almost 4 times faster than the Titan.
Excuse me, but this sounds like nonsense. Tessmark is just a standard OpenGL application, using the standard tessellation API. Why would it make any difference?
What exactly are these ‘own tests’, and can we verify them ourselves?
I think you misunderstood me, or maybe I wrote it down in the wrong sentence. There is nothing wrong with Tessmark. AMD uses an OpenGL driver that only works with one geometry pipeline, even if there are more. The DirectX driver is different; it is capable of accessing the whole hardware.
I’m working on an unannounced game, and I’m just playing with tessellation and testing it on the top cards. I can’t give you the application.
Yes, if that is what you meant, then you clearly phrased it wrong. You should have written “OpenGL only uses one tessellator on AMD GPUs.”
Having said that, I still think it is nonsense.
Firstly, if AMD can do it in DirectX, then why not in OpenGL? The two APIs are nearly identical in terms of tessellation.
Secondly, judging from the architecture, it seems impossible to use fewer. It’s not about the tessellator (that is just a small fixed-function unit in the pipeline), it is about the geometry engine (which the tessellator is part of).
Now, if I recall correctly, the original DX11 hardware from AMD had only one geometry engine and rasterizer, which had to feed all pixel pipelines. This became a bottleneck for obvious reasons.
AMD later cut the architecture in half, so to speak, and had two sets of geometry engines and rasterizers, each feeding half of the pipelines.
With Hawaii, they did the same trick yet again, so you now have 4 clusters of geometry engine/rasterizer/pipelines.
Which means that if they cannot use all tessellators, then they also cannot use all geometry engines and all rasterizers, which would mean your entire card would only work at a quarter of its full potential. Which is extremely unlikely.
So unless you come up with a plausible explanation, I am inclined to believe you are wrong.
nVidia has obviously actually SOLVED the problem: its PolyMorph engine has a set of 15 geometry engines which can be dynamically allocated.
They don’t update the OpenGL driver because there is no game that uses tessellation in that API. They concentrate their resources on DirectX. Sure, at some point there will be an update that helps tessellation performance in OpenGL, but not for Tessmark; they don’t care about it.
I said *plausible* explanation. What you’re saying right now is already debunked by my earlier reply.
IIRC tessellation under DirectX still isn’t that great either. I’ll see if I can dig up that particular test.
Scali, how well do Intel HD 4000 graphics hold up in somewhat older games, meaning games from the late 90s to about 2007 or 2008, that at best use DirectX 9 (examples: Red Baron 3D, the Prince of Persia games like Sands of Time, and some of the older CODs like Call of Duty 4)? I realize that a dedicated graphics chip is always better, but I’m looking for a “good enough” solution… maybe the HD 4000 is better than some of the older dedicated graphics cards with about 256MB that were available in laptops and desktops just a few years ago? Thanks in advance for any feedback.
The CODs run very well, but POP: Sands of Time stutters and has some image quality bugs.
Your experience will depend on the driver. Intel only optimizes for the top games, so there are some apps where the hardware isn’t utilized well. AMD and NVIDIA have far better software support for some – mostly older – titles.
Thank you sir for the reply. I’m not too worried about Red Baron 3D since it’s isolated in WineSkin and uses a Glide wrapper – so I assume it will be fine no matter what the GPU. Good to know about the CODs. Hmm, worried about the POP series though. I will see if I can test the POPs on an Intel HD 4000. On my old ATI/AMD Radeon 46xx, POP works fine in WineSkin. Perhaps running it in WineSkin will solve some of the problems of software support or lack thereof. COD 4 I will play natively (Mac version).
AMD’S total defeat in tessellation is complete.
Pingback: Independent Mantle benchmarks start to trickle in | Scali's OpenBlog™
People and AAA companies still don’t get it!!!! I use Windows just to be able to play my games; if my games could be played on Fedora 20 or Linux Mint 16, for example, I would leave my Windows OS asap on my gaming computer. All my other computers run some kind of Linux distro.
What exactly don’t they get? People who run linux on their desktop computers are still a small minority. Most people run Windows, and don’t even want to move to linux in the first place, so why should games be playable on linux?
So you’re already playing those games on Windows, which means you’re already buying them anyway. Why should they care then? They are not gaining a new customer in that case, because you are already a customer without them having to invest in a port. The investment needs to make business sense. I don’t see what it is that they “don’t get”.
Pingback: Richard Huddy back at AMD, talks more Mantle… | Scali's OpenBlog™
Pingback: Dahakon, or: cowardly people who think they know me | Scali's OpenBlog™