Independent Mantle benchmarks start to trickle in

Traditionally, I like to look at the benchmarks in Anandtech’s review. Their findings fit exactly with the prediction I made earlier: a high-end setup will likely see a 10-15% gain at best. The lower the graphics detail, the more Mantle is able to gain… But why would anyone run on less than Ultra settings with an R9 290X?

Although Direct3D is slightly slower, it still gets well over 60 fps on average, so those few extra FPS that Mantle gives you aren’t all that relevant. Note also that even underclocking the CPU to 2 GHz and disabling some cores does not affect performance. So you certainly don’t need a full-blown i7-4960X to get these framerates in Direct3D mode. It also seems that Mantle does not do all that much for multithreading, although this was one of the claims made by AMD/DICE initially.

So the first impression is that Mantle is just not that interesting. Its gains are mainly in unrealistic scenarios: running very low detail settings on a high-end rig, or combining a low-end CPU with a top-of-the-line GPU.

I don’t think these gains are much of an incentive for most developers to start supporting Mantle in their games. Also, these gains will only get smaller over time. CPUs keep getting faster, so the low end will continue to move up as well. And Direct3D and OpenGL will also continue to reduce CPU overhead in future versions. Which also means that there is little incentive for Intel and nVidia to support Mantle themselves.

After all the hype, these results are a bit of a letdown. Mantle does not look like it’s going to be the revolution it was hyped to be. It looks like a short-term solution to a problem that is disappearing anyway.

Update: At Tech Report, they actually bothered to test an nVidia card as well. It turned out to be a lot less CPU-limited in Direct3D than the AMD cards:

One thing we didn’t expect to see was Nvidia’s Direct3D driver performing so much better than AMD’s. We don’t often test different GPU brands in CPU-constrained scenarios, but perhaps we should. Looks like Nvidia has done quite a bit of work polishing its D3D driver for low CPU overhead.


55 Responses to Independent Mantle benchmarks start to trickle in

  1. T. says:

    But given that AMD is chasing the low-end gaming market (nobody serious about gaming can consider an AMD-only setup), Mantle is a success. It will make the piss-poor Kaveri APU actually useful for playing Mantle games. Not sustainable in the long term, for the reasons you pointed out, but a good stop-gap solution.

    Plus it will force Microsoft to further evolve DX.

    • Scali says:

      I don’t think Microsoft needed any motivation from the likes of AMD/Mantle to further evolve DX.

      • Klimax says:

        Well, Microsoft was so ‘forced’ to improve DirectX that they already did most of this stuff in 2009… (i.e. they went back in time :D)
        It’s just that a particular GPU vendor sucks at writing drivers.

  2. Klimax says:

    Frankly, Driver Command Lists plus proper usage of instancing would solve most of these problems.
    BTW: I have compiled and uploaded an updated sample for DirectX showcasing multithreading:
    http://sdrv.ms/1diFjT7
    Includes both source code and binaries. (Compiled using VS 2013)

    Who needs a vendor-specific API? Only a vendor who sucks at writing drivers and cannot support DCLs… (Or who doesn’t want to support them.)
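
    To make the DCL point concrete, here is a minimal sketch (an illustration, not taken from the sample above) of how a worker thread records work on a D3D11 deferred context and hands back a command list for the render thread; device and resource setup are assumed to exist elsewhere:

    // Worker thread: record draw calls on a deferred context.
    // (Illustrative sketch; error handling omitted.)
    ID3D11CommandList* RecordWork(ID3D11Device* device)
    {
        ID3D11DeviceContext* deferred = nullptr;
        device->CreateDeferredContext(0, &deferred);

        // ... set state and issue draws here, e.g. one instanced call
        // instead of thousands of individual ones:
        // deferred->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);

        ID3D11CommandList* cmdList = nullptr;
        deferred->FinishCommandList(FALSE, &cmdList); // FALSE: don’t restore context state
        deferred->Release();
        return cmdList;
    }

    // Render thread: play the recorded list back on the immediate context.
    // immediate->ExecuteCommandList(cmdList, FALSE);
    // cmdList->Release();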

  3. Leo says:

    Instead of ranting about Mantle, better to push NV to adopt this technology. Mantle drivers will not stay in beta forever.

    • Scali says:

      No, nVidia should most certainly NOT adopt this technology. There is absolutely no point in having a third 3D API, and fragmenting the market even further.

      nVidia should just stick to the usual path of improving D3D and OpenGL. Which they will.

    • Klimax says:

      NVidia is more or less on record that they won’t support Mantle.

      And then, their driver teams seem to be doing fine; second, games written against Mantle like BF4 apparently use GCN-specific optimizations, meaning they wouldn’t run well on Kepler; and lastly, they got some extensions into OpenGL for this, and they support Driver Command Lists, which are DirectX’s multithreaded solution to the CPU problem.

      NVidia wouldn’t gain anything.

    • k1net1cs says:

      No, nVidia shouldn’t bother adopting this…’technology’.

      As you’ve probably read around the ‘net (or haven’t), Mantle is currently optimized for GCN, and will quite possibly remain so in the future.
      Yes, the basic feature set can, allegedly, be supported on various other architectures, but the optimizations (in regard to Mantle itself, not the performance improvements over DX) won’t be, unless nVidia and/or Intel copy the entire GCN architecture verbatim.
      If AMD won’t even bother getting Mantle working on their own pre-GCN cards, why should other vendors try to support Mantle on their architectures?

      While it may seem like a ‘free upgrade’ for the consumer, it really isn’t for non-AMD vendors trying to support Mantle.
      Especially when these vendors would only consistently get sub-par performance from Mantle since their architectures don’t share similarities with GCN at all.
      And it’s not like game devs would bother, or even have the time, to apply optimizations to non-GCN cards as well.

      AMD can say ‘it should be easy since Mantle is an open standard’ and all that crap, but they quite conveniently forget that nVidia was the one who got OpenCL supported properly ahead of them on consumer cards.
      nVidia, in general, supports open standards better than AMD as long as it’s architecture-neutral, like OpenGL and OpenCL.
      Mantle may as well be an open standard but it’s definitely not architecture-neutral; most of its optimizations will still be GCN-only.
      That kind of standard is basically useless to competing vendors, open or not.

      • Scali says:

        but they still quite conveniently forgot that nVidia was the one who got OpenCL supported properly ahead of them on consumer cards.

        I would like to add that nVidia supports OpenCL all the way back to the 8800 series from 2006. AMD/ATi support only goes back to the 4000-series, which are some generations newer.
        And these architectures are quite poor at OpenCL, performance-wise. AMD had to go through some big architectural changes before OpenCL performance became competitive. Which is because OpenCL was based on the GeForce 8800 and its CUDA architecture.

        As said, Mantle will probably be a similar story: it won’t perform well on nVidia/Intel GPUs until they redesign them to be Mantle-friendly.

  4. Frank says:

    Even if we understand the technical side, I think Mantle will be a big threat to nVidia/Intel. Many people think they get a free performance uplift. They will recommend Radeons with Mantle to their friends, and there are 20+ games in development for Mantle. AMD and their partners do an excellent job of hyping this technology. Nearly every gamer is aware of it. Even if nVidia/Intel won’t adopt the API, they should think about a marketing strategy to minimize the threat.

    • Klimax says:

      Threat? Only if nVidia’s driver teams can’t get more out of their GPUs, which frankly is improbable.
      And there is NO free performance. What you see is GCN-specific optimizations and, more than likely, usage of features that have been present in DirectX for five years, which are touted with Mantle as if they were magical and new, which they aren’t. It’s just incompetence or malice on AMD’s part that those features don’t work well with their drivers.

      Those 20 games might also be the last games using Mantle, once everybody discovers there is no free lunch and nVidia is getting that performance anyway…

  5. Dennis says:

    I think we should wait a little bit. I still remember all the benchmarks from the BF4 beta and the BF4 release, and look how they have changed today (after part of the technical mess was fixed). Now we are talking about an AMD beta driver.

    “But why would anyone run on less than Ultra settings with an R9 290X?” Because there are several settings which look ugly if you set them to ultra (from the point of view of some people, like me). I’m talking about “too heavy and aggressive blur and post effects added to the game”, for example, which make the game look like a console game and not a PC game. Too much is too much.

    “running very low detail settings on a high-end rig,” As I explained…

    “or combining a low-end CPU with a top-of-the-line GPU.” I know several people who do exactly this. Either because they want to replace the mainboard and CPU later on, or because they still see big performance increases. Price-performance ratio.

    Don’t get me wrong… Your statements are interesting and I agree with some. But the mentioned scenarios are more common than you might think.

    I tested Mantle and went back to DirectX. With the beta drivers and Mantle activated, I see heavy frame drops, and I think this must be fixed first. Friends experienced exactly the same problem, and I think it is still a bad scenario for benchmarks while things are this faulty.

    • Scali says:

      My point was: if you have a high-end rig, you can run ultra settings if you want. Sure, you can also run low settings… but you wouldn’t need a special API to boost performance.

  6. Kepler says:

    “One thing we didn’t expect to see was Nvidia’s Direct3D driver performing so much better than AMD’s. We don’t often test different GPU brands in CPU-constrained scenarios, but perhaps we should. Looks like Nvidia has done quite a bit of work polishing its D3D driver for low CPU overhead. ”

    “Heck, on the 4770K, the GTX 780 Ti with D3D outperforms the R9 290X with Mantle.”

    Mantle, overhyped garbage that underwhelms, underdelivers and underperforms, just like AMD’s Bulldozer/Piledriver/Steamroller line of CPUs.

    • Scali says:

      Yes, interesting results… I’m surprised that most reviews don’t include nVidia cards, or investigate more CPU configurations.
      nVidia clearly is a lot less CPU-limited than AMD in D3D. Even on the A10-7850K, the 780’s performance is reasonably close to Mantle.

  7. Kepler says:

    Do not believe anything Oxide or AMD says, they intentionally cripple D3D11 performance.

    http://forums.anandtech.com/showthread.php?t=2366731

    DICE and AMD also do dirty things under the table and have been crippling performance for Nvidia users.

    “Enabled tile-based compute shader lighting optimization on Nvidia for improved GPU performance (already active on AMD GPUs) ”

    Shameless dirty companies.

    • mh says:

      This doesn’t make sense, because the API shouldn’t affect image quality in this way – once the data and commands get to the GPU, everything else should be equal and the API should be irrelevant. It looks more like the Mantle renderer is adding some distance-based haze, which should be a toggleable option.

    • k1net1cs says:

      One of the DICE devs said that it was a bug in either the game or the engine itself, not caused by Mantle; I can’t really remember which.
      Then again, the reason I didn’t really bother to remember was probably that it came from the same guy, or at least someone from the same dev team, who said that DX11’s implementation of driver command lists was ‘completely broken’.

      • Scali says:

        One thing I hate about the DICE devs is that they claim that all DX11-class GPUs are very similar, and that there is nothing in Mantle that would be GCN-only…
        But I’m 100% sure that, since they have been in the Gaming Evolved program for years, they have not had any access to low-level information on nVidia architectures AT ALL. So they have no idea how nVidia’s GPUs *really* work, and how similar they really are to GCN. Which means they also cannot judge how well Mantle would map onto these GPUs. Yet they claim they do.

        Given the significant difference in D3D performance between Kepler and GCN, I think this may be partly because Kepler is just designed to match the D3D11 driver model better (perhaps the DCL problem is partly a limitation of GCN, and Mantle is their ‘workaround’).

        And yes, as you say, DICE claims DCL is ‘completely broken’, but fail to add ‘on AMD hardware’ to that statement.

      • k1net1cs says:

        Heh… reminds me of that whole auto-tessellation and Crysis 2 ‘useless triangles’ debacle.
        Oh, the amount of workarounds AMD had to do because, in real-world situations, their GPUs couldn’t really cope with tessellation.
        At least not as well as the competition, even Intel.

  8. saifulhaziq says:

    I think it’s too early to shoot Mantle down. Perhaps current game engines are designed and optimized for D3D/DX11, hence the results shown? I believe that if a game engine were built from scratch for the Mantle environment, the results would be much better relative to an engine optimized for D3D/DX11. But there is a problem: if I were a developer, I wouldn’t double my coffee intake just to develop an engine for each vendor’s API.

    I really hope that nVidia could be a bit consumerist and embrace Mantle, so that developers would be more likely to craft their works exclusively for Mantle rather than DX11.

    • Scali says:

      nVidia is not going to embrace Mantle. nVidia is adding extensions to OpenGL to reduce CPU overhead: http://www.slideshare.net/CassEveritt/beyond-porting

      I think AMD is barking up the wrong tree with Mantle. They should support nVidia’s OpenGL extensions instead. OpenGL is already a standard, supported by various engines.
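
      For illustration, one of the techniques in those slides is indirect multi-draw: the per-draw parameters live in a GPU buffer, and a single API call submits the whole batch, so per-draw CPU overhead nearly disappears. A minimal sketch (requires GL 4.3 or ARB_multi_draw_indirect; buffer names and contents here are assumptions):

      // Layout mandated by the GL spec for indirect indexed draws.
      struct DrawElementsIndirectCommand {
          GLuint count;          // indices for this draw
          GLuint instanceCount;
          GLuint firstIndex;
          GLuint baseVertex;
          GLuint baseInstance;
      };

      // One CPU-side call issues drawCount draws from the indirect buffer.
      void SubmitBatch(GLuint indirectBuffer, GLsizei drawCount)
      {
          glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
          glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                      nullptr,    // offset 0 into the buffer
                                      drawCount,
                                      0);         // 0 = tightly packed commands
      }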

      • mh says:

        I would have thought that even adding their own vendor-specific GL extensions would have been a better tactic. A reasonable explanation might be that they know exactly how badly broken their own GL driver is, and don’t want to touch it with a 10-foot pole.

        I suspect that Microsoft’s ongoing tying of D3D versions to Windows versions may be hurting them badly. They can’t expose new hardware features to many customers via D3D. They don’t even want to touch their own GL driver. The only option they have left is to invent a new API and try to over-sell it. Seems near the mark to me.

      • Frank says:

        Most of these extensions were designed by AMD, so they already support them. And I think this is a good way for the market. The real problem is that many developers want Mantle. Not just DICE or Oxide, but PopCap, Rebellion, Cloud Imperium Games, Nixxes, Visceral, Mohawk, Stardock, and so on.

      • Klimax says:

        @Frank:
        Want or were bribed to want? Like Star Swarm….

      • Frank says:

        @Klimax:
        Want! The whole thing was started by Johan Andersson. Basically he wanted this, and with the dual console win AMD could afford a low-level API. The others joined the project when they heard about it. Even BioWare is on board now. The last time I talked to my friend, he said they will implement Mantle in their next-gen Dragon Age, Mass Effect and Star Wars titles.

      • Scali says:

        Sorry, I just don’t buy that story. Most developers don’t.
        The whole console-win thing is nonsense, since neither Sony nor Microsoft allow the use of Mantle on their consoles. Microsoft just uses a variation of D3D, which is closer to the Windows-version of D3D than Mantle is. Problem is, for the Windows-version, the IHVs need to develop efficient drivers.

      • Klimax says:

        @Frank: Until they see that they are wasting money and time, because all they will get is just matching nVidia.
        Mantle is just a giant waste of time and a giant vendor lock-in. Just FYI.

    • Klimax says:

      Ha ha. Sorry, no. Not even being built for Mantle would save Mantle, because there is nothing new to it. There is nothing ‘too early’ about shooting down the obviously bad idea of reimplementing already-existing stuff like multithreading in DirectX.

      It is just vendor-specific lock-in and a fix for their bad drivers. Nothing good in there.

    • Scali says:

      Wasn’t part of Mantle’s claim-to-fame that you can use the same GCN/multithreaded optimizations on Windows as you can on PS4 and XB1? Since Battlefield 4 is also available on these platforms, that implies that it is optimized for these platforms, and that Mantle leverages these optimizations.

  9. Maxwell says:

    AMD just got bitchslapped so hard and good.

  10. API says:

    Direct3D Futures – Come learn how future changes to Direct3D will enable next generation games to run faster than ever before! In this session we will discuss future improvements in Direct3D that will allow developers an unprecedented level of hardware control and reduced CPU rendering overhead across a broad ecosystem of hardware.
    http://schedule.gdconf.com/session-id/828181
    Evolving Microsoft’s Graphics Platform – For nearly 20 years, DirectX has been the platform used by game developers to create the fastest, most visually impressive games on the planet. However, you asked us to do more. You asked us to bring you even closer to the metal and to do so on an unparalleled assortment of hardware. You also asked us for better tools so that you can squeeze every last drop of performance out of your PC, tablet, phone and console.
    http://schedule.gdconf.com/session-id/828184
    Approaching Zero Driver Overhead in OpenGL – Driver overhead has been a frustrating reality for game developers for the entire life of the PC game industry. On desktop systems, driver overhead can decrease frame rate, while on mobile devices driver overhead is more insidious–robbing both battery life and frame rate. In this unprecedented sponsored session, Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today’s OpenGL implementations that radically reduce driver overhead–by up to 10x or more.
    http://schedule.gdconf.com/session-id/828316
    Mantle is dead on arrival, as predicted; repi must be mad as hell that he wasted months and years on Mantle.

  11. Iris Pro says:

    Mantle is worthless and it really shows.

  12. Broadwell says:

    http://www.anandtech.com/show/7875/new-unlocked-iris-pro-cpu-broadwell

    It was nice knowing you, AMD. Now Intel is gonna take a commanding lead in IGPU.

  13. mh says:

    http://www.anandtech.com/show/7868/evaluating-amds-trueaudio-and-mantle-thief

    Some interesting benchmarks here. My reading of all of this is:

    If you’re doing heavy GPU work, Mantle isn’t going to do much for you at all. So if you’re a gamer who likes to “max it out”, then enabling Mantle won’t give you playable framerates in cases where you may not have had them before.

    In some cases D3D is going to be faster than Mantle. I interpret this as a result of D3D(11) having a 5-year headstart, developers being more experienced with it, engines using it being more finely-tuned, etc.

    The message I take away is “dial the settings down and you’ll go faster”. But that’s something we’ve known for over 20 years anyway – Mantle just provides a way of going even faster than before when you dial the settings down.

    It’s hard to see what the intended end-result of all of this is. The time invested by AMD in Mantle seems to me as if it would have been more usefully spent improving their existing D3D and (especially) GL drivers – the gains posted here are of the same magnitude that one typically sees from a driver optimization; none of this needs a new API.

    In 5 years’ time – if Mantle survives – maybe I’ll eat all of these words, but right now it needs to be offering a hell of a lot more than solving a problem that doesn’t even exist for many users, while at the same time taking resources away from solving problems that do exist for the rest. As the benchmarks come out, the question is more and more a case of “why would I bother?” – a less than 50% gain with a typical workload at low settings doesn’t seem worth the effort to me, especially considering this will only be available to under half your target audience anyway.

    • Scali says:

      To me it’s quite obvious that AMD went for Mantle to try and frustrate the market. In this case, nVidia is clearly the ‘good guy’, supporting OpenGL, which is a common open standard. I don’t see why Mantle is required, since you can do anything you want with OpenGL extensions.

      But what bothers me most is all those ‘developers’ who keep promoting Mantle. Just yesterday I was talking to someone who works at Nixxes (the studio that worked on Thief). He too claimed that Mantle is the reason why OpenGL and D3D are now focusing on low CPU overhead… I told him that was naive, not to mention that most of the OpenGL extensions had already been out before Mantle was released. Besides… Mantle isn’t even officially released yet. What exactly are you saying then? All we have so far is some beta drivers from AMD and some games that use them; no documentation, API specs or anything. Are you saying that Microsoft reverse-engineered the Mantle drivers, then built the D3D12 API around that, and all that in time for a presentation at GDC, in less than 2 months’ time?

      That’s just ridiculous. Just like his response when I pointed out the OpenGL extensions. He said: “OpenGL? Who cares! Nobody uses that!”… Wait, what? There are still tons more games and engines that support OpenGL than Mantle, even though neither is anywhere near D3D’s popularity. That still makes OpenGL a better choice than going for a new vendor-specific API.

      Besides, have they forgotten so quickly that Mantle was inspired by the APIs of the Xbox One and PS4? AMD was all about the consoles when they first promoted Mantle, even implying that Mantle *was* the API used on these consoles. Which it clearly wasn’t, so that makes Mantle an API based on the ideas of Microsoft’s and Sony’s APIs. So… you are claiming that an API inspired by Microsoft’s own Direct3D11.x API is what triggered Direct3D12? Microsoft couldn’t possibly have figured any of this out without AMD’s help? Nonsense.

      But well, that’s ‘developers’ for you these days.

  14. API says:

    http://blogs.nvidia.com/blog/2014/03/20/directx-12/
    “NVIDIA will support the DX12 API on all the DX11-class GPUs it has shipped; these belong to the Fermi, Kepler and Maxwell architectural families. With more than 50% market share (65% for discrete graphics) among DX11-based gaming systems, NVIDIA alone will provide game developers the majority of the potential installed base.”
    Nvidia, forward thinking, forward looking.
    HD5000/HD6000 do not support DirectX 12 or Glide 2013; I bet those who bought them must be regretting it now.

    • Scali says:

      HD5000/HD6000 do not support DirectX 12 or Glide 2013; I bet those who bought them must be regretting it now.

      Yea, that would be me 😦 https://scalibq.wordpress.com/2009/11/30/got-my-radeon-5770/

    • Scali says:

      That nVidia blog was pretty devastating in terms of AMD’s Mantle claims…
      nVidia clearly claimed that they’ve been working on DX12 for more than 4 years. Also, they actually demonstrated a working implementation. Which means DX12 is about as mature at this point as Mantle is.
      Makes you think… AMD with its claims that there would not be a DX12, and then introducing their own API (claiming all sorts of ideas as being ‘new’ and their own invention, despite many of them already being available as OpenGL extensions, and as we now know, also available in the upcoming DX12)… As if AMD is afraid of DX12. Perhaps nVidia had a bit too much influence on DX12, and AMD needs an alternative API to remain competitive?

      See also my coverage here: https://scalibq.wordpress.com/2014/03/20/directx-12-a-first-quick-look/

      • mh says:

        My reading (feel free to disagree):

        Mantle *is* DX12. It’s AMD’s DX12 driver, slightly primitive and with some slightly different syntactic sugar around it, but otherwise nothing more.

        This is consistent with AMD’s claims that Mantle could be implemented by other vendors. It’s consistent with AMD limiting DX12 to GCN. It’s consistent with DX12 on the XBone.

        In this scenario, AMD have likewise had DX12 for the past 4-odd years (in various stages of development, of course) but for whatever reason decided to jump the gun on it. Decided to be deceptive to their customers too, but hey, not for the first time, eh?

        Makes sense? I think so.

      • Scali says:

        Yes, I’m quite sure that Mantle and DX12 will have striking similarities… However… I am not so sure that this originated from AMD, as many people like to claim. For example:

        1) nVidia has been publishing OpenGL extensions for various low-CPU features for years (such as bindless resources, which apparently are a ‘thing’ in Mantle and DX12).

        2) Apparently DX12 can run on Fermi and newer, which means that it goes further back on nVidia cards than on AMD cards. A rather strange coincidence if the API originated from your biggest competitor. But it makes perfect sense if it was nVidia who started with DX12/OpenGL at the same time as Fermi (which would fit the 4-year timeframe that nVidia mentioned).
        And recall once again that AMD’s move to GCN was also a move to an architecture much closer to what nVidia had (dropping VLIW for more straightforward SIMD).
        I originally said that OpenCL/DirectCompute were a big catalyst for that move (again technologies that originated from nVidia), but in retrospect, perhaps the route that DX12 was taking could also have had some influence.
        At any rate, nVidia has had DX12-compatible hardware out for 4 years, where AMD has only had it for 2 years. So AMD could not have worked on Mantle before that (at least, not what we now know as Mantle… there could have been work on a low-level API for older architectures, who knows? Richard Huddy first hinted at an alternative for DX in March 2011, when GCN development was well underway already), nVidia could have worked on DX12 before that.

        3) It was nVidia who delivered the hardware and drivers for the DX12 demonstrations at GDC. If AMD was first, with Mantle and all, wouldn’t it make more sense that AMD would also have the better drivers? In fact, isn’t it strange that nVidia has DX12 drivers *at all*, less than 2 months after AMD published the first Mantle beta drivers (and still no official documentation on Mantle available, let alone an SDK)?
        In fact, nVidia’s drivers were probably ready a few months in advance, because they also had to port 3DMark and Forza in time for the presentation.

        As for the XBox One… I’ve been reading some claims that XBone *needs* DX12 because DX11.x is not efficient enough, or too difficult to program for. I don’t buy these claims. The comparisons I’ve seen between XBone and PS4 seem to indicate that there is no CPU-bottleneck. XBone needs to run at a slightly lower resolution than the PS4, which indicates a fillrate problem obviously. It does not reduce the CPU-load. And in terms of graphics quality and other workloads (AI, physics etc), it seems that games are pretty much the same on PS4 and XBone.
        Besides, didn’t they use Forza as an example that DX12 on Windows gives you the performance of DX11.x on XBone? After all, they say: “console-level efficiency on PC”, where ‘console’ means XBox One: “Forza Motorsport 5 is an example of a game that pushes the Xbox One to the limit with its fast-paced photorealistic racing experience. Under the hood, Forza achieves this by using the efficient low-level APIs already available on Xbox One today.”

        So the claims that XBox One *needs* DX12 seem bogus. XBox One *gets* DX12 because that means you can reuse code between XBox One and Windows easily (just like XNA did on earlier XBoxes etc).

        Why did AMD decide to jump the gun? I’m not sure, perhaps Mantle is DX12-but-slightly-different-because-the-real-DX12-was-designed-by-nVidia-and-is-suboptimal-for-us?

      • mh says:

        Oh, I’m not claiming that DX12 originated with AMD. I’m thinking instead that DX12 development is pretty much as described, but part-way through development AMD stuck a different API-front-end on the then current version of their driver and called it “Mantle”. If so, that would be absolutely hilarious: AMD’s great DX-killer turns out to actually have been DX under the hood all along.

      • Scali says:

        I found another interesting presentation from nVidia, dating from 2009: http://developer.download.nvidia.com/opengl/tutorials/bindless_graphics.pdf
        Sounds a lot like stuff that Mantle and DX12 have adopted.
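
        For reference, the core trick in that 2009 presentation looks roughly like this (a sketch using the GL_NV_shader_buffer_load and GL_NV_vertex_buffer_unified_memory extensions; vbo, stride and bufSize are assumed to be set up elsewhere):

        // Make the buffer resident and query its GPU virtual address...
        GLuint64EXT addr = 0;
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
        glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &addr);

        // ...then source vertex data by address instead of by bind point:
        glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV);
        glVertexAttribFormatNV(0, 3, GL_FLOAT, GL_FALSE, stride);
        glBufferAddressRangeNV(GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, 0, addr, bufSize);

        // Draws now dereference the address directly; the driver skips the
        // per-draw handle-to-address lookup, which is where much of the CPU
        // cost of traditional binding goes.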

      • Scali says:

        Also funny how the guys here try to pick my blogs apart, but fail (because they selectively quote me, leaving out important info etc): http://semiaccurate.com/forums/showthread.php?t=7858
        I never *denied* that there was a CPU bottleneck in DX11. I merely said it was at most 10-15% on a high-end system. Mantle proved that it was. Also, I merely covered the fact that DX12 addresses the CPU-overhead in DX11. I did not make any judgement on it, because the story is still the same as with Mantle: on a high-end system it is not that big a deal.
        That site is ridiculously pro-AMD. The story on XBox One needing DX12 is positively hilarious. Yet, people buy it (literally, it’s a paywall site).

      • k1net1cs says:

        Oh that site is pretty much pro-AMD for years and they’d proudly admit it.
        They just don’t want to admit they’re being ridiculously pro-AMD.

      • Klimax says:

        @Scali March 23, 2014 at 1:17 pm
        At worst about 25% (IIRC), if you do everything the wrong way and call every single function possible as often as possible.
        On average I measured with VTune about 2-8% of runtime. The worst offender is Star Swarm, which calls some functions for each object even when it’s not needed. But then, that engine has too many batches for DCLs.

  15. Pingback: AMD fanboys posing as developers | Scali's OpenBlog™
