DirectX 12 is out, let’s review

Well, DirectX 12 has launched, together with a new version of Windows (Windows 10) and an overhauled driver model (WDDM 2.0).

Because, if you remember, we also had:

  • Windows Vista + DirectX 10 + WDDM 1.0
  • Windows 7 + DirectX 11 + WDDM 1.1
  • Windows 8 + DirectX 11.1 + WDDM 1.2
  • Windows 8.1 + DirectX 11.2 + WDDM 1.3

So, this has more or less been the pattern since 2006. Now, if we revisit AMD and Mantle, their claims were that introducing Mantle triggered Microsoft to work on DirectX 12. But, since DX12 is released together with Windows 10 and WDDM 2.0, AMD is really claiming that they triggered Microsoft to rush Windows 10.

Also, since WDDM 2.0 is quite a big overhaul of the driver model, and both Windows 10 and DirectX 12 are built around this new driver model, Microsoft would have had to work out WDDM 2.0 first, before they could design the DirectX 12 runtime on top of it.

An interesting tweet in this context is by Microsoft’s Phil Spencer:

We knew what DX12 was doing when we built Xbox One.

This clearly implies that the ideas behind DX12 existed prior to the development of the Xbox One, and probably also the idea of using a single OS and API on all devices, from phones to desktops to game consoles (we already saw this trend with Windows 8.x/RT/Phone).

Perhaps the real story here is that Sony rushed Microsoft with the PS4, so Microsoft had to develop Direct3D 11.x as an ‘in-between’ API until Windows 10 and DirectX 12 were ready, because they couldn’t afford to delay the Xbox One until then. I guess we will never know. We do see signs of this ‘one Windows’ in news articles about the Xbox One (then still known as ‘Xbox 720’) going back as far as 2012 though. And we know that Direct3D 11 was used on Windows Phone as-is, which makes Windows 8 already mostly a ‘single OS on every platform’, with just the Xbox One being the exception (the Xbox One actually does run a stripped-down version of Windows 8 for apps and general UI, but not for games).

Now, if we move from the software to the hardware, there are some other interesting peculiarities. As I already mentioned earlier, AMD does not support feature level 12_1 in their latest GPUs, which were launched as their official DirectX 12 line. I think even more telling is the fact that they do not support HDMI 2.0 either, and all their ‘new’ GPUs are effectively rehashes of the GCN 1.1 or GCN 1.2 architecture. The only new feature is HBM support, which is a change to the memory interface rather than to the GPU architecture itself.

I get this nasty feeling that after 5 years of AMD downplaying tessellation in DX11, we are now in for a few years of AMD downplaying conservative raster and other 12_1 features.

It seems that the AMD camp has already started its anti-12_1 offensive. I have already read people claiming that “DX12.1”, as nVidia advertises it, is not an official standard:

I’ve yet to find anything about DX12.1 that isn’t from nVidia, so it’s either an nVidia-specific extension to DX12 (e.g. like DX9a) or it’s a minor addendum (e.g. like DX10.1). Either way it appears the GeForce 900 series are the only thing that support it, and if that’s the case, it’s unlikely to be very important in the long run as obtuse/narrowly supported features tend to be passed over (e.g. like DX9a or 10.1, or other things like TerraScale or Ultra Shadow). Of course history may prove this assumption wrong, but that’s my guess. The Overclock.net link above includes slides from an nVidia PR presentation that shows a few features for DX12 and 12.1; perhaps others can find more about this.

Or this little gem on Reddit:

the guy in that blog surely show an anti amd bias. I would not dig too deep into his comments.

He spent a whole blog post complaining that amd does not have conservative rasterization to a standard that has not been released….

Erm, yes, because AMD released new video cards less than a month before the official release of DirectX 12, so they are going to come up with ANOTHER new line of video cards with 12_1 support right now? Well no, we’ll be stuck with these GPUs for quite some time. If AMD had GPUs with 12_1 support just around the corner, they wouldn’t have put all that effort into releasing the 300/Fury line now. A new architecture is likely still more than a year away. AMD’s roadmap does not say much, other than HBM2 support in 2016.

The plot thickens, as it seems that Intel’s Skylake GPU actually supports 12_1 as well, and in fact supports even higher tiers than nVidia does (Intel does not advertise ‘DX12.1’ as nVidia does, but they do advertise ‘DX11.3’, which as we know is a special update that includes the new 12_1 features, as mentioned earlier with the introduction of Maxwell v2). Clearly AMD has dropped the ball with DX12. It looks like they simply do not have the resources to develop new GPUs in time for new APIs anymore. Which might explain the Mantle offensive: AMD knew they couldn’t deliver new hardware in time, but they could deliver a ‘DX12-lite’ API before Microsoft was ready with the real DX12.

Lower CPU overhead, for whom?

The main point of Mantle was supposed to be lower CPU overhead. But is that all that relevant for the desktop? It doesn’t seem that way. Mantle didn’t exactly revolutionize gaming. What about consoles then? Well no, consoles always had special APIs anyway, with low-level access, so they wouldn’t really need Mantle or DX12 either.
But, there are other devices out there, with GPUs and even slower CPUs: mobile devices.
That might well have been Microsoft’s main goal: phones and tablets, getting higher performance and better battery life out of these devices. This is also what we see with Apple’s Metal. They launched it primarily as a new API for iOS. That is not just a coincidence.

Mantle is DX12?

And what of the claims that Microsoft copied Mantle? There was even some claim that the documentation was mostly the same, with screenshots of alleged documentation of both, and alleged similarities. Now that the final documentation is out, it is clear that the two are not all that similar at the API level. DX12 still uses a lightweight COM approach, whereas Mantle uses a flat procedural approach. But most importantly, DX12 has some fundamental differences from Mantle, such as the distinction between bundles and command lists; Mantle only has ‘command buffers’. Again, it looks like Mantle is just a simplified version, rather than Microsoft cloning Mantle.

Time for some naming and shaming?

Well, we all know Richard Huddy by now, I suppose. He has made all sorts of claims about Mantle and DX11, changing his story over time. But what about some of the other people involved in this Mantle marketing scheme? I get a very bad taste in my mouth from all these ‘developers’ involved with AMD-related promotions.

First there is Johan ‘repi’ Andersson (DICE). I wonder if he really believed the whole Mantle thing, even including the early claims that there would be no DX12. He sure played along with the whole charade. I wonder how he feels now that AMD has pulled the plug on Mantle after little more than a year, with only a handful of games supporting Mantle at all, some of them not even faster than DX11. It appears he has lived in an AMD vacuum as well, with claims such as that DX11 multithreading was broken.

What he really meant to say was that AMD’s implementation of DX11 multithreading was broken, which you can see in Futuremark’s API overhead test.

As you can see, there is virtually no scaling on AMD hardware. nVidia, however, gets quite reasonable scaling out of DX11 multithreading. Sure, Mantle and DX12 are better, but nevertheless, DX11 multithreading is not completely broken. The problem is in AMD’s implementation: AMD’s drivers cannot prepare a native command buffer beforehand. So the command queue for each thread is saved, and by the time the actual draw command is issued on the main thread, AMD’s driver needs to patch up the native command buffer with the current state. This effectively makes it a single-threaded implementation. As nVidia shows, this is not a DX11 limitation; it *is* possible to make DX11 multithreading scale (and in fact, even single-threaded DX11 scales somewhat on CPUs with more cores, so it seems that nVidia also does some multithreading of their own at a lower level).

Then I ran into another developer, Jasper ‘PrisonerOfPain’ Bekkers (DICE). He was active on some forums, and was doing some promotion of Mantle there as well, making claims about DX11 that were simply not true: claiming that DX11 could not do certain things. When I pointed out the DX11 features that do the things he claimed were not possible, he changed his story somewhat, down to claims that Mantle would be able to do the same more efficiently, in theory. Which is something I never denied, as you know. I merely said that the gains would not be of revolutionary proportions. Which we now know is true.

And a third Mantle developer I ran into on some forums was Sylvester ‘.oisyn’ Hesp (Nixxes). He also made various claims about Mantle, DX11 and DX12, none of which held up in the end, as more became known about DX12 and the future of Mantle. He also made some very dubious claims, which make me wonder how well he even understands the hardware in the first place (I suppose us oldskool coders have a slightly different idea of what ‘understanding the hardware’ really means than the newer generation). He literally claimed that an API design such as DX12 could even have been used back in the DX8 era. Firstly, such a claim is quite preposterous, because you’re basically saying that Microsoft and the IHVs involved with the development of DX have been completely clueless all these years, and with DX12 they suddenly ‘saw the light’… Secondly, it demonstrates a clear lack of understanding of what problem DX12 is actually trying to solve.

That problem is about managing resources and pipeline states, in order to reduce CPU-overhead on the API/driver side. In the world of DX8, we had completely different usage patterns of resources and pipeline states. We had much slower GPUs with much less memory, and much more primitive pipelines and programmability. So the problems we faced back then were quite different from those today, and DX12 would probably be less efficient at handling GPUs and workloads of that era than DX8 was.

And there are more developers, or at least, people who pretend to be developers, who have made false claims about AMD and Mantle. Such as the comment from someone calling himself ‘Tom’, on an earlier blog of mine about DirectX 11.3 and nVidia’s Maxwell v2. In that blog I pointed out that there had been no indication of current or future AMD hardware being capable of these new features. ‘Tom’ made the claim that conservative rasterization and raster ordered views would be possible on existing GCN hardware through Mantle.
Well, DirectX 12 is out now, and apparently AMD could not make it work in their drivers, because they do not expose this functionality.

Or Angelo Pesce with his ‘C0DE517E’ blog, whom I covered in an earlier blog. Well, on the desktop, GCN has not been very relevant at all since the introduction of Maxwell. AMD has been losing market share like mad, and is currently at an all-time low, and dropping fast:
[Chart: AMD vs nVidia discrete GPU market share]

And don’t get me started on Oxide… First they had their Star Swarm benchmark, which was made only to promote Mantle (AMD sponsors them via the Gaming Evolved program), by showing that bad DX11 code is bad. Really, they show DX11 code which runs at single-digit framerates on most systems, while not exactly producing world-class graphics. Why isn’t the first response of most people as sane as: “But wait, we’ve seen tons of games doing similar stuff in DX11 or even older APIs, running much faster than this. You must be doing it wrong!”?

But here Oxide is again, in the news… This time they have another ‘benchmark’ (do these guys ever make any actual games?), namely “Ashes of the Singularity”.
And, surprise surprise, again it performs like a dog on nVidia hardware. Again, in a way that doesn’t make sense at all… The figures show it is actually *slower* in DX12 than in DX11. But somehow this is spun into a DX12 hardware deficiency on nVidia’s side. Now, if the game can get a certain level of performance in DX11, clearly that is the baseline of performance that you should also get in DX12, because that is simply what the hardware is capable of, using only DX11-level features. Using the newer API, and optionally using new features should only make things faster, never slower. That’s just common sense.

Now, Oxide actually goes as far as claiming that nVidia does not actually support asynchronous shaders. Oh really? Well, I’m quite sure that there is hardware in Maxwell v2 to handle this (nVidia has had asynchronous compute support in CUDA for years, via a technology they call HyperQ, long before AMD had any such thing. The only change in DX12 is that a graphics shader should be able to run in parallel with the compute shaders. That is not something that would be that difficult to add to nVidia’s existing architecture, so it is quite implausible that nVidia didn’t do this properly, or even ‘forgot’ about it). This is what nVidia’s drivers report to the DX12 API, and it is also well documented in the various hardware reviews on the web.
It is unlikely for nVidia to expose functionality to DX12 applications if it is only going to make performance worse. That just doesn’t make any sense.
There’s now a lot of speculation out there on the web, by fanboys/’developers’, trying to spin whatever information they can find into an ‘explanation’ of why nVidia allegedly would be lying about their asynchronous shaders (they’ve been hacking at Ryan Smith’s article on Anandtech for ages now, claiming it has false info). The bottom line is: nVidia’s architecture is not the same as AMD’s. You can’t just compare things such as ‘engine’ and ‘queue’ without taking into account that they mean vastly different things depending on which architecture you’re talking about (it’s similar to AMD’s poorly scaling tessellation implementation. Just because it doesn’t scale well doesn’t mean it’s ‘fake’ or whatever. It’s just a different architecture, which cannot handle certain workloads as well as nVidia’s).

What Oxide is probably doing is the same thing they did with Star Swarm: they feed it a workload that they KNOW will choke on a specific driver/GPU (in the case of Star Swarm, they sent extremely long command lists to DX11. This mostly taxed the memory management in the driver, which was never designed to handle lists of that size. nVidia fixed up their drivers to deal with it though. It was never really an API issue; they just sent a workload that was completely unrepresentative of any realistic game workload). Again a case of bad code being bad. When you optimize a rendering pipeline for an actual game, you will look for a way to get the BEST performance from the hardware, not the worst. So worst case, you don’t use asynchronous shaders, and you should get DX11-level performance as a minimum (there is no way to explicitly use asynchronous shaders in DX11). Best case, you use a finely tuned workload that makes use of new features such as asynchronous shaders to boost performance.

It sounds like Oxide is just quite clueless in general, and that isn’t the first time. Remember this?

With relatively little effort by developers, upcoming Xbox One games, PC Games and Windows Phone games will see a doubling in graphics performance.

Suddenly, that Xbox One game that struggled at 720p will be able to reach fantastic performance at 1080p. For developers, this is a game changer.

The results are spectacular. Not just in theory but in practice (full disclosure: I am involved with the Star Swarm demo which makes use of this kind of technology.)

Microsoft never claimed any performance benefits for DX12 on Xbox at all, and pointed out that DX11.x on the Xbox One already gave you these performance benefits over regular DX11. Even so, DX12 gives you performance benefits on the CPU-side, while making the Xbox One go from 720p to 1080p would require more fillrate on the GPU-side. Not something any API can deliver (if the Xbox One was CPU-limited, then you could just bump up the resolution to 1080p for free in the first place). Oxide has a pretty poor track record here, spreading dubious benchmarks, and outright wrong information.

What is interesting though, is that AMD’s Robert Hallock has FINALLY admitted that DirectX 12 is not just Microsoft stealing Mantle, but Microsoft’s own creation:

DX12 it’s Microsoft’s own creation, but we’re hugely enthusiastic supporters of any low-overhead API. 🙂

Glad we got that settled.
So basically, not a lot of what we heard about AMD and Mantle turned out to be true. As I have been saying all along. Welcome to the era of Windows 10 and DirectX 12. These are going to be interesting times for game engines and rendering technology!

Edit: There have been some updates on the async compute shader story between nVidia, AMD and Oxide. See ExtremeTech’s coverage for the details. The short story is exactly as I said above: nVidia’s and AMD’s approach cannot be compared directly. nVidia does indeed support async compute shaders on Maxwell v2, and indeed, there are workloads where nVidia is faster than AMD, and workloads where AMD is faster than nVidia. So Oxide did indeed (deliberately?) pick a workload that runs poorly on nVidia. Their claim that nVidia did not support it at all is a blatant lie. As are claims of “software emulation” that go around.

The short story is that nVidia’s implementation has less granularity than AMD’s, and nVidia also relies on the CPU/driver to handle some of the scheduling work. It looks like nVidia is still working on optimizing this part of the driver, so we may see improvements in async shader performance with future drivers.

So as usual, you read the truth here first 🙂


30 Responses to DirectX 12 is out, let’s review

  1. k1net1cs says:

    Scali, have you read about the whole ‘Nano-gate’ saga regarding AMD not giving out review samples of R9 Nano to a few review sites?

    http://www.hardocp.com/article/2015/09/09/amd_roy_taylor_nano_press

    Apparently the situation has been (partly) resolved for now.
    Roy has apologized to TechReport:

    http://techreport.com/news/29011/updated-amd-vp-explains-nano-exclusion-apologizes

    But still no news on TechPowerUp, and [H]ard|OCP just bought one themselves.
    Bit of a shame AMD still need to do something like that.
    I hope the graphics division (Radeon) getting separated from AMD will stop this kind of PR practices in the future (and actually get some real tech working properly).

    Not to mention that the Nano, considering its MSRP, has coil whine according to some other review sites.
    Yep, the little card that’s targeting the SFF cases which users usually prefer quietness from, has coil whine.
    It also doesn’t support HDMI 2.0…while targeting the 4K crowd.

    • Scali says:

      Well, as I said earlier, it seems that the press/community is starting to see through AMD’s shenanigans, and getting fed up with it.

      I wonder how nVidia will respond… They already have the 970 mini, but in theory they could just as well make 980 mini’s, if they start cherry-picking the GPUs, like AMD is doing for Nano (the 970 and 980 are physically the same card in every way, the only difference is that the 970 has some units disabled because it’s a harvested part). It would be quite competitive in performance, at a lower pricepoint, and it would have HDMI 2.0, which is important in this market.

      • k1net1cs says:

        I kinda wonder why Nano doesn’t just use Fiji PRO instead of XT.
        If their reasoning on Fury X being scarce due to low yield, why is XT being used on Nano as well?
        Most reviews also showed that Nano’s performance is a bit under Fury (Fiji PRO) in general, since Nano uses underclocked and thermal-throttled XT.

        Unless, of course, Fury X scarcity is artificial.

        Then again, maybe Nano just uses ‘leftovers’ from Fury X production…it has XTs that didn’t quite make the binning for Fury X.

      • Scali says:

        I wonder if it might be the other way around… Fury X has watercooling and much higher TDP than Nano. So I wonder if Nano gets the cherry-picked dies and the ‘leftovers’ go to Fury/Fury X.

  2. TruthAndDare says:

    Oh c’mon, haven’t you heard the latest spi… ahem, latest news? Mantle is, as apparently it was always ordained to, be by the wise sages of the high church of AMD, a precursor to something much bigger than your silly little DX12. VULCAN, it is coming, and combined with HBM (which is of course unquestionably an AMD invention, the credit for which they mercifully shared with some small company to help them out) and the mighty ZEN, Vulcon will win the universe for AMD once and for all. ….., or so I’ve been led to believe.

  3. Anonymous says:

    Dat pile of nvidia advertisment BS.

    • Scali says:

      Actually, it doesn’t say much about nVidia (even says Intel has the most capable DX12 GPU atm), but is mostly defusing the truckloads of AMD advertisement BS going round the internets. Which you are apparently part of.

  4. Kaotik says:

    This really gives the impression, that you have something between your teeth regarding AMD.
    As you say, MS knew what DX12 was going to be when they designed Xbox One.
    Yet you downplay AMD’s DX12 support, even though GCN 1.1 is exactly what MS used in Xbox One, for DX12 support – heck, AMD’s chip from 2011 is only 1 feature shy of DX12 FL12_0 support, and their first DX12 FL12_0 chip was released in early 2013.
    While you mention Skylake supporting higher tiers than NVIDIA’s FL12_1 chips, you forget to mention that so does GCN on several features – including the DX12/FL11_1 chip from 2011.
    If anything, FL12_1 seems like afterthought added by request from NVIDIA, as MS’s own console doesn’t support it, even though “we knew what DX12 was going to be when we designed it”

    As for Mantle being DirectX 12, of course it’s not. However there’s no denying that it’s quite convenient how suddenly everyone goes low overhead after Mantle is out. Let’s rewind a bit there – according to AMD, they’ve shared Mantle research since the beginning with MS. Obviously this doesn’t mean that DX12 would be copied from this, but it’s not far fetched to assume that Mantles design points (low overhead etc) did indeed affect the route DX12 took. Of course it could be all just a coincidence too, but I suppose we’ll never know for sure.

    Regarding async compute – first one has to define what you mean with “async compute”. Maxwell v2 won’t support async compute in the way AMD defines the term, but it’s up to each to decide wether AMD’s definition is the correct one – if there even is a correct definition. However, where you go wrong is making conclusions based on ExtremeTechs article, which itself says there’s still much unknown in the air.
    There’s no evidence whatsoever that Oxide lied about it, or deliberately chose workload that wouldn’t work on NVIDIA – NVIDIAs support for async compute in any fashion or meaning of the term is (or was at the time anyway) far from what it should be, you can’t blame a game dev utilizing async compute (and utilizing it only a little really) on that, especially when they worked with NVIDIA to get it running without it since it couldn’t run with it.

    • Scali says:

      Yet you downplay AMD’s DX12 support

      I don’t ‘downplay’ anything, I just point out the fact that AMD doesn’t support all features introduced in the DX12 API. Therefore, the common story that we have AMD to thank for DX12, and everything in DX12, is clearly wrong.

      While you mention Skylake supporting higher tiers than NVIDIA’s FL12_1 chips, you forget to mention that so does GCN on several features – including the DX12/FL11_1 chip from 2011.

      That’s not really the point. See, Skylake supports *everything* that nVidia’s 12_1 chips do, and then some higher tiers.
      AMD does not support 12_1 at all.
      That it supports higher tiers in the 12_0 featureset than nVidia is a different story. And one that has been covered everywhere already.

      If anything, FL12_1 seems like afterthought added by request from NVIDIA, as MS’s own console doesn’t support it, even though “we knew what DX12 was going to be when we designed it”

      Firstly, why would you even phrase it like that, ‘afterthought’?
      Secondly, clearly wrong. Read the article again. The two main features in 12_1 are rasterizer ordered views and conservative raster. Intel has supported rasterizer ordered views for years already, and had their own extension for it in DX11 already.
      Trying to claim that this has come from nVidia is rather clueless.
      Whether conservative raster originated from nVidia or Intel is not entirely clear. nVidia got their implementation to market first, but Intel’s implementation is more capable.
      It looks like nVidia and Intel communicated over the implementation however, during the DX12 design phase, so that both could be incorporated in the API, albeit with different tiers (which have to be backward-compatible).

      As for Mantle being DirectX 12, of course it’s not.

      There were plenty of major websites claiming it was. AMD loved to keep up that charade as well, since they actually made public claims that there would be no DX12.

      However there’s no denying that it’s quite convenient how suddenly everyone goes low overhead after Mantle is out.

      There has been talk of this years before Mantle as well. There’s tons of OpenGL extensions developed and presentations given about lower API overhead. There’s Apple releasing their own Metal API before Mantle was made public (indicating that they probably had Metal in development even before Mantle was).

      Obviously this doesn’t mean that DX12 would be copied from this, but it’s not far fetched to assume that Mantles design points (low overhead etc) did indeed affect the route DX12 took. Of course it could be all just a coincidence too, but I suppose we’ll never know for sure.

You can just as easily turn this around: DX12, OpenGL and/or Metal could have been in development before Mantle was. AMD would have the inside story on every one of these, being one of the major IHVs. AMD could easily have taken design points from these APIs, and built their own Mantle from that. After all, Mantle never actually became a finished product, unlike Metal and DX12. AMD just made a whole lot of marketing noise about it, pretending it was a lot of things that it wasn’t.

      Regarding async compute – first one has to define what you mean with “async compute”. Maxwell v2 won’t support async compute in the way AMD defines the term, but it’s up to each to decide wether AMD’s definition is the correct one – if there even is a correct definition.

      That is exactly my point: different hardware does things differently. APIs such as DX12 are meant to support a wide range of hardware, and as such there usually is not one single hardware implementation that is the ‘One True Implementation’, unlike what a lot of people claim on the net, even going as far as calling alternative implementations ‘software emulation’. That’s about as ridiculous as saying that AMD’s x86 CPUs are a ‘software emulation’ of Intel’s CPUs.

      There’s no evidence whatsoever that Oxide lied about it

      They make hard claims that nVidia’s hardware doesn’t support it. There’s no evidence whatsoever to support that claim.

      , or deliberately chose workload that wouldn’t work on NVIDIA

      Since I don’t make such claims, I don’t see where I ‘go wrong’. That’s just your biased view of things, I suppose. Like how this entire comment of yours was extremely biased towards AMD anyway.
      There is evidence that Oxide chose a workload that doesn’t work very well on nVidia. That’s what started this whole thing, remember?
      I just put ‘deliberately?’ in brackets, since they have done the exact same thing before with Star Swarm. And it clearly was deliberate back then, there is plenty evidence of that.

      But well, it’s fun to see those ignorant AMD fanboys trying to solve everything with personal attacks and character assassination. Trying to spin it like I hate AMD. Why? Because I don’t praise AMD as much as you guys do? Because I don’t try to cover up every technical deficiency? Because I don’t take everything that AMD says at face value? Because I actually use facts and common sense?
      Because if you read what you just wrote, that’s basically what you’re doing here… You’re covering up the fact that AMD doesn’t support DX12_1, and are attacking me personally for stating facts that don’t jive with your biased pro-AMD view of the world.
      (Yes it is actually easy to log the D3D11 calls that an application makes, and to study them. So yes, it is actually easy to show exactly what Star Swarm is doing, and how that is not exactly well-optimized D3D11 code, to say the least).

  5. semitope says:

    funny. fun blog man. Its amazing how unbiased you are…

  6. Pingback: nVidia’s GeForce GTX 1080, and the enigma that is DirectX 12 | Scali's OpenBlog™

  7. Pingback: The damage that AMD marketing does | Scali's OpenBlog™

  8. L_A says:

    Its impressive how much data you copied and pasted from other sites with a level of investigation compared to a scholar… Its also perplexing when you sham Oxide for false marketing on the basis that they implement (DX12 standard) async compute properly.

    For instance im pretty sure not only you know almost nothing about programming (let alone graphics programming) you also dont understand 1 single sentence of your own text but i get your logic, you compensate it by speaking about “bundles” and “command lists”.

    Secondly, Youre trying to tell us that AMD are some sort of devils in disguise because they collaborated with a game engine to use a DX12 feature (properly), a feature that is absolutely 100% available to Nvidia if they wanted to implement it properly, but instead nvidia latest and greatest cards rely on the CPU to perform a function that is supposed to be executed by the GPU? What did you want AMD to do? To tell Oxide and all game companies in the world to STOP using DX12 performance features because NVIDIA engineers were too busy to remake their crappy overpriced architecture? Give me a break.

    Whats even worse, is the fact that you try to accuse AMD of something that they dont actually do, they release their pretty much entire source code for competitors to adopt it as they wish, including the outdated Tress FX, Mantle and even Havok physics engine,

    What about Nvidia?
    Nvidia Physx is an old school piece of crap technology based on Ageia that nobody ever bothered to own because normal game engines could easelly achieve what Ageia had to offer without Ageia hardware, the only valid point of Ageia Physx was that, at the times, in the era of old school gaming, where nobody bothered to use more than 1 CPU Core and 1 Discrete GFX card, their solution could do more without performance impact by reserving all those stupid particles, “physics” and vertex deformation to a secondary discrete Ageia card which was paper weight for anything but ageia physx games, which again were next to none.

    Nvidia then as the usual innovative and pioneer company in technology that it is, bought the Ageia solution and made it run into a normal GPU, but instead of releasing it to the competition as it is common from AMD, they did the next best thing, they locked the access for such software to Nvidia cards, (not because AMD GPUs architecture couldn’t run the code, because it was proved numerous time when Nvidia Physx was still easy to hack and ported over to AMD cards), but simply because they always lacked anything innovative from day 1.

    Heck i still remember the old days when RAMDAC were important to VGA image quality, NVidia sucked so hard in that departament that their image simply came out much blurrier and washed out compared to ATI cards, when they couldnt compete with ATI performance they again as the pioneers of technology they are, they forced their drivers to run low quality textures to boost their performance at the level of the ATI cards untill ATI countered it with Catalyst AI.

    Their drivers accidentaly shut GFx cards fans down, their Mobility GPUs were epic fail for 4 consecutive years ranging for G84 and G86 platform not only for low end laptops but also for 1600€+ laptops from Apple and Sony.
    And now they desperately and consistently buy out developers to make them use their disgusting GameWorks scripts, when again, all of those effects are easily doable without that proprietary crap.

    As for tessellation, AMD actually implemented it properly from day one. The original logic of tessellation was that you could have more polygons processed by the graphics card at zero performance cost. AMD implemented it the PROPER way, by dedicating a special tessellation processor; Nvidia, innovative as they are, implemented it on their GPU, harming GPU performance by the tessellation factor applied. Later they realised that if you add thousands and thousands of polys to a cube, you could eventually bottleneck AMD cards by stressing their tessellation processor with a ridiculous and absolutely useless amount of polygons on screen, which were neither visible nor logical. That's the only reason tessellation was invented: free eye candy at zero performance cost. Nvidia completely missed the point on that one, so they did the next thing: they made games like Crysis 2 use a stupidly insane amount of polys which did not increase the game's eye candy, and benched those games to make Nvidia hardware look better.

    You see the hypocrisy in your text now?

    Do yourself a favour: breathe in, re-read your text, and fix it. I believe you'll reach the same conclusion as I did: a random text written by a delusional Nvidia fanboy who is also a hypocrite (intentionally or unintentionally).

    • Scali says:

      Oh please… You have absolutely no idea, do you?
      I have to admit, I had to lol hard when you tried to claim that AMD opensourced Havok. First of all, AMD does not own Havok; Intel did, and recently Microsoft bought it (yup, AMD does not have any physics solution; both major physics APIs were owned by their direct competitors, Intel and nVidia). Secondly, Havok is not open source.
      The rest of your wall-of-text is equally poorly informed, but everyone can see that for themselves, no need to even bother with a reply.

      • L_A says:

        Keep replying please, your lack of knowledge is showing by each post you make.

        1) This time you couldn't even grasp the difference between a project and a corporation (slow clap). HAVOK was originally projected for ATI cards; whether the Havok engine is now owned by AMD or not is irrelevant. The fact that it was developed with ATI, since the ATI-card era, is what matters.

        2) Please press CTRL+F and search for the word “opensource” in my “wall of text”, then let me know where it is, because it's impossible for me to find any such wording in my reply.
        Nice try at putting false words into my mouth, though; it only showed everyone that you have zero arguments against anything I said, therefore you resorted to make-believe replies which exist only in your mind.
        The fact that AMD releases the source code for ANYONE who wants to adopt it does not automatically make it open source. I would tell you to google the many types of licensing for providing a program or code for free without open-sourcing it, but I guess you would fail at that too.

        I'm glad you couldn't find one mistake in my wall of text, while I could find so many in your seven lines of text.

        Keep it going.

      • Scali says:

        1) Havok was originally a CPU-only solution. There was some talk about a GPU-accelerated solution, but it was not ATi-specific; it was designed to run on standard DX9 shader hardware, and nVidia also advertised it: http://www.theregister.co.uk/2006/03/20/nvidia_havok_gpu_physics/
        But Intel acquired Havok shortly after, and clearly wasn't interested in GPU-accelerated physics, so it was never released.

        2)

        What's even worse is that you try to accuse AMD of something they don't actually do; they release pretty much their entire source code for competitors to adopt as they wish, including the outdated TressFX, Mantle, and even the Havok physics engine

        So firstly, AMD never owned Havok; secondly, the source code of Havok was never released to anyone (and certainly not by AMD, who have no rights to do so). Thirdly, the source code of Mantle was never released either. In fact, no development tools for Mantle were released *at all*.
        So don't even try to start a discussion on different source licenses. It's completely beside the point.

        Also, watch 8088 MPH, you may want to adjust your view of who I am, how long I’ve been around, and what I’m capable of.

  9. L_A says:

    Np, here you go: the ATI X1000 series using Havok:
    http://hexus.net/tech/news/graphics/5838-ati-demo-havok-fx-physics-acceleration-radeon-gpus/
    On top of that, Havok also worked on older ATI hardware than that, even 9xxx-series cards, and it was performed by the CPU only in its initial state; its final state still works CPU-only without much performance impact (unlike Nvidia PhysX).

    So, as I was saying, even though ATI made Havok, they gave the source code to anyone who wanted to use it, including Nvidia, while Nvidia's proprietary crap only runs on Nvidia hardware. So again, your entire OP is hypocritical.

    The Mantle source code was available to anyone who wanted to develop their games with it; the same goes for Havok and TressFX.

    As for your 8088 MPH messing around with obsolescence: I use VB & C# and .NET for a living; my hobbies include GML (with surfaces and shader programming) and UE engines from the UT2k4 era.

    You're barking up the wrong tree.

    • Scali says:

      Np, here you go: the ATI X1000 series using Havok:
      http://hexus.net/tech/news/graphics/5838-ati-demo-havok-fx-physics-acceleration-radeon-gpus/
      On top of that, Havok also worked on older ATI hardware than that, even 9xxx-series cards, and it was performed by the CPU only in its initial state; its final state still works CPU-only without much performance impact (unlike Nvidia PhysX).

      So, as I was saying, even though ATI made Havok, they gave the source code to anyone who wanted to use it, including Nvidia, while Nvidia's proprietary crap only runs on Nvidia hardware. So again, your entire OP is hypocritical.

      Are you an idiot or what?
      1) This is a *tech demo*, not an actual game people could buy and use. Havok FX never made it past that stage.
      2) Where the hell do you get the idea that ATi made Havok? Havok was an independent company before Intel acquired them. Havok made Havok. Your link even says so, literally:

      Fast forward to recent times and you’ll maybe remember an announcement by Havok at GDC about Havok FX, an effect physics implementation that runs on compatible Shader Model 3.0 GPUs. NVIDIA demonstrated acceleration with Havok at GDC, if your author remembers rightly, but ATI were absent from that particular physics fanfare.

      And yes, nVidia also demonstrated Havok FX, as you can see. Even before ATi did.
      And again, it's even more far-fetched that ATi, who didn't even develop Havok, would give the source code to others. The link says nothing of the sort.
      You’re a freaking idiot.

      Mantle source code was available to anyone who wanted to develop their games using it

      Nope, Mantle was a closed beta. Only people in the Gaming Evolved program got access.

      As for your 8088 MPH messing around with obsolescence: I use VB & C# and .NET for a living; my hobbies include GML (with surfaces and shader programming) and UE engines from the UT2k4 era.

      Pfft, not impressed in the least. VB and C# are kids' stuff, as is using third-party engines. I develop my own custom engines for multi-CPU, multi-GPU and multi-display purposes. For that reason I am in the DX12 Early Access program.
      And yes, I was around in the days of 8088 and CGA as well, and know my way around those systems, among various others, as you can read in the “Just keeping it real”-series on this blog. Where things get REALLY technical. You probably won’t understand anything. It’s on a completely different level. If you want to impress me, make a demo about it. Preferably on 8088 with CGA. Then we’ll see how much of a coder you really are.

      • L_A says:

        You're not supposed to be impressed; I'd rather make actual programs, games, shaders, and on-screen filters and effects than do a college assignment and call myself a programmer on the internet.

        Have fun being a deadbeat egocentric with no knowledge at all about anything you're talking about. It's more than clear that you know crap about Havok, or Mantle for that matter, since you can't even google that information.

      • Scali says:

        Heh, you really have no idea, about anything 🙂 (Which makes you the egocentric one. You have a big mouth, but can't back it up. You said I don't know anything about (graphics) programming; I just point to an award-winning demo I was involved in. An award of “That's not possible on this platform”, no less. Many game devs expressed their appreciation, even people like John Romero. And who the hell were you again?)
        Also, you’re the one making wrong claims about Havok and Mantle, which everyone can easily verify. Ironically enough, even your own link to ATi’s Havok demo already proves you wrong, as I already said.

    • dealwithit says:

      “ATI made Havok” was a good joke.
      https://en.wikipedia.org/wiki/Havok_(software)

  10. dealwithit says:

    Self-esteem problems?

    • dealwithit says:

      @Scali, can't reply directly, sorry. No ‘reply’ button.

      • Scali says:

        I just can't get over the fact that they keep repeating the same age-old BS. PhysX ported to AMD cards? When did that ever happen? There was a hoax about that once. AMD fanbois are apparently dumb enough to still believe that hoax in 2016.
        Obviously it’s never going to work until you make a full CUDA implementation for AMD hardware. Not something some random guy can just hack up in a few days.

        Also, complaining about nVidia RAMDACs on cards that are more than a decade old… Really?

        And I’ve already said more than enough about tessellation over the years. But some people still don’t get it, apparently.

    • Alexandar Ž says:

      It’s almost like a cult.

      • Scali says:

        It is, they all spew the same nonsense. It’s extremely unlikely that they all came to the same broken conclusions by themselves. So it must have been the cult leaders that did the thinking for them, and they are just going on all sorts of forums, blogs and other social media to spread the message of their cult, like a bunch of zombies.

        They can say whatever they want about me, but at least my blog contains my own original thoughts, and my conclusions are not broken 🙂

  11. dealwithit says:

    @Scali, I remember there was a time when CUDA was open to anyone who wanted to support it. AMD never did their part.

    • Scali says:

      These AMD fanbois are getting really annoying. They’re also all over the nVidia site, commenting on driver releases and forum threads. Quite sad.
