nVidia’s GeForce GTX 1080, and the enigma that is DirectX 12

As you are probably aware by now, nVidia has released its new Pascal architecture in the form of the GTX 1080, the ‘mainstream’ chip of the architecture, codenamed GP104. nVidia had already presented the Tesla variant of the high-end chip, codenamed GP100 (which uses HBM2 memory), earlier. When they did that, they also published a whitepaper on the architecture.

It’s quite obvious that this is a big leap in performance. Then again, that was to be expected, given that GPUs are finally moving from 28 nm to 14/16 nm process technology. Aside from that, we have the new HBM2 and GDDR5X technologies to increase memory bandwidth. But you can read all about that on the usual benchmark sites.

I would like to talk about the features instead. And although Pascal doesn’t improve dramatically over Maxwell v2 in the feature-department, there are a few things worth mentioning.

A cool trick that Pascal can do is ‘Simultaneous Multi-Projection’, which basically boils down to being able to render the same geometry with multiple different projections, that is, from multiple viewports, in a single pass. Sadly, I have not found any information yet on how you would actually implement this in terms of shaders and API state, but I suspect it will be similar to the old geometry shader functionality, where you could feed the same geometry through your shaders multiple times with different view/projection matrices, which allowed you to render a scene to a cubemap in a single pass, for example. Since implementations of the geometry shader were not very efficient, this never caught on. This time, however, nVidia is showcasing the performance gains for VR usage and the like, so apparently the new approach is all about efficiency.
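Since nVidia has not published the shader/API details, here is only a toy sketch (plain Python, with hypothetical function names) of what the technique amortizes: the geometry is fetched and processed once, and only the projection step is repeated per viewport, instead of re-submitting the whole scene once per view.

```python
def transform(vertex, matrix):
    """Minimal 4x4 row-major matrix * vec4 multiply (w assumed 1.0)."""
    x, y, z = vertex
    v = (x, y, z, 1.0)
    return tuple(sum(matrix[row][col] * v[col] for col in range(4))
                 for row in range(4))

def render_single_pass(triangles, view_projections):
    """Process each triangle once, emitting it for every viewport.
    The outer loop (the expensive geometry work) runs only once per
    triangle; only the cheap per-viewport projection is repeated."""
    output = []
    for tri in triangles:
        for vp_index, vp in enumerate(view_projections):
            output.append((vp_index, [transform(v, vp) for v in tri]))
    return output
```

The point of the sketch is the loop nesting: with N viewports, the naive approach would run the whole pipeline N times, while here the geometry traversal happens once.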

Secondly, there is conservative rasterization. Maxwell v2 was the first architecture to give us this new rendering feature, although it only supported tier 1. Pascal bumps this up to tier 2 support. And there we have the first ‘enigma’ of DirectX 12: for some reason, hardly anyone is talking about this cool new rendering feature. It can bump up the level of visual realism another notch, because it allows you to do volumetric rendering on the GPU in a more efficient way (which means more dynamic/physically accurate lighting and fewer pre-baked lightmaps). Yet nobody cares.
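To make the idea concrete: normal rasterization covers a pixel only if the pixel *center* falls inside the triangle, while conservative rasterization covers every pixel the triangle touches at all. A toy software sketch (not how the hardware implements it) can express the conservative variant by shifting each edge function outward by the pixel's half-extent:

```python
def edge(ax, ay, bx, by):
    """Edge function e(x, y) = a*x + b*y + c for the line through A->B,
    positive on the interior side for counter-clockwise winding."""
    return ay - by, bx - ax, ax * by - ay * bx

def rasterize(tri, width, height, conservative):
    """Return the set of covered (px, py) cells on a width x height grid.
    tri is a CCW list of three (x, y) vertices in grid units."""
    covered = set()
    edges = [edge(*tri[i], *tri[(i + 1) % 3]) for i in range(3)]
    for py in range(height):
        for px in range(width):
            x, y = px + 0.5, py + 0.5       # pixel center
            inside = True
            for a, b, c in edges:
                e = a * x + b * y + c
                if conservative:
                    # Shift the edge outward by the half-extent of the
                    # pixel: any pixel the triangle touches is covered.
                    e += (abs(a) + abs(b)) * 0.5
                if e < 0:
                    inside = False
                    break
            if inside:
                covered.add((px, py))
    return covered
```

A thin sliver triangle shows the difference: center sampling can miss it entirely, while the conservative test reports every cell it passes through. That guaranteed coverage is what makes it usable for voxelization and volumetric techniques.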

Lastly, we obviously have to mention asynchronous compute shaders. There’s no getting around that one, I’m afraid. This is the second ‘enigma’ of DirectX 12: for some reason, everyone is talking about this one. I personally do not care much about this feature (and neither do various other developers. Note how they also point out that it can even make performance worse if it is not tuned properly, yes, even on AMD hardware. Starting to see what I meant earlier?). It may or may not make your mix of rendering/compute tasks run faster or more efficiently, but that’s about it. It does not dramatically improve performance, nor does it allow you to render things in a new way or use more advanced algorithms, like some other new features of DirectX 12 do. So I’m puzzled why the internet pretty much equates this particular feature with ‘DX12’ and ignores everything else.

If you want to know what it is (and what it isn’t), I will direct you to Microsoft’s official documentation of the feature on MSDN. I suppose that, in a nutshell, you can think of it as multi-threading for shaders. Now, shaders tend to be presented as ‘threaded’ anyway, but GPUs have always had their own flavour of ‘threads’, more closely related to SIMD/MIMD: a piece of SIMD/MIMD code is viewed as a set of ‘scalar threads’, where all threads in a block share the same program counter, so they all run the same instruction at the same time. The way asynchronous shaders work in DX12 is more like how threads are handled on a CPU: each thread has its own context, the system can switch contexts at any given time, and the order in which contexts/threads are switched can be determined in a number of ways.
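The distinction between the two flavours of ‘threads’ can be illustrated with a toy simulation (my own sketch, not any real GPU model): lockstep lanes all advance one shared program counter, while CPU-style contexts each keep their own program counter and can be interleaved by a scheduler.

```python
def run_lockstep(programs):
    """GPU-style 'scalar threads': one shared program counter; every lane
    executes the same instruction index each step (finished lanes idle)."""
    trace, pc = [], 0
    while any(pc < len(p) for p in programs):
        trace.append([p[pc] if pc < len(p) else "idle" for p in programs])
        pc += 1
    return trace

def run_independent(programs):
    """CPU-style threads: each context has its own program counter and the
    scheduler may interleave them in any order (round-robin here)."""
    trace = []
    pcs = [0] * len(programs)
    while any(pcs[i] < len(p) for i, p in enumerate(programs)):
        for i, p in enumerate(programs):
            if pcs[i] < len(p):
                trace.append((i, p[pcs[i]]))
                pcs[i] += 1
    return trace
```

With two programs of different lengths, the lockstep model wastes a slot on an idle lane, while the independent model simply stops scheduling the finished context.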

It is then no surprise that Microsoft’s examples include synchronization primitives that we also know from the CPU side, such as barriers and fences. After all, the nature of asynchronous execution implies that you do not know exactly when which piece of code is running, or at what time a given point in the code will be reached.
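As a sketch of the concept, here is a minimal counter-style fence in Python, loosely modeled on how D3D12's fence works (one side signals monotonically increasing values; the other side waits until the fence has reached at least a given value). The class and the worker are my own illustration, not Microsoft's API:

```python
import threading

class Fence:
    """Counter-style fence: signal() publishes a value, wait_for() blocks
    until the published value has reached at least the requested one."""
    def __init__(self):
        self._value = 0
        self._cond = threading.Condition()

    def signal(self, value):
        with self._cond:
            self._value = max(self._value, value)
            self._cond.notify_all()

    def wait_for(self, value, timeout=5.0):
        with self._cond:
            return self._cond.wait_for(lambda: self._value >= value, timeout)

# 'GPU' thread signals completion of its work; 'CPU' thread waits on it.
fence = Fence()
results = []

def gpu_work():
    results.append("compute done")   # stand-in for a compute dispatch
    fence.signal(1)                  # mark work item 1 as finished

worker = threading.Thread(target=gpu_work)
worker.start()
fence.wait_for(1)                    # blocks until value 1 is signalled
results.append("readback")           # safe: the compute result exists now
worker.join()
```

The fence guarantees the ordering: "readback" can only happen after "compute done", even though the two sides run asynchronously.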

The underlying idea is basically the same as that of threading on the CPU: Instead of the GPU spending all its time on rendering, and then spending all its time on compute, you can now start a ‘background thread’ of compute-work while the GPU is rendering in the foreground. Or variations on that theme, such as temporarily halting one thread, so that another thread can use more resources to finish its job sooner (a ‘priority boost’).
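That ‘background thread’ idea maps directly onto familiar CPU threading code. A hedged sketch (Python threads standing in for GPU queues, names of my own invention): the compute job is submitted to a background worker, the ‘graphics’ loop keeps producing frames, and the result is collected at a sync point.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_job(data):
    # Stand-in for a long-running compute shader (e.g. particle physics).
    return sum(x * x for x in data)

frames = []
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(compute_job, range(1000))  # 'async compute queue'
    for frame in range(3):                          # 'graphics queue' keeps rendering
        frames.append(f"frame {frame}")
    result = future.result()                        # sync point: consume the output
```

The rendering loop never blocks on the compute work until the moment its result is actually needed, which is exactly the overlap that async compute is meant to expose.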

Now, here is where the confusion seems to start. Most people seem to think that there is only one possible scenario, and therefore only one way to approach this problem. But, getting back to the analogy with CPUs and threading, it should be obvious that there are various ways to execute multiple threads. We have multi-CPU systems and multi-core CPUs, there are technologies such as SMT/HyperThreading, and of course there is still good old timeslicing, which we have used since the dawn of time to execute multiple threads/asynchronous workloads on a system with a single single-core CPU. I wrote an article on that some years ago; you might want to give it a look.
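For completeness, good old timeslicing can be sketched in a few lines (a cooperative toy model using generators, where each `yield` plays the role of a timer interrupt):

```python
def task(name, steps):
    """A task that yields control after every unit of work."""
    for i in range(steps):
        yield f"{name}:{i}"

def timeslice(tasks):
    """Round-robin on a single 'core': each task runs one slice per turn
    until it finishes - the classic single-CPU multithreading model."""
    schedule = []
    while tasks:
        still_running = []
        for t in tasks:
            try:
                schedule.append(next(t))   # run one slice of this task
                still_running.append(t)
            except StopIteration:
                pass                       # task finished; drop it
        tasks = still_running
    return schedule

order = timeslice([task("A", 3), task("B", 2)])
```

Both tasks make steady progress on one execution unit, which is the whole point: concurrency does not require parallel hardware.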

Different approaches in hardware and software will have different advantages and disadvantages. And in some cases, different approaches may yield similar results in practice. For example, in the CPU world we see AMD competing with many cores with relatively low performance per core, while Intel uses fewer cores with more performance per core. In various scenarios, Intel’s quadcores compete with AMD’s octacores. So there is more than one road that leads to Rome.

Getting back to the Pascal whitepaper, nVidia writes the following:

Compute Preemption is another important new hardware and software feature added to GP100 that allows compute tasks to be preempted at instruction-level granularity, rather than thread block granularity as in prior Maxwell and Kepler GPU architectures. Compute Preemption prevents long-running applications from either monopolizing the system (preventing other applications from running) or timing out. Programmers no longer need to modify their long-running applications to play nicely with other GPU applications. With Compute Preemption in GP100, applications can run as long as needed to process large datasets or wait for various conditions to occur, while scheduled alongside other tasks. For example, both interactive graphics tasks and interactive debuggers can run in concert with long-running compute tasks.

So that is the way nVidia approaches multiple workloads: they can switch between workloads at very fine granularity. This approach bears similarities to time-slicing, and perhaps also SMT, in being able to switch between contexts down to the instruction level. This should lend itself very well to low-latency scenarios with a mostly serial nature. Scheduling can be done just-in-time.
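The practical difference between instruction-level and thread-block-level preemption is latency: how much of the running task must still execute before a switch is honoured. A deliberately simplified model (my own toy formula, with made-up block sizes, not vendor data):

```python
import math

def preemption_latency(request_time, block_size):
    """Number of 'instructions' that still execute between a preemption
    request (arriving after instruction `request_time`) and the actual
    context switch, when switches are only allowed at block boundaries.
    block_size=1 models Pascal's instruction-level preemption; a large
    block_size models coarser thread-block granularity."""
    next_boundary = math.ceil(request_time / block_size) * block_size
    return next_boundary - request_time
```

With instruction-level granularity the switch is immediate; with block granularity a high-priority task (say, a VR frame with a hard deadline) may stall behind the remainder of a long-running block.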

Edit: Recent developments lead me to clarify the above statement. I did not mean to imply that nVidia’s approach is entirely serial, and that only a single task runs at a time. I thought it was common knowledge that nVidia has been able to run multiple concurrent compute tasks on their hardware for years now (introduced on Kepler as ‘Hyper-Q’). However, it seems that many people are now somehow convinced that nVidia’s hardware can only run one task at a time (really? Never tried to run two or more windowed 3D applications at the same time? You should try it sometime; you’ll find it works just fine. Add some compute-enabled stuff, and still it works fine). I am strictly speaking about the scheduling of the tasks here. Because, as you probably know from CPUs, even though you may have multiple cores, you will generally have more processes/threads than cores, and some processes/threads will go idle, waiting for some event to occur. So periodically these processes/threads have to be switched, so that they all receive processing time and idle time is minimized. What I am saying here deals with the approach that nVidia and AMD take in handling this.

AMD, on the other hand, seems to approach it more like a ‘multi-core’ system, where you have multiple ‘asynchronous compute engines’ or ACEs (up to eight currently), each of which processes its own queues of work. This is nice for inherently parallel/concurrent workloads, but is less flexible in terms of scheduling. It’s more of a fire-and-forget approach: once you drop your workload into the queue of a given ACE, it will be executed by that ACE, regardless of what the others are doing. So scheduling seems to be more ahead-of-time (at the high level; the ACEs take care of interleaving the code at the lower level, much like how out-of-order execution works on a conventional CPU).
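The ‘multi-core with independent queues’ model can be sketched as worker threads each draining their own queue (again a toy analogy of my own, not AMD's implementation):

```python
import queue
import threading

def ace_worker(work_queue, log, name):
    """Each 'ACE' drains its own queue, independently of the others."""
    while True:
        job = work_queue.get()
        if job is None:          # shutdown sentinel
            return
        log.append((name, job))  # 'execute' the job

NUM_ACES = 4
queues = [queue.Queue() for _ in range(NUM_ACES)]
log = []
workers = [threading.Thread(target=ace_worker, args=(q, log, f"ACE{i}"))
           for i, q in enumerate(queues)]
for w in workers:
    w.start()

# Fire-and-forget: once a job is dropped into a given queue, that engine
# executes it, regardless of what the other engines are doing.
for job_id in range(8):
    queues[job_id % NUM_ACES].put(f"job{job_id}")

for q in queues:
    q.put(None)
for w in workers:
    w.join()
```

Note the scheduling decision (which queue gets which job) is made up-front at submission time; after that, no engine can steal work from another's queue, which is the ahead-of-time flavour described above.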

Sadly, neither vendor gives any actual details on how they fill and process their queues, so we can only guess at the exact scheduling algorithms and parameters. And until we have a decent collection of software making use of this feature, it’s very difficult to say which approach will be best suited to the real world. Even then, the situation may arise where two equally valid workloads are in widespread use, one favouring one architecture and the other favouring the other, so there may not be a single answer to which architecture is best in practice.

Oh, and one final note on the “Founders Edition” cards. People seem to just call them ‘reference’ cards, and complain that they are expensive. However, these “Founders Edition” cards have an advanced cooler with a vapor chamber system. So it is quite a high-end cooling solution (previously, nVidia only used vapor chambers on the high-end, such as the Titan and 980Ti, not the regular 980 and 970). In most cases, a ‘reference’ card is just a basic card, with a simple cooler that is ‘good enough’, but not very expensive. Third-party designs are generally more expensive, and allow for better cooling/overclocking. The reference card is generally the cheapest option on the market.

In this case however, nVidia has opened up the possibility for third-party designs to come up with cheaper coolers, and deliver cheaper cards with the same performance, but possibly less overclocking potential. At the same time, it will be more difficult for third-party designs to deliver better cooling than the reference cooler, at a similar price. Aside from that, nVidia also claims that the whole card design is a ‘premium’ design, using high-quality components and plenty of headroom for overclocking.

So the “Founders Edition” is a ‘reference card’, but not as we know it. It’s not a case of “this is the simplest/cheapest way to make a reliable videocard, and OEMs can take it from there and improve on it”. Also, some people seem to think that nVidia sells these cards directly, under their own brand, but as far as I know, it’s the OEMs that build and sell these cards, under the “Founders Edition” label. For example, the MSI one, or the Inno3D one.
These can be ordered directly from the nVidia site.

This entry was posted in Direct3D, Hardware news, OpenGL. Bookmark the permalink.

116 Responses to nVidia’s GeForce GTX 1080, and the enigma that is DirectX 12

  1. Redneckerz says:

    ”Secondly, there is conservative rasterization. Maxwell v2 was the first card to give us this new rendering technology. It only supported tier 1 though. Pascal bumps this up to tier 2 support. And there we have the first ‘enigma’ of DirectX 12: for some reason hardly anyone is talking about this cool new rendering feature. It can bump up the level of visual realism another notch, because it allows you to do volumetric rendering on the GPU in a more efficient way (which means more dynamic/physically accurate lighting and less pre-baked lightmaps). Yet, nobody cares.”

    Nobody cares because nobody in games is using it, yet. I have had this conversation with you before and you were so certain it was going to be used in the upcoming months.
    That was June last year.

    I told you back then that game devs were only now moving on from DX9 based standards to DX11. DX11 games were there before obviously, DX9 was always there as a fallback option since last-gen consoles supported a similar featureset. With current-gen consoles doing at minimum DX11, the standards are raised. DX12 is still very much a ”new” thing, hence the slow adoption rate.

    Since only Pascal is apparently supporting the Tier 2 support *for now*, it is not beneficial for game developers to purely focus on Nvidia hardware when AMD is still around. With Polaris, i reckon they bring their featuresets to similar heights. And at around that time, i am talking Q1/Q2 2017, DX12 will start to gain momentum and those features will be exploited.

    Also: In the meantime, and totally unrelated, Nvidia Kepler performance is falling further and further behind comparable AMD cards:
    http://www.neogaf.com/forum/showpost.php?p=151767464&postcount=5671 (Where one asks about evidence supporting this notion)
    http://www.neogaf.com/forum/showpost.php?p=151771229&postcount=5673 (Evidence given)
    http://www.neogaf.com/forum/showpost.php?p=165601331&postcount=183 (Both posts combined in one)

    And the resultant thread about it:
    http://www.neogaf.com/forum/showthread.php?t=1058295
    And now a second thread, which is updated with the latest 1080 chip and also shows Maxwell decreased performance (You will need the first threads to see which games are used): http://www.neogaf.com/forum/showthread.php?t=1220928

    • Scali says:

      Nobody cares because nobody in games is using it, yet.

      Rise of the Tomb Raider is using it. Other games are likely to follow soon (the technology is part of GameWorks).
      Not to mention that the feature can also be used in DX11, via the DX11.3 update.
      So you are wrong, uninformed, and the rest of your comment is meaningless.

      Not to mention the obvious: the exact same non-arguments would hold for async shaders.

      • Redneckerz says:

        ”Rise of Tomb Raider is using it. Other games are likely to follow soon (the technology is part of GameWorks).”

        But that is being done by a patch for PC users, not as a ”native” thing. And if it can be done by DX11.3, why do you pose the question then that nobody cares about it? Clearly, people should be doing that then.

        Also glad that you dont bother to address the rest. Very telling.

        ”So you are wrong, uninformed, and the rest of your comment is meaningless.”

        Ah, the typical Scali blind eye turns its ugly rear end again. So i take it you have no answer to the mysterious Kepler and Maxwell degrading performances. Else you wouldnt call it ”meaningless”.

        It would at the very least been classy if you bothered to address it. If not, then you shouldnt be surprised that people see you as a pro-Nvidia guy and an anti-AMD person, if you arent willing to also see *any* negative news about Nvidia being mentioned.

      • Scali says:

        But that is being done by a patch for PC users, not as a ”native” thing.

        Lolwut? It’s an official update for the game, by the original developer. Sure it’s “native”, whatever that means.

        And if it can be done by DX11.3, why do you pose the question then that nobody cares about it? Clearly, people should be doing that then.

        Doing what? The ‘nobody’ is the average customer/tech-site commenter. I’m not talking about hw/sw developers obviously, because we do care.

        Ah, the typical Scali blind eye turns its ugly rear end again. So i take it you have no answer to the mysterious Kepler and Maxwell degrading performances. Else you wouldnt call it ”meaningless”.

        I have nothing to say about some random benchmarks on some random forums (especially not as a comment that is completely unrelated to this blogpost). If reputable review sites start publishing properly documented, reproducible figures that support your theory, then we can talk.
        Until then, it’s not even news to begin with. It may as well just be AMD fanboys trying to spread rumours to discredit nVidia.

        Don’t forget, I develop D3D software myself. If there are funky things going on in driver updates, I’d be the first to know about it. I haven’t seen any of the claimed dropoff. On the contrary.

  2. Redneckerz says:

    ”Lolwut? It’s an official update for the game, by the original developer. Sure it’s “native”, whatever that means.”

    ”Native” in this context being the game is developed against DX12 directly, not to be introduced by patch.

    ”Doing what? The ‘nobody’ is the average customer/tech-site commenter. I’m not talking about hw/sw developers obviously, because we do care.”

    Have any blog or post by your colleagues that supports that?

    ”If reputable review sites start publishing properly documented, reproducable figures that support your theory, then we can talk.”

    Which is literally what the links in those posts do, they refer to an established benchmark site that is used as a source for these things. So yeah, the theory is supported in the very least.

    ”Until then, it’s not even news to begin with.”

    It is, but if you dont bother reading up on it, then i guess it isnt.

    ”It may as well just be AMD fanboys trying to spread rumours to discredit nVidia.”

    The mentioned site is likely one of the least biased sites you can have on benchmarks so ironically enough, your citation shows more of an apparent bias right here 😉

    • Scali says:

      ”Native” in this context being the game is developed against DX12 directly, not to be introduced by patch.

      How is that even relevant in this context? We were talking about conservative rasterization. In Tomb Raider, this is done in DX11.3, the DX12 version of the game does not support that feature yet.

      Have any blog or post by your colleagues that supports that?

      DX11.3 and DX12 support conservative rasterization. Various GPUs on the market support conservative rasterization. Rise of the Tomb Raider supports conservative rasterization.
      The Codemasters EGO engine supports conservative rasterization: http://www.dsogaming.com/news/codemasters-ego-engine-4-0-supports-dx12-raster-ordered-views-conservative-rasterization/
      And it’s not just nVidia, even Intel talks about it in this presentation: http://www.slideshare.net/IntelSoftware/more-explosions-more-chaos-and-definitely-more-blowing-stuff-up
      See slides 49 and 50.
      I can probably dig up tons of other stuff, but clearly HW and SW devs have been working on this feature. What more ‘evidence’ do you need?

      Which is literally what the links in those posts do, they refer to an established benchmark site that is used as a source for these things. So yeah, the theory is supported in the very least.

      Well no, because these are taking benchmarks from different reviews. They may have used different hardware, different OSes, different drivers, a different collection of games, different versions of the game, different settings etc. You can’t just compare these 1:1 and jump to conclusions like that.
      That’s not bias by the way, that is common sense. Something you seem to lack, else you could have figured this one out before I spoonfed it to you. I guess you just like what you see, and don’t bother to do any critical thinking. Known as confirmation bias.

      • Redneckerz says:

        ”How is that even relevant in this context? We were talking about conservative rasterization.”

        I werent.

        ”I can probably dig up tons of other stuff, but clearly HW and SW devs have been working on this feature. What more ‘evidence’ do you need?”

        Thank you.

        ”Well no, because these are taking benchmarks from different reviews. They may have used different hardware, different OSes, different drivers, different versions of the game, different settings etc. You can’t just compare these 1:1 and jump to conclusions like that.”

        They are RELATIVE numbers, not ABSOLUTE. What you seem to do, is jumping to conclusions, by asserting that it ”Might be AMD fanboys trying to spread rumours to discredit nVidia.”. They arent rumours. Relatively speaking, a theory has arisen (And which those sources back up) that seems to suggest that Nvidia is doing planned obsolescence on their hardware.

        ”I guess you just like what you see, and don’t bother to do any critical thinking. Known as confirmation bias.”

        Gee, i wonder if that same conclusion cant be applied to you, given how you have been banned from most self-respecting hardware sites.

        By the way, you also claimed to be in the DX12 Early Access Program and when asked for evidence, you said: ”Wait till the NDA has been expired and ill show you. I really have those sources, just wait.” So, can you prove now that you were involved there and show us these documents, as you promised?

      • Scali says:

        I werent.

        Yes you were, you just have the habit of losing track of your own discussions. Just scroll up and see. The line of discussion is this:
        1) You quote a passage from this blogpost and claim there aren’t any games using conservative rasterization
        2) I point out that Rise of the Tomb Raider uses conservative rasterization
        3) You start some nonsense about how that doesn’t count because it’s a ‘patch’ instead of being ‘native’.
        4) I say it’s an official patch, so what are you talking about?
        5) You try to move the goalposts from “using conservative rasterization” to “ROTR has DX12 patched on later”
        6) I say that the DX12-mode isn’t relevant to this discussion since the conservative rasterization feature is used in DX11-mode.

        Still about conservative rasterization.

        Thank you.

        What do you mean, “thank you”?
        When are you going to start doing some critical thinking of your own, and look for your own information? Most of what I said was already discussed earlier, or is common knowledge. The rest was easy to google yourself.
        You have to be pretty dumb to just keep throwing questions back at me, to make me do your thinking for you. It says a lot about you that you don’t put any thought or effort in your own responses. Every discussion I’ve had with you has been the same. You just have these preconceived ideas, but no actual knowledge or thinking behind it. You just jump to conclusions, rather than actually researching the topics I talk about.

        They are RELATIVE numbers, not ABSOLUTE.

        Yes, relative numbers of the different benchmarks run during a single review. You can’t compare these numbers between different reviews for a number of reasons, as I already pointed out.

        What you seem to do, is jumping to conclusions, by asserting that it ”Might be AMD fanboys trying to spread rumours to discredit nVidia.”.

        No, I am just doing the same as you’re doing: I’m throwing some crackpot theory out there. But that one clearly went over your head as usual.

        They arent rumours.

        They are a creative interpretation of some unrelated test results, leading to some crackpot theory.

        Gee, i wonder if that same conclusion cant be applied to you, given how you have been banned from most self-respecting hardware sites

        I obviously do critical thinking (you don’t get to where I am if you can’t do critical thinking, and have a thorough understanding of technology, mathematics and logic. Something like 8088 MPH can only be done if you can study material and teach yourself. Those tricks will not be in any books you find, and were not known to any programmers before we made the demo. Yet, the demo exists. It exists because we were not just willing to let others tell us what is possible, but because we gathered our own information, did our own analysis and did a lot of critical thinking to arrive at various conclusions of what the hardware can be made to do). This is not appreciated in some circles.
        Yet you come here, instead of going to your ‘self-respecting hardware sites’. I think that says a lot.

        By the way, you also claimed to be in the DX12 Early Access Program and when asked for evidence, you said: ”Wait till the NDA has been expired and ill show you. I really have those sources, just wait.” So, can you prove now that you were involved there and show us these documents, as you promised?

        I think that would be an exercise in futility. I mean, I could show screenshots, or provide documents, but then you wouldn’t believe they were real. And even if I would manage to prove that they are real, you wouldn’t believe that I got these first-hand, etc… Before we know it, we’d be back at you demanding more information from me than I am willing or legally able to provide. Besides, I don’t care whether you believe me or not.

      • Scali says:

        Relatively speaking, a theory has arisen (And which those sources back up) that seems to suggest that Nvidia is doing planned obsolescence on their hardware.

        Someone already debunked that theory: http://www.neogaf.com/forum/showpost.php?p=193814937&postcount=221
        Clearly, over a 3 year period, nVidia’s drivers only increase performance in games, or worst-case, some games remain at the same performance level.
        I think two of the most telling examples are Dirt 3 CE and Just Cause 2. After the launch of the 900-series, nVidia apparently did something in their drivers that these older games benefit from, even on older cards, and they get a significant boost.

        In general, such threads are cringe-worthy to read. Most people have such an overly simplistic view of how drivers are developed and optimized. In many cases, game-specific optimizations deal with the layer between API and the low-level GPU driver, and do not target any specific GPU. So even if the IHV decides at some point to stop doing specific GPU optimizations for older cards (which would mainly involve shady practices such as shader replacements), these older cards would still benefit from the more generic optimizations. They would also benefit from improvements in the shader compiler, for example, or from general improvements in memory management inside the driver.

        The only way for nVidia to stop these improvements from trickling down to older cards is to remove support for these cards from new releases of their unified drivers. Which they don’t.

        I think this blogpost is somewhat related to nVidia’s care for older cards: https://scalibq.wordpress.com/2013/12/01/nvidia-stability-issues-on-geforce-400500-series/
        The 400-series was some 3 years old when some nasty bug in the power management in newer drivers was causing stability issues.
        nVidia cared enough to thoroughly investigate the issue, and even asked users with troublesome cards to send their cards to nVidia (the issues were likely caused by custom BIOSes on factory-OC’ed cards). Eventually they found the bug, sent us some beta drivers to test, and after we reported that the issue was solved, they incorporated it in a new WHQL release.
        Far from planned obsolescence.

  3. Redneckerz says:

    ”Yes you were, you just have the habit of losing track of your own discussions. ”

    Cant be NOT condescending for once can you. Alright, giving you the point here.

    ”What do you mean, “thank you”?”

    Well, basically what any normal person means by it: Thank you. For the evidence you lined up there.

    ”You have to be pretty dumb to just keep throwing questions back at me, to make me do your thinking for you.”

    And you think being insultive makes you a great discussion partner?

    ”Every discussion I’ve had with you has been the same.”

    Likewise. But unlike you, i dont have to retort to my own blog to hear myself talking.

    ”Yes, relative numbers of the different benchmarks run during a single review. You can’t compare these numbers between different reviews for a number of reasons, as I already pointed out.”

    The general consensus is that there are strange anomalies regarding Nvidia performance on older cards. If there werent any strange anomalies, such theory wouldnt have sprung. So there obviously is something there that fuels that theory, hence why i say relative numbers and not absolute.

    ”No, I am just doing the same as you’re doing: I’m throwing some crackpot theory out there.”

    It would be funny if it werent so ironically true in your case. You go at lengths to discredit one party and dismiss any critique about the other party. People from both sides called you on it, and the result is always the same – THEY are wrong, YOU are right.

    ”But that one clearly went over your head as usual.”

    If thats what you want to believe, thats alright. Id prefer what .oisyn said about you. His description would make much sense given your responses.

    ”I obviously do critical thinking. This is not appreciated in some circles.”

    ”Some circles” being every self-respecting hardware/programming site out there. Stay classy.

    ”Yet you come here, instead of going to your ‘self-respecting hardware sites’. I think that says a lot.”

    Does it? Do you assume that i think there is some ”magic truth” to be found here that only Scali knows about?

    ”I think that would be an exercise in futility.”

    I think you are just digging yourself in already. Last time you couldnt show them because it was under NDA, but ”I have the sources, i swear. Just wait.”. Well, i waited. And now im asking you to make those documents, those sources you claimed about, public. And what do you do? Say ”It would be futile to do so”. That really isnt a good look to have.

    ”I mean, I could show screenshots, or provide documents, but then you wouldn’t believe they were real.”

    And the digging starts. No, if you provided documents (as you promised), i would totally believe them.

    ”And even if I would manage to prove that they are real, you wouldn’t believe that I got these first-hand, etc…”

    And the assumptions continue. You know, if you are going to be digging yourself in already, perhaps you shouldnt have made the claim in the first place. Because i waited, and now when i ask you for release, you dig yourself in.

    ”Besides, I don’t care whether you believe me or not.”

    Credibility is a delicate thing to have. It can break in an instant. By your answers above, why should anyone believe you to begin with? You claim you never deleted any blogs – I gave you the URL to one. When asked to release the NDA documents, since you said last year that i should wait, and that you had the sources, you go on wild assumptions and state ”It would be futile to do so. Besides that, i dont care if you believe me or not”.

    Why should anyone believe you if you are purposefully lying about things and arent willing to back up your own promises?

    • Scali says:

      And you think being insultive makes you a great discussion partner?

      Firstly, my aim was never to be a great discussion partner to you. You came here with a reply to my blog, with various insults and insinuations. I merely took the courtesy to answer your post, instead of just trashing it and blocking you.
      You might want to ask yourself that question first, if your aim is to have a good discussion. Because clearly you are not approaching it in a very productive way.

      The general consensus is that there are strange anomalies regarding Nvidia performance on older cards. If there weren’t any strange anomalies, such a theory wouldn’t have sprung up. So there obviously is something there that fuels that theory, which is why I say relative numbers and not absolute.

      Why are you telling me this? I am obviously not interested (you’re not very good at taking hints, are you?). If you think there is something to it, I suggest you take this to the various review sites out there, and ask them if they are willing to investigate the issue.

      Does it? Do you assume that I think there is some ”magic truth” to be found here that only Scali knows about?

      It apparently means that you still read my blogs, and feel the need to repeatedly reply to them (I don’t know what for really, our levels of knowledge are so far apart that meaningful discussions aren’t possible anyway. And if that wasn’t a problem, then your extremely unpleasant demeanor would be. Ever heard of a ‘leading question’? Pretty much everything you say is a leading question).

      And the assumptions continue. You know, if you are going to be digging yourself in already, perhaps you shouldn’t have made the claim in the first place. Because I waited, and now when I ask you to release them, you dig yourself in.

      I actually did post a screenshot (and one that actually proves one thing I claimed earlier: as of October 2014, AMD did not have any drivers for DX12 available, while nVidia and Intel did). But you act as if it’s not even there (you could at least have checked to see if that forum actually exists). I see no need to provide any further proof.
      Also, I never made any promises. I don’t make promises to the likes of you.

      Why should anyone believe you if you are purposefully lying about things and aren’t willing to back up your own promises?

      Foot-in-mouth disease again.

  4. qwerty says:

    Hey Scali, been waiting for an update.

    Regarding the Founders Edition: the vapor chamber apparently comes only with the 1080. The lower-tier products like the 1070 come with the regular blower-type cooler, and the price difference between the 1070 FE and the non-FE MSRP is a bit smaller ($70, compared to the $100 for the 1080).

    Secondly, I’m sorry but watching an AMD drone debate (above) is like watching a dog chase its own tail – amusing and depressing at the same time. Talk about a mind job by AMD’s marketing, eh? This man obviously doesn’t have an IT related job, so whatever time he gets out of making sandwiches or fries (and running Ashes of Singularity beta :P), he spends on checking your references, talking to people you mentioned in your old entries, etc. Freaky man, just freaky.

    • Scali says:

      Regarding the Founders Edition: the vapor chamber apparently comes only with the 1080. The lower-tier products like the 1070 come with the regular blower-type cooler, and the price difference between the 1070 FE and the non-FE MSRP is a bit smaller ($70, compared to the $100 for the 1080).

      Do you have a link to some good pictures and/or info about the 1070? So far I’ve only seen the 1080.
      Edit: At the bottom here, they mention the possible lack of vapor chamber on 1070: http://www.anandtech.com/show/10336/nvidia-posts-full-geforce-gtx-1070-specs
      But the ‘pictures’ still seem CGI, not the real 1070 card.

      Secondly, I’m sorry but watching an AMD drone debate (above) is like watching a dog chase its own tail – amusing and depressing at the same time. Talk about a mind job by AMD’s marketing, eh? This man obviously doesn’t have an IT related job, so whatever time he gets out of making sandwiches or fries (and running Ashes of Singularity beta :P), he spends on checking your references, talking to people you mentioned in your old entries, etc. Freaky man, just freaky.

      Yes, he basically had only one ‘technical’ argument when he started this ‘discussion’: namely, in his AMD-sheltered world, he had apparently not seen any games, engines, GPUs or APIs with support for conservative rasterization yet.
      Once I pointed out beyond a shadow of a doubt that these do exist, it basically only boiled down to personal attacks and crackpot anti-nVidia theories, completely unrelated to the content of this blog post.
      Also telling that he does not comment on the screenshot from the DX12 Early Access forum (which neatly shows AMD being late to the DX12 party). He was probably convinced I was bluffing. What I said couldn’t possibly be true because cognitive dissonance.

      • qwerty says:

        “Do you have a link to some good pictures and/or info about the 1070?”

        Sorry, but so far Nvidia is being somewhat tight-lipped beyond the basic specs ( http://www.geforce.com/hardware/10series/geforce-gtx-1070 ), and there is just a lot of speculation and CGI. I guess they’re waiting for the 1080 fanfare to quiet down before they start the next party. Marketing, right?

        “What I said couldn’t possibly be true because cognitive dissonance.”

        lol

  5. Redneckerz says:

    ”Firstly, my aim was never to be a great discussion partner to you.”

    That much is true. And I like how you later go on making even more assumptions against qwerty. For what it’s worth, I was going to address that picture in this post, but you just HAD to assume that I purposefully didn’t read it and all that.

    ”Why are you telling me this? I am obviously not interested (you’re not very good at taking hints, are you?). If you think there is something to it, I suggest you take this to the various review sites out there, and ask them if they are willing to investigate the issue.”

    Of course you aren’t interested… but as soon as AMD slips up, you are the first in line to cry wolf. And no, that isn’t because I don’t have AMD hardware (unless you count the X360 Xenos as one), and I like Nvidia for a lot of different things really, like the Ansel announcement.

    ”It apparently means that you still read my blogs, and feel the need to repeatedly reply to them”

    Much assumption, once more. I just saw this latest blog and spotted a few things I wanted to comment about. But sure, if that means I really read all of your blogs, have at it.

    ”I actually did post a screenshot but you act as if it’s not even there (you could at least have checked to see if that forum actually exists).”

    I didn’t comment on it at first because, like you say, I checked to see if that forum actually exists (which it obviously does). I found the URL kinda curious, but there was nothing to it. But it’s great that you make the assumption I wasn’t going to check up on it, let alone that I was going to ignore it on purpose.

    ”I see no need to provide any further proof.”

    Well, I didn’t ask for a screenshot, I asked for the NDA documents that you couldn’t publish last year because they were under NDA. You said you had those sources, so that’s why I asked if you could publish them. Showing a screenshot of the forum is nice since it shows at least that, but like I said – I asked you to make the NDA documents public.

    ”Yes, he basically had only one ‘technical’ argument when he started this ‘discussion’: namely, in his AMD-sheltered world, he had apparently not seen any games, engines, GPUs or APIs with support for conservative rasterization yet.”

    Quite childish of you to play the ”He is an AMD-shill” card again since that is pretty much your tactic all the time. I don’t think I’d need to bring back the whole UE4 and VXGI debate again, where you were literally caught out on selective reading, do I?

    @ Qwerty:
    ”Secondly, I’m sorry but watching an AMD drone debate (above) is like watching a dog chase its own tail – amusing and depressing at the same time. Talk about a mind job by AMD’s marketing, eh?”

    Please don’t make the assumption that I’m an AMD mercenary-for-hire – I am not. I firmly believe both parties have their ups and downs, as is to be expected when they are each other’s competitors. What Scali does is tell only one side of every story – one filled with a lot of guesswork, selective choice of words, and assumptions of his own. That would all be fine if Scali were able to take his losses, but that’s where it goes wrong – no matter if it’s a Nixxes programmer or anybody else that does the correcting, the end result is that ”they” are always wrong and Scali is always right. Regardless of what I think about him, I think that is just an unhealthy attribute to have. If you go to lengths to discredit one party but don’t even do nearly the same for the other party, it is difficult not to see an agenda at play.

    ”This man obviously doesn’t have an IT related job, so whatever time he gets out of making sandwiches or fries (and running Ashes of Singularity beta :P), he spends on checking your references, talking to people you mentioned in your old entries, etc. Freaky man, just freaky.”

    I don’t even run that game from Oxide. Where I do agree with Scali is that you can’t use a single game (like Ashes) to benchmark all the cards, especially when that benchmark is heavily optimised for one architecture (GCN). I mean, I can see why you would do that if you targeted a console release, since that’s the architecture there, but for a PC-only release like Ashes, it makes no sense.

    Also @ Scali: what was the name of that company you had?

    • Scali says:

      For what it’s worth, I was going to address that picture in this post, but you just HAD to assume that I purposefully didn’t read it and all that.

      Your reply made no indication of that, you still explicitly said I was lying about being in the DX12 Early Access program. Even though the screenshot should at least make you withhold judgement on that until further investigation.

      Much assumption, once more. I just saw this latest blog and spotted a few things I wanted to comment about. But sure, if that means I really read all of your blogs, have at it.

      You clearly read this blog post. But in one of your replies you also accused me of deleting another blog post. How would you even know that the blog post was there, if you hadn’t visited my blog before?

      Well, I didn’t ask for a screenshot, I asked for the NDA documents that you couldn’t publish last year because they were under NDA. You said you had those sources, so that’s why I asked if you could publish them. Showing a screenshot of the forum is nice since it shows at least that, but like I said – I asked you to make the NDA documents public.

      I don’t see why you’re so hung up on this. The screenshot shows I have access to the forum where said documents are published. Ergo, I have access to these documents.
      Publishing these documents however is another thing. You are basically asking me to pull a WikiLeaks. The NDA is about the *contents* of the documents. I am free to discuss those now (and that is all I said), to a certain degree. Then again, most of that is already published on MSDN anyway, so what’s the point? The documents themselves are however still confidential as far as I know, and I am not going to make them public. Especially for such a lousy reason as some retarded AMD fanboi demanding them. What documents do you even want, and what did you think you would be able to do with them?

      I don’t think I’d need to bring back the whole UE4 and VXGI debate again, where you were literally caught out on selective reading, do I?

      Please do! This is going to be good! *Grabs popcorn*

      Please don’t make the assumption that I’m an AMD mercenary-for-hire – I am not.

      We’re not as stupid as you are. You clearly have an agenda. In your very first post here, you came out with this crackpot anti-nVidia theory of ‘planned obsolescence’.
      Either you’re really that stupid, or someone is paying you to spread that nonsense.

      Other than that, you’ve only been trying to insult me, and try to make me out as a fanboy, and make false claims that I would have made wrong statements. Problem is, the things I say always check out, even if they happen to be unfavourable to AMD (such as AMD not having DX12 drivers ready).
      And yes, .oisyn may work at Nixxes, but he’s not exactly the bigshot CPU/GPU/API expert there. The guy at Nixxes who is, is a friend of mine. We both come from the Amiga, and are in the same demo group. He and I are ‘in the same league’, you could say.
      Anyway, you can see .oisyn’s lack of experience and understanding in his actions: he was all about Mantle. You may think that Nixxes was all about Mantle and AMD hardware. But Nixxes is not. In a poetic twist of irony, Nixxes is also the company responsible for Rise of the Tomb Raider, with the conservative raster patch, using nVidia’s technology, and are even featured on nVidia’s site: http://www.nixxes.com/nixxes/newsdetail/20-uk/news/186-hdr
      So .oisyn’s youthful over-enthusiasm does not necessarily reflect Nixxes’ stance. Nixxes clearly didn’t see Mantle as the be-all-end-all API, but just saw a nice business opportunity. Just as they saw a nice business opportunity with nVidia for Tomb Raider. This is what I tried to tell .oisyn, but apparently he preferred to find out the hard way, and make a fool of himself in public.

      People like you would side with .oisyn, because of ‘argument from authority’ (the guy works at Nixxes). But in retrospect, despite him working at Nixxes, Nixxes’ own actions eventually proved I was right and he was wrong about Mantle/DX12/conservative raster.

  6. qwerty says:

    @Redneckerz
    “Please don’t make the assumption that I’m an AMD mercenary-for-hire – I am not.”

    You would wish that I made that assumption. Mercenaries get paid for their job, marketing victims like you do their work for free out of some misplaced sense of brand loyalty.

    “What Scali does is tell only one side of every story – one filled with a lot of guesswork, selective choice of words, and assumptions of his own.”

    So maybe he’s wrong on every count, what is it to you? For you to take so much time out of your life trying to “correct” him. Even if Nvidia pays him to be nice to them (not that they need to, considering their market share), again, so what? Why do you feel compelled to thwart his “evil plans”? What are you? The online version of Batman? What other brands do you spend your personal time defending on the internet? Adidas? Pepsi? BMW? Get a life mate, before your time on this planet runs out.

    • Scali says:

      So maybe he’s wrong on every count, what is it to you?

      You’d think that he wouldn’t have this much trouble proving me wrong, if what he claims is true 🙂

      Anyway, he basically pulls my statements out of context (he’s very good at that). I don’t just go out and say “Oh, nVidia is good at this-and-that”. No, let me paint how the discussion actually went:

      1) Some news item appears where DX12 featurelevels are disclosed

      2) Discussion ensues over current and future GPUs from various vendors, and what features they do or do not support

      3) At some point, the discussion turns towards DX12_1, which as it happens, only nVidia supported at that time (or at least, as far as we knew. It later became known that Intel’s Skylake supports it as well: https://software.intel.com/sites/default/files/managed/22/6d/6th_gen_graphics_api_dev_guide_dec_2015.pdf).

      4) AMD fanboys chime in, pointing to consoles, and claiming how PC games somehow HAVE to be designed with consoles as the lowest common denominator, and no extra features can be added

      5) I point out that historically this has rarely been the case (e.g. DX10/DX11 flourished on the PC platform, in a time when Xbox 360 and PS3 were only DX9-level), and that developers could certainly add the DX12_1 features to the PC version of the game as an optional extra, as they have been doing for years. I further point out that nVidia will likely be promoting these features via the TWIMTBP/GameWorks program, as they have done in the past with other special features, such as PhysX and tessellation.

      6) AMD fanboys say I’m wrong

      7) Games turn up with DX12_1 features through GameWorks.

      8) AMD fanboys say I’m wrong

      At no point did I come out and promote nVidia. I was just talking about the new DX12 features, and corrected some false information spread by AMD fanboys. Wasn’t trying to be pro-nVidia (and technically I wasn’t, since Intel also supports DX12_1, and although they don’t have something like GameWorks, they do work e.g. with Codemasters to get their hardware supported). Just trying to set the record straight about DX12.

      • qwerty says:

        @Scali

        Of course man, I know the history. If I had personally found you to be dishonest, I wouldn’t visit here. I was merely trying to point out the futility of this guy’s obsession, to prove you wrong without any logical analysis of his own arguments. Obsessive behavior is the telltale mark of a fanboy.

        The problem with the regular tech sites (AnandTech, Tom’s, Guru3D, etc.) is that for the sake of ad money and getting review samples, even when they are not outright publishing marketing brochures as featured articles, they tend to use very diplomatic and “padded” language. When their readers come across blogs like yours, the lack of sugar-coating lets them taste the bitter truth, and their tender fan-feelings get hurt. Hey, no one said honesty comes cheap. 😉

      • Scali says:

        Yup, I suppose that is true. I don’t get any ad-money, and I don’t review products, so I don’t have any interests there.
        I also write from a different mindset. I discuss things from a developer’s point-of-view, not from a consumer/gamer point-of-view.
        So where the average review site would go “Okay, so they say it has simultaneous multiple projections, but we don’t have any games supporting it”, I see a new feature like that, and I immediately think of ways that I could use this in my own projects.

        It’s a shame things turned this way though. Back in the 80s, computer magazines went way more in-depth with the hardware. Some of the authors were developers themselves, and part of the magazine was about programming, often programming hardware directly in assembly.
        Check out the “Bonus material” part in this recent article for example: https://scalibq.wordpress.com/2015/12/15/pc-compatibility-its-all-relative/
        I think the average tech journalist today has no clue anymore what goes on under the hood of their CPU or GPU. It’s just some black box that they can benchmark, but everything is just superficial. They wouldn’t be able to write their own benchmarks, and target specifics of a new architecture.

    • Redneckerz says:

      ”You would wish that I made that assumption. Mercenaries get paid for their job, marketing victims like you do their work for free out of some misplaced sense of brand loyalty.”

      How is it brand loyalty when the very PC I am typing this on has an Nvidia GPU? (An incredibly crappy one, at that, but still, it’s Nvidia).

      ”So maybe he’s wrong on every count, what is it to you?”

      That it’s kinda scary that the people who ”agree” with this dude only question one side of the story, and not the other side. Because you would know that Scali would run with it if AMD had some kind of performance defect in their hardware. Do you ever read anyone up here with a different opinion that Scali can actually agree with? Do you ever question why he gets banned from every self-respecting hardware site? He claims it’s because they don’t ”get” him; all those sites say it’s because ”He appears to be knowledgeable, but it’s just mostly unsourced assumptions combined with a typical anti-AMD dislike”. Hell, even search this very blog with the keyword ”Fanboy”. 95% is about discrediting AMD, and the one blog that isn’t still has him saying that there are ”AMD fanboys” out there.

      ”For you to take so much time out of your life trying to “correct” him. Even if Nvidia pays him to be nice to them (not that they need to, considering their market share), again, so what?”

      It’s still way less time than the time he spends discrediting AMD. Why aren’t you questioning that, but feel the need to attack my POV? ”It’s okay if Scali does it, but you, you nonsense stranger, you don’t get a free pass”?

      ”Why do you feel compelled to thwart his “evil plans”? What are you? The online version of Batman? What other brands do you spend your personal time defending on the internet? Adidas? Pepsi? BMW? Get a life mate, before your time on this planet runs out.”

      Why do you feel compelled to defend his evil plans? What are you? The online version of Rita Repulsa? No honestly, what is it that makes you think ”I agree with what this man says and I turn a blind eye when a critical POV appears on the scene”?

      • Scali says:

        That it’s kinda scary that the people who ”agree” with this dude only question one side of the story, and not the other side.

        That’s rich, coming from you.
        Even if we only look at your responses here, people can already see that:

        1) In your first post, you make claims that nobody is going to support conservative rasterization. Even *after* a game with support for that has already been on the market for a few months. So you clearly only see the AMD-fanboy-distorted view of the world, where DX12_1 features are irrelevant, because AMD doesn’t support them. You don’t even bother to check the facts, and see that nVidia is indeed using GameWorks to put DX12_1 features in games (exactly as I said).

        2) In your first post, you link to some forum posts that present some theory. You present it as an absolute truth, even though if you read through the forum posts, there are various people discrediting the theory, with very good arguments and empirical data. Again, you only see the AMD-fanboy-distorted view, not the arguments and data from the other side.

        It’s obvious that all your information comes via some AMD-filter. And for people like you, indeed, many things I say are completely unheard of (literally), because they are just filtered away. That doesn’t mean they aren’t there.
        The result is that people like you derail every discussion. That doesn’t make you right, and it certainly doesn’t make me wrong.

      • qwerty says:

        “How is it brand loyalty when the very PC I am typing this on has an Nvidia GPU? (An incredibly crappy one, at that, but still, it’s Nvidia).”

        Considering the arguments taken from the small book that you have typed so far in this comment section, the details of your personal hardware are irrelevant.

        “Because you would know that Scali would run with it if AMD had some kind of performance defect in their hardware.”

        He has criticized Nvidia and Intel in the past as well. Perhaps he has written about fewer of their faults than AMD’s, because there actually were fewer faults to find? The market seems to agree with that hypothesis, at least.

        “Do you ever read anyone up here with a different opinion that Scali can actually agree with?”

        Okay, first of all you need to understand what a blog is. It is not like a regular post in a forum. When someone has made a blog post, they probably took a long time to think and research and scrutinize it. Sometimes it may be based on the years of experience the writer has in that field. It would be highly unlikely that a poorly thought out off-the-cuff remark would quickly change their mind.

        “It’s still way less time than the time he spends discrediting AMD. Why aren’t you questioning that, but feel the need to attack my POV?”

        I read his arguments and yours, and questioned only that which appeared questionable.

        “Why do you feel compelled to defend his evil plans? What are you? The online version of Rita Repulsa?”

        lol, I don’t even know who that is. Also, you have an irrationally high opinion of your “repartee”, if you think someone needs to defend the OP from it. No man, I just get irritated when this sort of stuff spills over here from the usual forums.

      • Scali says:

        Perhaps he has written about fewer of their faults than AMD’s, because there actually were fewer faults to find?

        Yes, perhaps the world just isn’t fair!
        As you can read from my latest blogpost, I aim to be neutral and objective, not politically correct.
        I mean, if you want to start a discussion on e.g. DX12_1, there is no getting around the fact that nVidia and Intel support it, and AMD does not.
        Somehow he is trying to blame this state of affairs on me. I refuse to sugarcoat it and make some politically correct nonsensical statement such as “Okay, so AMD doesn’t have DX12_1, but that’s okay, their GPUs are special too! We are all happy special snowflake GPUs!”

        Okay, first of all you need to understand what a blog is. It is not like a regular post in a forum. When someone has made a blog post, they probably took a long time to think and research and scrutinize it. Sometimes it may be based on the years of experience the writer has in that field. It would be highly unlikely that a poorly thought out off-the-cuff remark would quickly change their mind.

        Exactly. But let’s get back to the original question. Why do there need to be different opinions in the first place, and why would I have to agree with such statements? This all comes back to the nonsensical special snowflake-drivel again.
        I write about technology, not about opinions. Most of what I write doesn’t deal with what I think about a certain product or company, but with what the product actually means technologically, what it is capable of etc.
        Such as how I didn’t just give some random opinion on async compute like “AMD does it better” (as you read on most forums), but I actually explained the whole idea behind the API, and drew up some analogies to try and demonstrate how nVidia and AMD differ, and what those differences may mean in practical situations. In the end I don’t even bother to ‘declare a winner’, but I suggest we sit on the fence for a while to see where the industry will be going with this new feature. So in a way I don’t even give an opinion at all.
        This is just on a completely different level.

  7. Redneckerz says:

    ”Your reply made no indication of that, you still explicitly said I was lying about being in the DX12 Early Access program. Even though the screenshot should at least make you withhold judgement on that until further investigation.”

    You would have preferred it if i instantly dismissed that screenshot then?

    ”You clearly read this blog post. But in one of your replies you also accused me of deleting another blog post. How would you even know that the blog post was there, if you hadn’t visited my blog before?”

    Does that mean I read all of your blogs? No, not quite. But tell me, you didn’t bring that blog back up just to save face, did you?

    ”I don’t see why you’re so hung-up on this. The screenshot shows I have access to the forum where said documents are published. Ergo, I have access to these documents.”

    So publish them. You now make it sound like it’s a ”big” thing that you showed the forum where you got that from (since you didn’t want to refer to it last year), but you could have EASILY mentioned that forum back then, since Heise already linked to that very same forum in March last year – http://www.heise.de/newsticker/meldung/GDC-DirectX-12-beschleunigt-Spiele-um-20-Prozent-2570450.html.

    It would at least have made your point a lot stronger back then if you had done that.

    ”Then again, most of that is already published on MSDN anyway, so what’s the point?”

    So link to those then?

    ”The documents themselves are however still confidential as far as I know, and I am not going to make them public.”

    You literally said last year that you would show the documents if the NDA expired, and you were constantly referring to these documents. For you to now say ”They are on MSDN anyway” (without providing evidence of such) and ”The documents are still confidential” is just twisting. Back then I already said that you kept making claims that you had documents, but weren’t able to prove that. ”It’s under the NDA, those documents”, was your answer back then. And now that the NDA has expired, you still don’t show those documents, now claiming that ”it’s still confidential”.

    Like I said last year: stop making claims if you can’t back them up and consistently have to fall back on secondary reasons (it’s under the NDA, it’s on MSDN already, it’s still confidential) to not make them public. Mind you: you were the one that constantly kept claiming you had these documents and that you were going to show them after the NDA had expired.

    ”Especially for such a lousy reason as some retarded AMD fanboi demanding them.”

    For someone who is so bent on saying he has those documents, and that he knows more about DX12 than everyone else because of said documents, you are awfully cagey about releasing them, citing the reasons above.

    ”What documents do you even want, and what did you think you would be able to do with them?”

    The documents you consistently referred to. Again, you made the claim that you had these documents and that you were going to show them after the NDA expired, not me.

    ”Please do! This is going to be good! *Grabs popcorn*”

    Well, you are still on Tweakers, no? I’ll gladly continue that discussion there. Or were you booed off the site after harassing enough people about not ”being taken seriously”, while saying ”Those simpleminded Tweaker people don’t understand what I am talking about. I was right in what I predicted, as always”, right?

    ”We’re not as stupid as you are.”

    Who is ”we” here? You don’t have a second personality, do you?

    ”You clearly have an agenda. In your very first post here, you came out with this crackpot anti-nVidia theory of ‘planned obsolescence’.”

    Because relative numbers on benchmarks that suggest planned obsolescence are obviously ”crackpot theory” right there. It’s evidenced. Since you aren’t ”interested” in investigating that theory for that matter (of course you wouldn’t be, it’s a negative theory about Nvidia), you call it ”crackpot theory” right there. But if it were AMD having this theory/issue of planned obsolescence, you would be the first one to run with it.

    ”Either you’re really that stupid, or someone is paying you to spread that nonsense.”

    You aren’t going to answer what the name of the company is that you have? You have made multiple references to it in the past, but nothing is to be found of it. If it’s a company that works on state-of-the-art tech, one would at least be able to find it on the web. Unless nothing about it is true and it’s just a cloak to appear knowledgeable.

    ”make false claims that I would have made wrong statements.”

    If they were false claims, you would have no problem proving them wrong (by showing documents, for example). An expert isn’t made by claims alone, but by the proof he or she supplies.

    ”Problem is, the things I say always check out, even if they happen to be unfavourable to AMD (such as AMD not having DX12 drivers ready).”

    Which is why the rest of the Interweb is wrong, I suppose.

    ”And yes, .oisyn may work at Nixxes, but he’s not exactly the bigshot CPU/GPU/API expert there. The guy at Nixxes who is, is a friend of mine. We both come from the Amiga, and are in the same demo group. He and I are ‘in the same league’, you could say.”

    So what name does that ”guy at Nixxes” go by, then?

    ”Anyway, you can see .oisyn’s lack of experience and understanding in his actions: he was all about Mantle. ”

    Yet he ships games on consoles and PC, and all you have is this unfindable company with no publicly searchable accomplishments.

    • Scali says:

      So publish them. You now make it sound like it's a ”big” thing that you showed the forum where you got that from (since you didn't want to refer to it last year), but you could have EASILY mentioned that forum back then, since already in March last year, Heise linked to that very same forum – http://www.heise.de/newsticker/meldung/GDC-DirectX-12-beschleunigt-Spiele-um-20-Prozent-2570450.html.

      At least it would have made your point a lot stronger back then if you had done that.

      The forum URL wasn’t exactly a secret, so it would prove nothing if I gave it to you. You could have found it yourself, as you demonstrated.
      Showing a screenshot of having access to the forum would violate the NDA however, so that was not an option.

      So link to those then?

      Again, you can find these yourself on MSDN, what’s the point of me linking to it?

      You literally said last year that you would show the documents if the NDA expired, and you were constantly referring to these documents.

      No, I did not say I would show the documents.
      And it should be obvious why not.
      I said I could discuss their contents if the NDA expired.

      For you to now say ”They are on MSDN anyway” (without providing evidence of such) and ”The documents are still confidential” is just twisting things.

      No, the preliminary versions of the documents I have are confidential, the released versions are not, and are on MSDN. I link to them all the time, to point out DX12 features, specs etc. I did so in this very blogpost for example, see the part on asynchronous compute.
      You’re just looking like a complete idiot asking me to link to MSDN, when:
      1) I obviously already do so on a regular basis
      2) MSDN is a publicly accessible resource, and anything is super-easy to find by just googling eg “MSDN Asynchronous Compute” etc.

      For someone who is so bent on saying he has those documents, and that he knows more about DX12 than everyone else because of said documents, you are awfully cagey about releasing them, citing the reasons above.

      I still don’t see what the point of releasing them would be.
      However, what you *could* do… and here's just a crazy thought… is take some things I claimed back then, and then look them up in the public MSDN documentation. If what I said back then checks out with the DX12 documentation that is out now, then either I made some really lucky guesses, or I actually had access to these documents (and was able to understand their contents) before they were released.
      But dangit, here I am doing the thinking for you again.

      You see, the problem with fact-checking is that there’s actually some work and effort involved. If you want to verify my claims (or refute them), you’ll have to put in the work first.

      The documents you consistently referred to.

      You’ll have to be more specific. What is it that you’re looking for in these documents?

      Well, you are still on Tweakers, no? I'll gladly continue that discussion there.

      Now there’s a cop-out if I ever saw one.

      Because relative numbers on benchmarks that suggest planned obsolescence are obviously a ”crackpot theory”, right? It's evidenced. Since you aren't ”interested” in investigating that theory (of course you wouldn't be; it's a negative theory about Nvidia), you call it a ”crackpot theory”, just like that. But if AMD had this theory/issue of planned obsolescence, you would be the first one to run with it.

      Actually no.
      First of all, it’s not my fault that the crackpot theories tend to come from the AMD-camp, not the nVidia-camp. I mean, case in point: You come here and post that crackpot theory of nVidia’s planned obsolescence on my blog.
      It’s not like I went out looking for it.
      Secondly, if the shoe is on the other foot, I certainly do not run with it. Case in point: https://scalibq.wordpress.com/2010/05/29/the-moment-many-of-you-have-been-waiting-for-it%e2%80%99s-the-nvidia-fanboys%e2%80%99-turn/

      I could have run with the theory that AMD’s texturing is fake/broken/whatever, but I didn’t. I pointed out the facts, and explained the technology and specs behind texture-filtering, and the verdict happened to be in AMD’s favour.

      You aren't going to answer what the name of your company is? You have made multiple references to it in the past, but nothing can be found about it. If it's a company that works on state-of-the-art tech, one would at least be able to find it on the web. Unless nothing about it is true and it's just a cloak to appear knowledgeable.

      Well, that company got me into the DX12 Early Access program, apparently.
      But no, I certainly do not plan to associate myself or this blog with my company. Before you know it, people will not just attack me personally, but also my company, its products and associates.

      Which is why the rest of the Interweb is wrong, I suppose.

      Sadly, yes. Case in point: see the above example of games with conservative rasterization/GameWorks being released.
      The Interwebs claimed that would never happen, but it did.

      So by what name does that ”guy at Nixxes” go, then?

      I respect his privacy. He doesn’t go out on the internet as “Hey, I’m that guy from Nixxes” like eg .oisyn does. He has his reasons, similar to my own, I suppose.
      If you ask .oisyn, he’ll know who I mean, and he will confirm it.

      • Dodecahedron says:

        There actually are some flaws in AMD’s texture filtering. Perhaps the best known are ‘discontinuities’ in textures (banding). They were first addressed in the Radeon 6000 series and then in Fiji, but the problem is still there to some extent. For instance, here are some images produced using D3D AF-tester:



        I have constructed the images so that the left side is taken from AMD Tahiti, the right side represents AMD Fiji. Although there is less banding with the newer GPU, many of the abrupt transitions are still present.

        There is also the problem of texture aliasing (shimmering). In most cases there is not much difference between a Geforce and a Radeon, but sometimes you get this:

        The image is taken from Assassin’s Creed III. In motion the carpet shimmers badly on a Radeon but looks rather acceptable on a Geforce (Fermi).

        I don’t have enough experience with Geforce cards to say whether they or the Radeons filter textures better in general, but so far I prefer the Geforce.
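        The shimmering described above is classic texture undersampling: with point sampling, each screen pixel reads a single texel of a fine pattern, so a sub-texel shift of the view can flip many pixels between black and white from frame to frame, while averaging the whole texel footprint (roughly what mipmapping approximates) stays stable. A minimal sketch in plain Python, with illustrative numbers only (not any vendor's actual filtering):

```python
def checker(x, y):
    """1D-ish black/white checkerboard texture: texel (x, y) is 0.0 or 1.0."""
    return float((x + y) % 2)

def point_sample(offset, step=7, n=8):
    """One texel per screen pixel, striding 'step' texels, view shifted by 'offset'.
    A 1-texel shift of 'offset' flips every sample: this is the shimmering."""
    return [checker(offset + i * step, 0) for i in range(n)]

def footprint_average(offset, step=7, n=8):
    """Average the 'step'-texel footprint each pixel covers (mipmap-like).
    The result barely changes with 'offset': no shimmering."""
    out = []
    for i in range(n):
        base = offset + i * step
        out.append(sum(checker(base + k, 0) for k in range(step)) / step)
    return out
```

Shifting the view by one texel completely inverts the point-sampled row, while every footprint-averaged value stays close to mid-gray regardless of the shift.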

      • Scali says:

        There actually are some flaws in AMD’s texture filtering.

        The point I was trying to get across in that blog is that anisotropic texture filtering is flawed by definition.
        And yes, there is always room for improvement.
        However, it is NOT true that AMD’s texture filtering did not meet the D3D/OpenGL specifications, as claimed.
        They were operating perfectly within spec (of course there are some variables at play there as well, which can be tuned by the driver, eg what the AI feature does. So the filtering hardware may be capable of higher quality than what your driver is currently using).

        Another thing was that the rendering flaws they cherry-picked did not match up with their ‘explanations’ of why this would be broken/out-of-spec/whatever.
        The arguments were along the lines of “I see gray pixels! There is no way that a black-and-white checkerboard can result in gray pixels!”. This is so painfully obviously wrong that it hurts.
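        For what it's worth, the ”gray pixels” part is easy to verify: any filter that averages neighbouring texels produces gray from a black-and-white checkerboard. A quick sketch of generic bilinear filtering in plain Python (illustrative only; real GPU anisotropic filtering is far more involved):

```python
import math

def checker(x, y):
    """Black/white checkerboard texture: texel (x, y) is 0.0 or 1.0."""
    return float((x + y) % 2)

def bilinear(u, v):
    """Bilinearly filter the checkerboard at continuous coordinates (u, v)."""
    x0, y0 = math.floor(u), math.floor(v)
    fx, fy = u - x0, v - y0
    # Blend the two texels of the top row, then the bottom row, then blend rows.
    top = checker(x0, y0) * (1 - fx) + checker(x0 + 1, y0) * fx
    bot = checker(x0, y0 + 1) * (1 - fx) + checker(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy
```

Sampling exactly on a texel returns pure black or white, but sampling halfway between four texels (two black, two white) returns 0.5: mid-gray, exactly as the filtering math dictates.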

  8. Redneckerz says:

    @ Qwerty:
    ”Considering the arguments taken from the small book that you have typed so far in this comment section, the details of your personal hardware are irrelevant.”

    Yet you were the one who came out making the accusation of brand loyalty. I said that wasn't the case, and then you say ”Oh well, but that's irrelevant”. If the answer to the accusation is seemingly irrelevant, then why isn't the accusation itself?

    ”He has criticized Nvidia and Intel in the past as well.”

    And as you can see, he has to reach back to a blog from 2010 to make that point. Do you see any recent critical blogs about Nvidia? There aren't any from this year, and certainly not from last year. Do we have to dig through it? Starting from late 2014:

    https://scalibq.wordpress.com/2014/09/21/direct3d-11-3-and-nvidias-maxwell-mark-2/ – Positive talk about Nvidia, discredits AMD.
    https://scalibq.wordpress.com/2015/06/09/no-dx12_1-for-upcoming-radeon-3xx-series/ – Positive talk about Nvidia, discredits AMD.
    https://scalibq.wordpress.com/2015/08/07/nvidia-does-support-d3d_feature_level_12_0/ – Positive talk about Nvidia, discredits AMD once more. (And note how it links to DX12 documentation, you know, the kind of stuff we all had to ”wait” for, but hey, the real magic, the part he actually meant, that's still ”confidential”. Obviously.)
    https://scalibq.wordpress.com/2015/09/02/directx-12-is-out-lets-review/ – It even says ”Time for some naming and shaming?” there, again heavily discredits AMD, not a single line of critique about Nvidia.
    https://scalibq.wordpress.com/2016/01/13/the-myth-of-hbm/ – one big AMD discredit post (”AMD tricks their consumer base, they must have brainwashed their followers”)
    https://scalibq.wordpress.com/2016/03/12/rise-of-the-tomb-raider-dx12-update/ – Only positive talk about Nvidia here.

    Let's not get carried away by his claim of being ”neutral”: Scali spends far more time discrediting AMD than he spends discrediting Nvidia (which amounts to nothing in almost two years). And that while there are a ton of instances he could have written about: the planned-aging theory; the vague notion from Nvidia that its Tegra X1 SoC can yield a teraflop of power, without really saying in PR that this is about FP16 performance rather than the more commonly used FP32 performance; or the huge GTX 970 mix-up about the 0.5 GB of slow RAM, which Nvidia didn't tell the press about at first. Only in hindsight was it acknowledged.

    That's a lot of instances Scali could have set out from to discredit Nvidia, or at least to question why Nvidia does these things. But those posts aren't here, because Scali would rather spend an ungodly amount of time nitpicking every little flaw or mistake AMD makes, yet he doesn't do the same for Nvidia.

    Anyone can read those blogs in succession, and even at a quick glance it's obvious that Scali's claim of being ”neutral” doesn't hold up. At all.

    ”When someone has made a blog post, they probably took a long time to think and research and scrutinize it.”

    I agree, Scali spends a LOT of time thinking, assuming and ”researching” everything wrong with AMD. That much is true. Now, if only that same amount of effort were also applied to Nvidia, then at least Scali could rightfully claim he was neutral.

    ”It would be highly unlikely that a poorly thought out off-the-cuff remark would quickly change their mind.”

    Given how the whole interweb, from high-standing programmers to graphics enthusiasts, always seems to have it wrong while Scali is always right, it is indeed highly unlikely that anyone with an opposite opinion would make him change his mind.

    ”I read his arguments and yours, and questioned only that which appeared questionable.”

    It doesn't surprise me that you don't question his apparent bias, which is clearly demonstrated in the blogs linked above. The fact that Scali has to reach back to 2010 to say ”See? I was critical of Nvidia before! You are wrong”, when the displayed AMD discredit ratio is at least 10 to 1, certainly doesn't raise any questions, apparently.

    ”lol, I don’t even know who that is.”

    It was a popular thing in the 90’s. Look up the Power Rangers.

    @ Scali:
    ”The forum URL wasn’t exactly a secret, so it would prove nothing if I gave it to you. You could have found it yourself, as you demonstrated.”

    Like I said, it would have made your case a lot stronger back then. It would at least prove something. Heck, you could have linked back to Heise and you would still have appeared credible.

    ”Again, you can find these yourself on MSDN, what’s the point of me linking to it?”

    For similar reasons as the above. All you do is claim, and even boast, about how you are in the DX12 forum with access to documents, but those either were under NDA, or confidential, or you say ”Look on MSDN then” without providing links to the exact documents that you consistently boast about having. If you are always so ”right”, it would be no problem to show those documents.

    ”No, I did not say I would show the documents. And it should be obvious why not. I said I could discuss their contents if the NDA expired.”

    You must have a hazy memory then. And yet you still don't discuss, or show, those ”contents”. You keep making claims without any evidence besides ”you just have to trust me”. That kind of thing wouldn't hold up in court, you know.

    ”I still don’t see what the point of releasing them would be.”

    Well, what you *could* do with it is support your claims with grounded evidence.

    ”But dangit, here I am doing the thinking for you again.”

    Heh, I didn't make those claims of having DX12 documentation, Scali. YOU did. I just asked you to show them, which you still haven't done. The burden of proof is on you, and has been since the day you started claiming you were in the DX12 Early Access forum and had access to certain documents.

    ”You see, the problem with fact-checking is that there’s actually some work and effort involved.”

    So ironic, when you have been repeatedly called out, by both the pro-AMD and pro-Nvidia sides, for reading selective parts to make your arguments. You don't fact-check; you just read a few lines, throw in some unverified assumption, some anti-AMD rhetoric, and voilà, a new Scali post/blog is born.

    ”If you want to verify my claims (or refute them), you’ll have to put in the work first.”

    If you are going to make hard claims, you damn well ought to have some grounded evidence supporting them, and be willing to release it when asked. All you do is claim without sources. Anyone can do that, and no, that doesn't make you ”neutral”.

    ”You’ll have to be more specific. What is it that you’re looking for in these documents?”

    Verification of your claims.

    ”Now there’s a cop-out if I ever saw one.”

    It's not really a cop-out, but I do think you have an issue admitting that you got booed off the site. Supposedly because ”they” didn't get ”it”, and ”you” obviously do.

    ”First of all, it’s not my fault that the crackpot theories tend to come from the AMD-camp, not the nVidia-camp.”

    ”Tend”. And yet you still maintain that you are ”neutral”. Please Scali. You can do better than that.

    ”Secondly, if the shoe is on the other foot, I certainly do not run with it. Case in point: https://scalibq.wordpress.com/2010/05/29/the-moment-many-of-you-have-been-waiting-for-it%e2%80%99s-the-nvidia-fanboys%e2%80%99-turn/

    The fact that you had to go back to 2010 to say ”See? I was critical of Nvidia before! You are wrong”, when the displayed AMD discredit ratio is at least 10 to 1, certainly doesn't rule in your favour.

    You *definitely* would run with it if AMD had this issue of planned aging; your own blogs prove that you would.

    ”I could have run with the theory that AMD’s texturing is fake/broken/whatever, but I didn’t. I pointed out the facts, and explained the technology and specs behind texture-filtering, and the verdict happened to be in AMD’s favour.”

    I wasn't even talking about that; you introduced that bit yourself, possibly to deflect attention away from the other issues at hand. Don't play dumb.

    ”Well, that company got me into the DX12 Early Access program, apparently.”

    ”Apparently” indeed. Since there are no searchable accomplishments from that company (pretty amazing, really, that not even Google has it indexed, unless… the company is just bogus), all we have is, once again, your claim about it. And the fact that you are making up excuses again not to mention the name of the company (yet you are egocentric enough to mention that you have a company to start with) really puts your credibility on thin ice.

    ”But no, I certainly do not plan to associate myself or this blog with my company.”

    Likely because that company is run by your imagination rather than backed by verifiable sources. There, I called your claim. If you want to verify (or refute) it, YOU will have to put in the work first. 😉

    ”The Interwebs claimed that would never happen, but it did.”

    I know someone else who also makes these rampant claims and thinks all of the Interweb is *always* wrong. People call him MisterX, and he sure likes Microsoft. Maybe you have something in common with him and you two can work together.

    ”I respect his privacy.”

    And another case of being egocentric enough to claim that you know a guy at Nixxes, without going into details. So once more, all we have is your ”claim”. Is that how it works with you? Throwing in random claims, and when asked for details, throwing up various smokescreens? (”It's still confidential, I don't want to associate my company with that, I respect his privacy”)

    Maybe it would be best if you stopped making these kinds of claims, if you don't want to back them up and you don't allow people to verify them on their own, leaving them to rely on your ”claim” about it.

    ”If you ask .oisyn, he’ll know who I mean, and he will confirm it.”

    Oh, I will. But I am asking you, not Oisyn. Stop putting it on other parties and answer this one yourself. Who is that ”guy at Nixxes” you speak of?

    • Scali says:

      This is completely pointless.
      You’re not even being specific about *what* claims I allegedly would have made.
      Let alone how I could either prove them, or how other people would allegedly have proven me wrong.
      In this whole discussion, the only one who has repeatedly been proven wrong here is you.

      These are just baseless accusations.
      The fact of the matter is that unlike you, I don’t deal in rumours and theories. I only post about things that I know to be true. If possible, I give the proof right away. If not, I at least know that the proof will become public after an NDA has lifted, a product is launched, or whatever (eg I pointed out that AMD did not support DX12_1 before they launched their product. After the launch, this was confirmed).
      And I only post about things that aren’t already covered in the mainstream. I don’t waste my time parroting the same info that you can find on every tech site out there.
      Oh, and of course I only post about technology that I am actually involved with/interested in.

      And once again: neutral does not mean what you think it means. The world isn't fair.

      I mean, just look at the blatant lies that Richard Huddy spreads about Mantle here:
      http://techreport.com/news/26922/amd-hopes-to-put-a-little-mantle-in-opengl-next

      Speaking of which, Huddy told us 75 developers are now working on Mantle titles in the consumer realm. Enthusiasm, he added, “seems unbridled.” Some of those developers see Mantle as a stepping stone to DirectX 12, while others view the API as an “opportunity to differentiate themselves.”

      75 developers? Wow! So where are all those Mantle games then?

      Huddy expects developers to keep using Mantle after DX12 arrives, too, for two reasons. First, AMD can add support for new GPU features very quickly—much quicker than Microsoft, which rolls out major DirectX updates only every 4-5 years.

      Oh yea, except Mantle was pronounced dead by AMD even before DX12 arrived.
      Mantle never even made it past the closed beta-stage, so there never really was a release. Let alone any updates.
      And oh the irony that he talks about new GPU features, where DX12 supports more features than AMD’s hardware even has, let alone what Mantle supports.
      The featurelevels make it very easy to add new features, as MS has also demonstrated by introducing DX11.3.

      I didn’t make this up. AMD did!
      Anyone can still look up all these lies from AMD/Huddy about Mantle. Yet I don’t see anyone talking about this. Apparently you think AMD can just get away with this sort of stuff time and time again.

      • Redneckerz says:

        ”This is completely pointless. You’re not even being specific about *what* claims I allegedly would have made. Let alone how I could either prove them, or how other people would allegedly have proven me wrong.”

        Oh, I was, but it's just as I said: ”you just read a few lines, throw in some unverified assumption, some anti-AMD rhetoric, and voilà, a new Scali post/blog is born.”

        ”In this whole discussion, the only one who has repeatedly been proven wrong here is you.”

        Sure, Scali, anything to avoid actually providing evidence for your claims.

        ”These are just baseless accusations.”

        Linking to 6 of your blogs showcasing the apparent bias = ”just baseless accusations”. Anything to avoid admitting that you don't have a neutral point of view.

        ”The fact of the matter is that unlike you, I don’t deal in rumours and theories.”

        Sure you don't. But you spend all of your time nitpicking AMD at every corner, and you don't do the same to Nvidia, despite having enough reasons to at least question their actions just as much as you question AMD's.

        ”I only post about things that I know to be true.”

        Of course you think your own made-up rhetoric is true; that's the whole point of making up such stories! Look, if there were even a single SHRED of evidence pointing to you being right, then why are you banned on all these sites, which include programmers, developers, and the like? If even a single thing of your anti-AMD talk were correct, you wouldn't be booed off these sites, right? Or are you going to assert that all these places are pro-AMD bastions that don't ”get” you?

        ”If possible, I give the proof right away.”

        Stop deceiving yourself. You made 3 claims in this whole discussion, and you have yet to prove them.

        – NDA documents – Previously you said ”Just you wait, I will show you”; now it's ”It's on MSDN” (without links) and ”It's confidential”, and the only thing you posted as ”proof” was a site that had already been reported by Heise. Factual evidence: 0.
        – When asked whether you have a company, since it can't be found on Google (which is practically impossible these days if you want any ”name” to sell to those other companies you supposedly work with, but giving you the benefit of the doubt here) – ”It got me in the DX12 program, right? I don't want to associate my company with this blog” – A deflective argument; all we have is the claim from YOU. Factual evidence: 0.
        – Asserts knowing ”a guy at Nixxes” who can vouch for him. When asked who it is – ”I respect his privacy. Ask oisyn, he will tell you.” – I am asking you, not him. Again, all we have is the claim from YOU. Factual evidence: 0.

        End result: 3 claims, no sources, no credibility. Stop making claims if you don't want to back them up and you don't allow people to verify them on their own, leaving them to rely on your ”claim” about it.

        ”If not, I at least know that the proof will become public after an NDA has lifted, a product is launched, or whatever (eg I pointed out that AMD did not support DX12_1 before they launched their product. After the launch, this was confirmed).”

        And you forgot to mention that at the time, Nvidia didn't support all tiers either (only Intel did, I believe). But I don't see you mention that in this discussion. It's all about discrediting AMD.

        ”I don’t waste my time parroting the same info that you can find on every tech site out there.”

        I know. You'd rather waste time discrediting one party without discrediting the other, throwing some logical hoopla into the mix to make it ”seem” like it makes sense, at least to you.

        ”Oh, and of course I only post about technology that I am actually involved with/interested in.”

        Which is apparently only Nvidia, given the clear anti-AMD bias displayed on your blog for the last two years, while you have to crawl back to 2010 for a post that is critical of Nvidia? 😉

        ”I mean, just look at the blatant lies that Richard Huddy spreads about Mantle here:
        http://techreport.com/news/26922/amd-hopes-to-put-a-little-mantle-in-opengl-next

        Linking to a post from August 2014, convincing. Doesn't it strike you that the author of the post might be giving his own interpretation of things? Because, you know, that's what people tend to do. You don't seem to question his skills, but you instantly skip over to Huddy and his ”lies”.

        Do I really need to link the endless Tweakers threads where people called you out on this kind of thing, when A: it isn't even relevant anymore, and B: the conclusion of those threads was that you just selectively read a single line, ran with it, saw it as a reason that AMD is purposefully scamming its users, threw some anti-AMD rhetoric in the mix, and presented it as your ”evidence”? People laughed you off back then, and for good reason.

        Two years later you are still trying to ”prove” how horrible it all was. Nobody cares what Huddy said back then, since the idea of Mantle (a low-level API for PCs) has dissolved into DX12. It's just becoming sad at this point. And again: do you look that critically at Nvidia? Answer: no, no you don't. You have to go back to 2010 to find something critical on your own; the rest is just discrediting AMD. That is an ”agenda” if I ever saw one.

        ”75 developers? Wow! So where are all those Mantle games then?”

        75 developers isn't that much in terms of game development these days. So all those games would be just a few, and rather a patch highlighting the performance bump of a low-level API, as a showcase for what can be achieved. And gee whiz, look at that: https://en.wikipedia.org/wiki/Category:Video_games_that_support_Mantle – that is 12 games with that support, mostly from the same developers (EA, which, as you know, is a HUGE corp with a ton of devs). My theory of Mantle being a showcase API makes a lot more sense than your ”Hurr durr, Huddy lied and he fooled all of you!”.

        ”Oh yea, except Mantle was pronouced dead by AMD even before DX12 arrived.
        Mantle never even made it past the closed beta-stage, so there never really was a release. Let alone any updates.”

        So how come 12 games got Mantle support then? Clearly, some kind of release to developers happened for these games to support it. Now why would they do that if it was still in beta? Wait, you don't say… maybe those titles were used as a showcase??? 😉

        ”And oh the irony that he talks about new GPU features, where DX12 supports more features than AMD’s hardware even has, let alone what Mantle supports.”

        And oh the irony of how clearly you display the anti-AMD bias here. Still trying to argue that DX12 was better than Mantle, when the two didn't even have the same goals to begin with.

        ”I didn’t make this up. AMD did!”

        And yet I don't see you posting about the Nvidia scandals that have come out. No, you'd rather focus on this Mantle thing, obsess over it, make up a big story about how ”evil” AMD is, and ignore every negative thing about Nvidia. Clearly, the neutrality of your character is at stake here.

        ”Anyone can still look up all these lies from AMD/Huddy about Mantle.”

        Anyone can also still look up your arguments and conclude they are a load of gobbledygook, with selective citing, made-up arguments and logic-hoops the size of Mars. Reputable websites don't ban you off their site because they are scared that you found out their ”truth”, Scali. They ban you off their site because you talk nonsense.

        ”Yet I don’t see anyone talking about this.”

        Because it's made up. You can't argue with something when only one person knows the story – YOU.

        ”Apparently you think AMD can just get away with this sort of stuff time and time again.”

        And apparently you spend a lot of time discrediting one party while not making nearly the same effort for the other. The very core of this blog still remains: ”you just read a few lines, throw in some unverified assumption, some anti-AMD rhetoric, and voilà, a new Scali post/blog is born.”

      • Scali says:

        I think we’re done here. You’re sick in the head.
        Who the hell do you think you are, making such demands on me, using such language and making such retarded accusations, even on my own blog?
        You don’t even have half a clue about the things I discuss here, so it doesn’t surprise me in the least that you don’t understand how neutral I am.
        That is your problem, not mine.
        Also, go talk to someone. It’s not healthy to have such an irrational craving for anti-nVidia blogposts.

        Do I really need to link the endless Tweakers threads where people called you out on this kind of thing, when A: it isn't even relevant anymore, and B: the conclusion of those threads was that you just selectively read a single line, ran with it, saw it as a reason that AMD is purposefully scamming its users, threw some anti-AMD rhetoric in the mix, and presented it as your ”evidence”? People laughed you off back then, and for good reason.

        I hate to break it to you, but yes, they were all wrong.
        A few years back, I went up against John Fruehe, saying his claims about Bulldozer were highly unrealistic, pointing out where the problems in the architecture were, and even referring to AMD's own material.
        Yes, people ‘laughed me off’, and I got banned at some forums. However, as we all know, everyone was wrong, except me.

        Likewise, I was the one to say Mantle wasn’t going to give more than 10-15% gains over DX11 in games, and that nVidia and Intel would never support it, it would never become an open standard, and DX12 would become the mainstream next-gen API.
        Yes, people may have “laughed me off”, but again, everyone was wrong, except me.
        Mantle was never released, nVidia and Intel never bothered with it, AMD pulled the plug, reviews showed about 8% gains in BF4, even less in Thief, and the focus is now entirely on DX12.

        The fact that most people on those sites are clueless AMD fanboys such as yourself, and that they ban people who don't agree with them, no matter how well thought-out their arguments are and how much evidence they present, says a lot more about those communities than it does about me.
        And if you only hang around in such communities, you will get a distorted view of reality, like you suffer from.

        My blog still gets hundreds of hits a day, and it is often linked to in forum discussions or news items. Sometimes people from review sites such as Tomshardware or ExtremeTech also contact me for input.
        So I’m not that bothered by what a nobody such as yourself thinks.

  9. Redneckerz says:

    ”Who the hell do you think you are, making such demands on me, using such language and making such retarded accusations, even on my own blog?”

    And yet all I did was link back to your own blog to display the clear bias at hand, and ask you to support your claims with evidence. And yet you consistently dance around it.

    ”You don’t even have half a clue about the things I discuss here, so it doesn’t surprise me in the least that you don’t understand how neutral I am.”

    If being ”neutral” is you not even admitting that your AMD discredit ratio is far bigger than your Nvidia discredit ratio, then that is your problem, not mine.

    ”Also, go talk to someone. It’s not healthy to have such an irrational craving for anti-nVidia blogposts.”

    But it sure is healthy to have such an irrational obsession with AMD, huh 😉

    ”I hate to break it to you, but yes, they were all wrong.”

    Of course. Who was the ”special snowflake” again? You are just making generalizations here. ”They were all wrong, I was right!” Seriously, what is with the obsession that you feel the need to pretend that you are ”smarter” than any other developer out there?

    ”A few years back, I went up against John Fruehe, saying his claims about Bulldozer are highly unrealistic, pointing out where the problems in the architecture were, and even referring to AMD’s own material.”

    Again sidestepping, and not really relevant. And oh, discrediting AMD once more. Do you also go up against key Nvidia people, or are your contacts not that far-reaching?

    ”Yes, people ‘laughed me off’, and I got banned at some forums.”

    Those ”some” forums just being where most of the experts on programming reside, but that’s about it. And you still don’t seem to realize why people laugh you off to start with.

    ”However, as we all know, everyone was wrong, except me.”

    Who is ”we” again? A voice in your head? But to stay on point – yeah, everyone was wrong, except you. Want a medal for that nonsense (which isn’t even correct, but heck, who am I trying to fool here)?

    ”Likewise, I was the one to say Mantle wasn’t going to give more than 10-15% gains over DX11 in games,”

    Gee, wouldn’t that be because Mantle was a SHOWCASE API to start with? It was never going to compete with DX12.

    ”and that nVidia and Intel would never support it,”

    Of course they wouldn’t – the whole goal of Mantle was to showcase a low-level API for PCs to highlight increased performance and other benefits. It would make zero sense for either Nvidia or Intel to support an API that isn’t meant for future use. What do you think a showcase is supposed to be?

    ”DX12 would become the mainstream next-gen API.”

    So you were right about something that anyone with common sense knows, since DX is the predominant API out there for the majority of PCs and games. Basically you are celebrating a hollow victory here.

    ”Yes, people may have “laughed me off”, but again, everyone was wrong, except me.”

    Because you predicted that DX12 would be the mainstream API? That’s like saying that Nvidia will come out with a new graphics card in the future – it’s called having common sense.

    ”Mantle was never released,”

    And again being stubborn. Why do 12 released games have support for an API if it isn’t released?

    ”nVidia and Intel never bothered with it,”

    They had no reason to support an API that was made as a showcase. There really is nothing more to it.

    ”AMD pulled the plug,”

    Of course they did. Their showcase was successful, showed increased performance, and its design philosophy carried on in DX12. And that sounds like a far more sensible theory than you thinking AMD placed agents in their PR pipeline to make Mantle more than what it really turned out to be – a showcase.

    ”and the focus is now entirely on DX12.”

    Of course. Its design philosophy was proven, and it fueled inspiration for Vulkan.

    ”The fact that most people on those site are clueless AMD fanboys such as yourself,”

    And the generalizations continue. You weren’t called out by only AMD fanboys about your nonsense, Scali. Even pro-Nvidia people didn’t agree with it.

    ”they ban people who don’t agree with them, ”

    They ban people like you, who go on spamming 3 moderators with the same story all over again and ignoring their explanations. Basically anyone who fails to see the ”Scali Brilliance” gets dumped on by you. Sounds to me like an issue of communication is in play here.

    ”how much evidence they present says a lot more about these communities than it does about me.”

    I agree. They actually present evidence; you, on the other hand – as you have displayed so nicely here – only come up with claims that you refuse to prove or simply can’t. It says more about you than about them.

    ”And if you only hang around in such communities, you will get a distorted view of reality, like you suffer from.”

    Because only Scali sees what not even the greatest minds can see, clearly. You are dangerously close to entering delusional territory here. Literally your whole game here is that you are always right and they are always wrong, and nobody apparently sees it, like you are some huge ”misunderstood” wonder boy.

    ”My blog still gets hundreds of hits a day, and it is often linked to in forum discussions or news items.”

    Well, of course it gets hits – it’s not often that one gets to visit an online freak show exhibit.

    ”and it is often linked to in forum discussions or news items”

    Usually as an example of what narcissism can lead to 🙂

    ”Sometimes people from review sites such as Tomshardware or ExtremeTech also contact me for input.”

    Ah, another claim. I’d like to see the receipts, please.

    ”So I’m not that bothered by what a nobody such as yourself thinks.”

    Yeah, clearly you aren’t bothered.. 😉 How many times have you updated this comment of yours since the publish date? 😉

    • qwerty says:

      @Redneckerz

      Wow, still at it, eh? lol, you’ve got way too much free time, mate. Find a woman (no, Ruby doesn’t count :p). If you already have one, dump her, and find a new one.

      • Redneckerz says:

        @ Qwerty:
        I am wondering why you criticize me for having too much free time but don’t question the same thing of Scali. Aside from it being a non-issue, if I do it, it’s used as a reason to argue, but Scali gets a free pass? Look at how he is now talking about Intel Atom when I didn’t even ask for that in the first place. It’s just continuous side steps, ”proving” things I didn’t even ask for to start with, and not ”proving” things I did ask for.

        @ Nomis:
        ”I see, your still harping on the same issues in a vain attempt to make scali seem less credible instead of arguing the points brought up.”

        If the whole Interwebs is wrong (even the people from Beyond3D?) according to Scali’s definition, who is more credible here then? Wonderboy Scali, who apparently saw something not even the brightest minds saw, or all those established sites that have gained a reputation based on their experience?

        ”I mean who cares if he really was in the DX 12 early access program.”

        Scali does. I don’t give one cent whether he was part of the program or not; the issue is that he consistently referred to documents he is unwilling to link to, or else makes up excuses for it. Compare that to what he is talking about now regarding Intel Atom, with sources, something I didn’t even ask him to do. He is just sidestepping the issues at hand here and hopes you won’t notice.

        ”AMD never claimed that mantel was supposed to be a showcase API.”

        In hindsight it’s pretty obvious that it was. The reason that AT THAT time they didn’t say this so clearly was to generate interest. I mean, ”Mantle is a low-level API for PCs” sounds a lot better than ”Mantle is a showcase low-level API for PCs”, right? Anyone with a brain could tell it was a showcase API. But I see, AMD really had to make it crystal clear to everyone, since when it isn’t literally stated that it’s a showcase API, it calls for interpretation. But when Nvidia isn’t crystal clear about the Tegra X1 one-teraflop performance figure, it’s just swallowed whole, it seems.

        I mean, it’s clear from the get-go that Mantle was always a showcase API to begin with:
        From the AMD FAQ:
        http://support.amd.com/en-us/search/faq/185
        ”Mantle complements DirectX® by providing a lower level of hardware abstraction for developers who want and need it, echoing the capabilities they have come to expect on dedicated game consoles.​”

        At best, it was an extension, not a standardized component. They literally said this even BEFORE DX12 was announced.

        http://support.amd.com/en-us/search/faq/184
        ”To validate the concepts”.

        But before you ask: it also says something that seemingly goes against what I just said:
        ”Our intention is for Mantle, or something that looks very much like it, to eventually become an industry standard applicable to multiple graphics architectures and platforms.”

        Back then, DX12 wasn’t announced yet, but they obviously knew it was in development, given that a slide presented after the DX12 announcement shows they were working with Microsoft on it: http://videocardz.com/49975/microsoft-announces-directx-12-coming-2015

        The whole reason they were saying that their ”intention” was to be an industry standard and that ”Mantle complements DirectX® by providing a lower level of hardware abstraction” was a way to publicly say to Microsoft: ”Hey, we want low-level APIs, look at our limited solution.” (And I am sure they said so internally with them as well.) They couldn’t just run around and say: ”Mantle is our solution for a low-level API that DX12 will feature also” when DX12 itself, and thus what it was going to do, was still unannounced ”to the public”. Doing so would have broken NDA.

        ”AMD even claimed that there will be no DX 12”

        So what does this slide say then? – http://cdn.videocardz.com/1/2014/03/a2.jpg
        Besides that, they made that claim in 2013, a full year before DX12 was even on the cards. You can speculate that Mantle was made in response to an initial draft of DX12, which wasn’t going to feature a lower-level API, but that is speculation.

        ”that mantle will live on despide DX12.”

        Which in a cryptic sense has proven to be correct: it lives on in Vulkan, which comes from the Khronos Group, home of the other major API at play here, OpenGL – https://en.wikipedia.org/wiki/Vulkan_%28API%29

        The way I see it, AMD made Mantle to publicly showcase the benefits of a low-level API to both the public and to Microsoft, whilst working closely with said Microsoft to implement this philosophy in DX12 after it had been shown these benefits, and, after DX12 was announced, gave the code to Khronos so they could develop their own derivative of it that we now know as Vulkan. I find that a far better supported theory than the endless AMD discrediting Scali has been pursuing for the last years.

        ”And about your stupid assertion that he can’t possibly be neutral by counting his blog posts.”

        It’s clear as night and day that at best he is spending copious amounts of time discrediting AMD; at worst it’s just an agenda he runs. There is zero evidence for his claim that he is a neutral party. If he were, he would research both sides of the coin.

        ”For someone who supposedly don’t read his other blog posts you seem to know every single one of them.”

        It just takes some scrolling. You can do the same thing and see the pattern.

        ”Who cares what the ratio is.”

        It should matter enough to question the claim of whether one is a neutral party or not, based on one’s output.

        ”It’s certanly orders of magnitues more confincing than your drivel.”

        I like to think that Scali thinks he is some 21st-century Heaven’s Gate and that the people who cite him as a ”source” are his disciples.

        @ Scali:
        ”You yourself are proving my case right here: You keep throwing all sorts of AMD-fanboy theories in my face.”

        I actually went out of my way this time to source them, in case the logical common-sense explanations weren’t enough. Feel free to disagree with them above 🙂

        ”The burden of proof is on you to prove that nVidia and Intel have exactly the same kind of fanboy following, which produce exactly the same volume of crackpot theories, and that nVidia and Intel themselves produce exactly the same amount of misleading marketing as AMD does.”

        I didn’t bring Intel in here, but nice try. And I had already given 3 prime examples of misleading marketing from Nvidia; feel free to read those again.

        ”Because only under these conditions can one expect to have the same amount of posts on all vendors on a neutral blog.”

        If that is how you want to justify your obvious anti-AMD agenda, then be my guest. Everyone that isn’t part of Scalitown sees it for what it is and has deemed it not worthy of discussing, unless, like I said before, it is used as an example of what classic narcissism can lead to 🙂

        ”The reality is that this is not the case. I can easily demonstrate this to a certain extent by the fact that many of my posts that you claim to be ‘anti-AMD’ have received quite a lot of attention.”

        Oh, I know the kind of people who link to your blog. Let me just say they aren’t the resident experts from, say, a Beyond3D that do so 🙂

        ”The few posts that you claim to be ‘anti-nVidia’ or ‘anti-Intel’ don’t receive much attention at all. There’s rarely any comment there, and they aren’t often discussed in other places either.”

        Of course they don’t get attention, since there are so few of them in numbers. Heck, like I demonstrated before, over the last 2 years you haven’t even written a single blog post that is critical of Nvidia. You have to link back to 2-0-1-0 to find a post that isn’t positive about them. That’s 6 years, Scali. Of course you don’t get attention from people if there is only a single post from 6 years ago about it.

        ”The facts clearly show I was right and they were wrong.”

        The facts clearly show that you don’t even read your own sources and claims right (like I have just shown above). As always: ”you just read a few lines, throw in some unverified assumption, some anti-AMD rhetoric and voila, a new Scali post/blog is born.”

        ”Sure, I’ll dig up some examples, anything to show how wrong you are (not for you, because you’re obviously beyond help, but I can make an example out of you for the other readers): http://www.tomshardware.com/news/STEAM-AMD-FX-Processors-BSOD-bios,15630.html

        Interesting. Aside from the fact that ”sometimes they ask me for input” is from more than 4 years ago, I wonder: why didn’t Doug credit you on it, seeing as how you are implying that Joel is part of the team that Doug is also part of? It’s not very clear, sadly.

        ”The final article: http://www.extremetech.com/computing/129363-amd-detonates-trinity-behold-bulldozers-second-coming – I believe his article was the only one covering F16C at all.”

        So, if I read this right, Joel contacted you based on the first link, got your answer, and made the post at ExtremeTech? Again: why aren’t you credited there?

        Unrelated, but it seems you are fairly affiliated with ExtremeTech, or at least in the sense that you get called out there as well (just showing one link) – http://www.extremetech.com/gaming/180088-nvidias-questionable-geforce-337-50-driver-or-why-you-shouldnt-trust-manufacturer-provided-numbers

        ”And oh, the irony that he contacted me regarding an AMD CPU review, and I actually gave a well-balanced objective response of AMD’s shiny new feature, which one could say is favourable to AMD.”

        All we have is your word for it, however. (Gee, where have I heard this before? Oh right.) I figured Joel would be a good sport and would at least have given a credit if it was all true.

        ”Oh, and while I’m at it, here’s another interesting thing.”

        Thanks, but obviously that has no relevance to the other issues here. I do find it interesting that you go on proving a point nobody asked for, yet refuse to provide proof for the 3 claims I mentioned earlier. Just a nice behavioral pattern, is all.

        ”So, once again proof that indeed, in some cases I know more about technology than what you can read on tech-sites.”

        And again, it’s just words. I can’t verify their legitimacy.

      • Scali says:

        If the whole Interwebs is wrong (Even the people from Beyond3D?) according to Scali’s definition, who is more credible here then? Wonderboy Scali who apparently saw something not even the brightest minds didnt see, or all those established sites who have gained a reputation based on their experience?

        Firstly, I never claimed the ‘brightest minds’ didn’t see it. You wrongly assume that all participants on Beyond3D are the ‘brightest minds’, and that none of them saw it.
        Secondly, reputation means nothing in the face of facts, logic and evidence. This should have been obvious to you by now, given all the examples pointed out.

        Compare that to what he is talking now about Intel Atom with sources, something i didnt even ask him to do.

        You asked me to prove that I am sometimes in contact with writers of well-known tech sites. So I picked 3 examples (two which can be seen as favourable to AMD, one which can be seen as unfavourable to Intel).

        Besides that, they made claim in 2013, A full year before DX12 was even on the cards.

        Nope, there were meetings about DX12 even in 2013. I may have a screenshot of that somewhere as well.

        I didnt bring Intel in here, but nice try. And i had already given 3 prime examples of misleading marketing from Nvidia, feel free to read those again.

        These fail to meet the standards of my work, as already pointed out by various people.
        Besides, even if they did, 3 wouldn’t be enough to balance nVidia and AMD out.

        Interesting. Aside that ”sometimes they ask me for input” is from more than 4 years ago, i wonder, why didnt Doug credit you on it, seeing as how you are implying that Joel is part of the team that Doug also is part of? Its not very clear sadly.

        ‘Joel’ is Joel Hruska of ExtremeTech, and that is part of the second link. He did not credit me, but feel free to contact him and verify my claim, if the presented evidence does not convince you: http://www.extremetech.com/author/jhruska

        The first link speaks for itself, so I just posted it as-is:

        A reader of Tom’s Hardware, Scali, brought this problem to our attention. Scali wrote a nice little blog post about this BSOD back in October of 2011, which still seems to be plaguing users to this day.

        In that particular instance, I spoke with Chris Angelini. Doug credits me and even links my blog directly, so I don’t see what you’re even trying to say.

        Once again I get the distinct impression that you did not even bother to read my comment and the presented links properly (something that you ironically accuse me of doing), and already had your conclusion ready even before I posted the evidence.
        But do go on. The more you post, the more you look like an idiot. Did I mention that I get hundreds of hits a day? Google is being peppered with ridiculous fanboi-posts from you. So the next time someone googles ‘Redneckerz’ (your future boss?), they’ll get a nice chuckle. I mean, google for “Redneckerz DX12” right now, and *boom*, this very discussion is right there in the top entries. And unlike a forum, you can’t edit/delete your posts here, or remove your account. And I am certainly not going to do that for you.

    • Scali says:

      If being ”neutral” is you not even admitting that your AMD discredit ratio is far bigger than your Nvidia discredit ratio, than that is your problem, not mines.

      qwerty already explained it to you: The world is not fair.
      You yourself are proving my case right here: You keep throwing all sorts of AMD-fanboy theories in my face.
      That’s just what mostly happens.
      The burden of proof is on you to prove that nVidia and Intel have exactly the same kind of fanboy following, which produce exactly the same volume of crackpot theories, and that nVidia and Intel themselves produce exactly the same amount of misleading marketing as AMD does.
      Because only under these conditions can one expect to have the same amount of posts on all vendors on a neutral blog.

      The reality is that this is not the case. I can easily demonstrate this to a certain extent by the fact that many of my posts that you claim to be ‘anti-AMD’ have received quite a lot of attention. E.g., they have quite a few comments by AMD-fanboys disagreeing with the post. And these posts are also linked in many places on forums etc.
      The few posts that you claim to be ‘anti-nVidia’ or ‘anti-Intel’ don’t receive much attention at all. There’s rarely any comment there, and they aren’t often discussed in other places either.

      There aren’t any ‘Redneckerz’ on the nVidia or Intel side.

      You are just making generalizations here. ”They were all wrong, i was right!” Seriously, what is with the obsession that you feel the need to pretend that you are ”smarter” than any other developer out there?

      The facts clearly show I was right and they were wrong. Draw whatever conclusions you want. I’m not pretending anything.

      Ah, another claim. I like to see the receipts, please.

      Sure, I’ll dig up some examples, anything to show how wrong you are (not for you, because you’re obviously beyond help, but I can make an example out of you for the other readers):
      http://www.tomshardware.com/news/STEAM-AMD-FX-Processors-BSOD-bios,15630.html

      And for this one, Joel Hruska contacted me asking about F16C.
      My reply:

      I think this is what was originally introduced as the ‘CVT16’ portion of SSE5. I believe it is the subset of AVX2 instructions that can convert between 16-bit floating point (half) and 32-bit floating point (single).
      That would basically include the following instructions:
      VCVTPH2PS xmmreg,xmmrm64*,imm8
      VCVTPH2PS ymmreg,xmmrm128,imm8
      VCVTPH2PS ymmreg,ymmrm128*,imm8
      VCVTPS2PH xmmrm64,xmmreg*,imm8
      VCVTPS2PH xmmrm128,ymmreg,imm8

      (But I could have missed some… Intel doesn’t bother to group them specifically, for them it’s all lumped together as AVX2).

      They are documented in Intel’s AVX docs: http://software.intel.com/file/36945
      It also documents the F16C bit in the CPUID flags.

      I would say that this is mainly useful as a sort of data compression at this point. You can reduce the memory footprint of floating point data to half the size (if you can get away with the limited precision obviously). But since there are no instructions yet that process 16-bit floats directly, as far as I know… you’d have to unpack them to 32-bit, then process them with regular 32-bit operations, so there won’t be any gains in actual processing speed (apart from the direct results of the smaller footprint, being better cache coherency and lower bandwidth requirements).
      Aside from that, it might be useful for homogenous processing, since GPUs also have a native 16-bit float format (assuming they’re compatible). If the GPU stores its result in a 16-bit buffer, the CPU can convert them more easily, and vice versa.
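      The storage trade-off described above (half the footprint, no native 16-bit arithmetic, so unpack before processing) can be sketched in plain Python, which has supported the IEEE 754 binary16 format via struct’s 'e' code since Python 3.6. The function names here are illustrative, not any real API; the comments map them loosely onto the VCVTPS2PH/VCVTPH2PS conversions discussed above:

```python
import struct

def pack_half(values):
    # Store floats as 16-bit half-precision, halving the memory
    # footprint (loosely what VCVTPS2PH does for a register's lanes).
    return struct.pack(f"<{len(values)}e", *values)

def unpack_half(buf):
    # Expand half-precision values back to full floats before any
    # arithmetic (loosely VCVTPH2PS); nothing is computed at 16 bits.
    return list(struct.unpack(f"<{len(buf) // 2}e", buf))

data = [1.0, 0.5, 3.140625]   # all exactly representable in binary16
packed = pack_half(data)      # 6 bytes instead of 12 as float32
assert unpack_half(packed) == data

# Values that don't fit the 11-bit significand are rounded:
lossy = unpack_half(pack_half([0.1]))[0]   # close to, but not exactly, 0.1
```

Note the precision caveat from the reply above: a value like 0.1 survives only approximately, which is why this works as compression for bulk data (vertex attributes, GPU buffers) rather than for exact computation.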

      The final article:
      http://www.extremetech.com/computing/129363-amd-detonates-trinity-behold-bulldozers-second-coming

      Piledriver also adds support for two additional instructions, FMA3 (Fused Multiply-Add) and F16C. FMA3 is a different form of the FMA4 instruction Bulldozer supported. AMD has beaten Intel to the punch on this one; Intel’s own FMA3 support will debut in 2013, with Haswell. Both instructions can improve code execution efficiency by fusing operations and performing them in a single clock cycle, but neither FMA3 or FMA4 is expected to provide significant speed boosts. F16C is a method for converting and storing 32-bit floating point values using 16-bits. AMD might make use of this for the GPU (GPUs have a native 16-bit floating point shader capability), but that’s an unknown as well.

      I believe his article was the only one covering F16C at all.
      And oh, the irony that he contacted me regarding an AMD CPU review, and I actually gave a well-balanced objective response of AMD’s shiny new feature, which one could say is favourable to AMD.

    • nomis says:

      I see, you’re still harping on the same issues in a vain attempt to make Scali seem less credible instead of arguing the points brought up. After losing the original argument you just went into full crazy mode, trying your best to discredit Scali while in reality just making a fool of yourself. I mean, who cares if he really was in the DX12 early access program. You are just utterly unable to argue any points he makes, so you go on and on about it and how he supposedly needs to prove it to you.

      You also try to argue how one person could possibly not be right when there is a whole forum full of “experts” having a different opinion, although he shows you some verifiable cases where he actually turned out to be right.

      It’s definitely quite entertaining.

      AMD never claimed that Mantle was supposed to be a showcase API. That’s fabricated by you to move the goalposts. AMD even claimed that there would be no DX12, and later said that Mantle would live on despite DX12. A claim that was brought up in the discussion before and that you chose to ignore.
      And no, with “released” he means a public SDK, outside of the few developers who are in the Gaming Evolved program. There was never a public SDK for any developer to use. You can’t be that stupid.

      And about your stupid assertion that he can’t possibly be neutral, by counting his blog posts. For someone who supposedly doesn’t read his other blog posts, you seem to know every single one of them. Who cares what the ratio is. His blog posts are well written, and the facts brought up are usually well sourced. It’s certainly orders of magnitude more convincing than your drivel.

      • Scali says:

        I mean who cares if he really was in the DX 12 early access program.

        Make that *is*.
        Much to my own surprise, even after the release of DX12, I sometimes get emails from Microsoft, pointing to new stuff that can be found on the forum.
        I thought it would be over and done with after the release of Windows 10/DX12.

  10. Redneckerz says:

    ” Firstly, I never claimed the ‘brightest minds’ didn’t see it. You wrongly assume that all participants on Beyond3D are the ‘brightest minds’, and that none of them saw it.”

    So, who else saw what you saw then?

    ”Secondly, reputation means nothing in the face of facts, logic and evidence.”

    I already refuted some of the earlier claims that Nomis came up with (you have made the same claims in the past), yet I don’t see you refute my points. You just ignore them and let them stand.

    And reputation means something in the face of the consequences: you aren’t just banned from random websites, but from quite respected websites like Beyond3D, for your clear anti-AMD bias. Your ”reputation” made sure of that.

    ”You asked me to prove that I am sometimes in contact with writers of well-known tech sites. So I picked 3 examples (two which can be seen as favourable to AMD, one which can be seen as unfavourable to Intel).”

    That is true. I did ask that. I didn’t ask you to talk about Intel Atom specifically, though 😉

    ”Nope, there were meetings about DX12 even in 2013”

    DX12 wasn’t even announced until a full year after that, and not released until 2 years later.

    ”I may have a screenshot of that as well somewhere.”

    Thanks for the picture. It proves that DX12 was in development even earlier than was publicly known – no mention of the presence of a low-level API, though. Given how AMD worked with MS on DX12, I wouldn’t put it past them to have thrown that idea around at some point (perhaps even in October 2013). When Microsoft wasn’t convinced, AMD set out on developing Mantle as a showcase. (The rest of the story has been addressed earlier, and you haven’t refuted that, so I guess that’s that.) That is of course speculation on my part, but it definitely sounds like a plausible theory.

    ”These fail to meet the standards of my work, as already pointed out by various people.”

    Ah, those rather big examples of misleading marketing fail the ”standards” of your work. Yet you are so busy twisting literally every single statement from AMD, no matter how small, to add your own anti-AMD rhetoric to it. Because obviously every tiny thing AMD does is under much more scrutiny than whatever Nvidia does. If that isn’t an obvious bias, then I don’t know what is.

    ”Besides, even if they did, 3 wouldn’t be enough to balance nVidia and AMD out.”

    That is your argument? ”AMD had more scandals than Nvidia”? Again – your blog de facto proves that you spend 10 times more effort on discrediting and dissecting every little statement from AMD, even when the resultant assumption has been proven wrong, than you spend criticizing Nvidia.

    ”Once again I get the distinct impression that you did not even bother to read my comment and the presented links properly (something that you ironically accuse me of doing), ”

    Admittedly, I read past the mention in the link, though not purposefully. My apologies for that.

    ”But do go on. The more you post, the more you look like an idiot.”

    Once again I get the distinct impression that you did not even bother to read my comment properly to Nomis and selectively ignored the sourced parts.

    ”Did I mention that I get hundreds of hits a day?”

    You did, but I can tell you want to be on a high horse and mention it again to satisfy the ego.

    ”Google is being peppered with ridiculous fanboi-posts from you.”

    Fan of hyperbole, I see. Again, you didn’t refute any of the Mantle/DX12 comments I made to Nomis.

    ”So the next time someone googles ‘Redneckerz’ (your future boss?),”

    Is that why you run your own, non-verifiable ”company”? 😉 Because there is no boss in the world who would work with you?

    • Scali says:

      So, who else saw what you saw then?

      At the very least, the few members of Beyond3D who were also in the DX12 Early Access program.
      You can tell from my screenshot that at least Andrew Lauritzen is among those.

      Your ”reputation” made sure of that.

      Erm, no. AMD-fanboys like yourself who do nothing but character-assassination on me made sure of that.
      Besides, that was not the point. The point was that you claim Beyond3D has a ‘reputation’, and that therefore nobody on that forum can be wrong, ever.
      Clearly it doesn’t work like that.

      I didnt ask to talk about Intel Atom specifically tho😉

      Nor did you ask me to talk about Piledriver and F16C specifically, but I don’t see you complaining there.

      DX12 wasnt even announced a full year from it and not even released until 2 years later.

      Not publicly, but obviously the general public does not get the full inside story (wait, where did I hear that before?). As the screenshot demonstrates. This of course is not the first ever DX12-meeting. Apparently they were quite a ways down the line in development already, had the general idea of the API and driver interface laid out, and were working on the details of the various objects in the API.
      Not something you would just do in a few weeks or months even. So it pretty much proves that AMD was deliberately lying when they publicly claimed that there would be no DX12. It was already being worked on.

      It proves that DX12 was even earlier in development than was publicly known – No mention of the presence of a low-level API though.

      It specifically talks about Pipeline State Objects.
      These form the basis of the low-level, high-efficiency of DX12/Mantle/Vulkan.
      See also: https://msdn.microsoft.com/en-us/library/windows/desktop/dn899196(v=vs.85).aspx

      PSOs in Direct3D 12 were designed to allow the GPU to pre-process all of the dependent settings in each pipeline state, typically during initialization, to make switching between states at render time as efficient as possible.
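
      The idea in the quote above can be illustrated with a small sketch. This is plain Python with purely illustrative names, not the actual Direct3D 12 API: the point is only that the expensive dependent-state validation runs once, at PSO creation, so switching states at draw time costs almost nothing.

      ```python
      # Conceptual sketch of Pipeline State Objects vs. loose per-draw state.
      # All names here are made up for illustration; this is NOT the D3D12 API.

      validations = {"per_draw": 0, "at_creation": 0}

      def validate(shader, blend, raster, counter):
          # Stand-in for the driver's expensive dependent-state validation.
          validations[counter] += 1
          return (shader, blend, raster)

      class PipelineStateObject:
          """Immutable bundle of pipeline settings, validated once at creation."""
          def __init__(self, shader, blend, raster):
              self.state = validate(shader, blend, raster, "at_creation")

      def draw_legacy(shader, blend, raster):
          # D3D11-style model: loose states are re-validated on every draw.
          validate(shader, blend, raster, "per_draw")

      def draw_pso(pso):
          # D3D12-style model: the pre-validated object is simply bound.
          pass

      # 1000 draws under each model:
      for _ in range(1000):
          draw_legacy("vs_main", "opaque", "cull_back")

      pso = PipelineStateObject("vs_main", "opaque", "cull_back")
      for _ in range(1000):
          draw_pso(pso)

      print(validations)  # per-draw validation collapses to one creation-time cost
      ```

      The design choice this models is exactly what the MSDN quote describes: move the cost to initialization so render-time state switches are as cheap as possible.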

      That is of course speculation on my part, but that definitely sounds like a plausible theory.

      Which it doesn’t, because Xbox One already had D3D11.x, which already gives you the efficiency of DX12/Mantle. And they had that long before October 2013 (Xbox One was launched in November 2013). So at that time Microsoft certainly didn’t need any convincing from AMD.
      According to Wikipedia, the official ‘Durango’ SDK for the Xbox One was available in mid-2012: https://en.wikipedia.org/wiki/Xbox_One
      So at that point in time, the D3D11.x design was done, and a workable implementation was available to developers.

      But do tell me something… Apparently you are allowed to make these wild speculations (and you try extremely hard to somehow justify Mantle and AMD’s actions, even in light of the facts and evidence over time)… While at the same time, I say things while I actually *know* some inside information on DX12 development (obviously I knew this when I made the statements on Tweakers and other sites), I just couldn’t make it public because of the NDA, but you have constantly been on my case about that?
      I mean, even without proof, my statements could still be considered ‘speculation’, and the theory was certainly plausible (heck, it was the truth!).

      Again, you didn’t refute any of the Mantle/DX12 comments I made to Nomis.

      There is no need to. Your arguments are so weak that nobody takes them seriously.

      • Redneckerz says:

        ”Erm, no. AMD-fanboys like yourself who do nothing but character-assassination on me made sure of that.”

        Sure, because literally every site you got banned on must be a pro-AMD bastion out there to get you, right? Generalizations galore.

        ”Besides, that was not the point. The point was that you claim Beyond3D has a ‘reputation’, and that therefore nobody on that forum can be wrong, ever.”

        It has a reputation, yes. I never said that nobody on that forum could be wrong, ever.

        ”Nor did you ask me to talk about Piledriver and F16C specifically, but I don’t see you complaining there.”

        Thanks for the heads up. I didn’t ask to talk about that either 😉

        ”Not publicly, but obviously the general public does not get the full inside story (wait, where did I hear that before?). As the screenshot demonstrates.”

        And obviously wonderboy Scali, who is so easy to work with, knows the scoop inside and out 🙂

        ”So it pretty much proves that AMD was deliberately lying when they publicly claimed that there would be no DX12. It was already being worked on.”

        Just going to copy-paste that one:

        So what does this slide say then? – http://cdn.videocardz.com/1/2014/03/a2.jpg

        At the very least we can question your point, since that slide proves that they were already working on DX12, and their ”lie” was more PR speak than factual truth. Another copy-paste:

        I mean, it’s clear from the get-go that Mantle was always a showcase API to begin with:
        From the AMD FAQ:
        http://support.amd.com/en-us/search/faq/185
        ”Mantle complements DirectX® by providing a lower level of hardware abstraction for developers who want and need it, echoing the capabilities they have come to expect on dedicated game consoles.​”

        At best, it was an extension, not a standardized component. They literally said this even BEFORE DX12 was announced.

        http://support.amd.com/en-us/search/faq/184
        ”To validate the concepts”.

        But before you ask: it also says something that seemingly goes against what I just said:
        ”Our intention is for Mantle, or something that looks very much like it, to eventually become an industry standard applicable to multiple graphics architectures and platforms.”

        Back then, DX12 wasn’t announced yet, but they obviously knew it was in development, given that they presented a slide after the DX12 announcement showing that they were working with Microsoft on it: http://videocardz.com/49975/microsoft-announces-directx-12-coming-2015

        The whole reason they were saying that their ”intention” was to be an industry standard and that ”Mantle complements DirectX® by providing a lower level of hardware abstraction” was a way to publicly say to Microsoft: ”Hey, we want low-level APIs, look at our limited solution.” (And I am sure they have said so internally as well.) They couldn’t just run around and say: ”Mantle is our solution for a low-level API that DX12 will also feature” when DX12 itself, and thus what it was going to do, was still unannounced to the public. Doing so would have broken NDA.

        ”These form the basis of the low-level, high-efficiency of DX12/Mantle/Vulkan.”

        Took you long enough to link to these. If you had just done so in the beginning…

        ”Which it doesn’t, because Xbox One already had D3D11.x, which already gives you the efficiency of DX12/Mantle”

        Have you never wondered whether D3D11.x is more likely a specific low-level API for that specific hardware, and not the generic low-level API you seem to imply? That is also what Oisyn and PrisonerofPain disagree with:

        http://tweakers.net/nieuws/97839/directx-12-kan-energieverbruik-halveren-of-fps-met-de-helft-verhogen.html?showReaction=7122265#r_7122265, http://tweakers.net/nieuws/97839/directx-12-kan-energieverbruik-halveren-of-fps-met-de-helft-verhogen.html?showReaction=7122296#r_7122296

        You stating ”it’s a prototype API” while you don’t have internal XBO documentation (and they do) proves at the very least that your claim can be questioned, and is questioned by people who work with XBO hardware for a living.

        (And it’s also very funny/sad, but predictable, that you start asking for proof and then go ask what PoP’s latest Amiga demo was. It’s always about ”winning” with you, but more on that later.)

        ”But do tell me something… Apparently you are allowed to make these wild speculations (and you try extremely hard to somehow justify Mantle and AMD’s actions, even in light of the facts and evidence over time)…”

        Even when AMD came to market first with DX11 GPUs, instead of complimenting them (just like how you are complimenting Nvidia now with their supposed DX12 features), you had to discredit them instantly, talking about how all Nvidia hardware up to that point supported DirectCompute, with AMD only having HD4.x supporting it. (Yeah, I have actually gone and read up on your previous entries to research where this clear bias comes from. Sue me.)

        If that isn’t hinting at a clear bias, then what is?

        In fact, you don’t even stop there. You even go on to mention how Nvidia has PhysX and how it was already used in games… unlike DX11. Your efforts to discredit AMD are going through the roof, honestly. But after all that discrediting, even before AMD announced their cards, you already concluded how others would respond: they are all ”bitter”.

        Nah Scali, it’s crystal clear who really is ”bitter”. You buying one of these cards later on doesn’t prove there isn’t an agenda – it just proves you bought that card so you can argue that you aren’t that biased. Heck, you even say yourself that you bought it just to ”toss around”.

        Yet ironically enough, the 5770/5870 cards with all of their flaws (as you were sure to highlight in your professional ”neutral” point of view) are still frequently listed as minimum system requirements for games, like the 8800 GT was, since they were the first DX11 GPUs on the market, just as the 8800 was the first DX10 GPU. (The GTX 460 is actually listed as well, although it launched 10 months later and thus wasn’t first to market – which just goes to show how much longevity the 5770/5870 had, like the 8800 Nvidia series before it.)

        ”While at the same time, I say things while I actually *know* some inside information on DX12 development (obviously I knew this when I made the statements on Tweakers and other sites), I just couldn’t make it public because of the NDA, but you have constantly been on my case about that?”

        So why call PoP out on his NDAs, yet I can’t do the very same? And why do you make it so difficult on yourself to just adhere to the requests made? You really have double standards going on here (or are just being hypocritical).

        ”There is no need to. Your arguments are so weak that nobody takes them seriously.”

        Sure… just like how earlier on there were apparently no grounds to be bothered investigating the 3 examples of questionable Nvidia marketing I mentioned. If you can’t refute it, you ignore it.

        Since it’s apparently okay to throw in completely different topics:

        You are the same guy that made the claim that AMD paid Oxide for their benchmarks, which is a completely unverifiable claim to make unless you literally saw the transactions being made. It’s like me saying that Nvidia pays developers to use Hairworks or any other Nvidia technology for promotional purposes.

        (You wrote this, by any chance? It wouldn’t be the first time you used duplicate accounts to keep arguing your points – like I said, I am reading up on your past blogs – https://forums.geforce.com/default/topic/878225/geforce-900-series/pascal-gpus-manufactured-on-tsmc-16nm-finfet-node-/post/4817405/#4817405)

        You are also the same guy that claimed that current-gen consoles were already ”outdated and underpowered” at launch. Aside from the indeed crappy performance of the Jaguar cores, one apparently forgets that they are semi-custom APUs; in the case of the PS4 it is heavily tweaked towards asynchronous compute, whilst the XBO has embedded memory up its sleeve (32 MB of it). Titles like Quantum Break and Uncharted 4/The Order show what the hardware is capable of, with The Tomorrow Children to follow.

        Instance of PS4 Liverpool GPU customizations: https://en.wikipedia.org/wiki/PlayStation_4_technical_specifications#Graphics_processing_unit
        The Xbox One is essentially an underclocked Radeon 7790/R7 260 with the benefit of 32 MB of ESRAM embedded memory.

        In fact, you even one-upped yourself and stated that even the X360 and PS3 were outdated at launch, asserting that they stayed at DX9-level features whilst Vista/DX10 were on the desktop. Apparently you are forgetting that in the case of the X360, Xenos was the first unified-shader hardware on the market, supported a feature set beyond DX9/SM3, and came around in November 2005. Vista/DX10 wasn’t even on the market yet; desktop equivalents (from Nvidia) came around nearly a year later, and if you are going to get pedantic about it, the DX10 SDK was only publicly available in 2007, although internally it existed as early as December 2005 (which is still a month later than the X360, which was already publicly available at the time). And the PS3 introduced massive parallelization into the gaming industry, something there wasn’t much focus on until then.

        Before you then say ”they aren’t very relevant to desktop games”: well, thanks to consoles, cards like the 8800 GT and HD 4870 held out so long, because developers targeted consoles first, PC later.

        The only reason you call these consoles ”outdated” is because 2 out of 4 are built on AMD CPUs and GPUs, and the X360 has a (now AMD) GPU. And because it’s AMD, you go out of your way to make completely illogical claims about them when it’s clear-cut that they are wrong.

        These are just things you claimed late last year, so fairly recent.

        Basically, your behavior is: ”You dare insult the intelligence of the great Scali? For I am all-knowing, I can write 3D engines and be in a DX12 Early Access forum. Fear my knowledge of DX12 and AMD in general, you small-minded minions! Even Richard Huddy cannot grasp the depth of knowledge I possess!” It’s just socially flawed. You should count how many times you use ”I” in your blogs and postings, especially when being disagreed with, even on this very topic.

        Aside from all that, Scali, no matter how wrong or right you are – your attitude sucks. You go up against site owners of Beyond3D and the like with the same kind of behavior you always bring to your own blogs when trying to prove people wrong – condescending and snarky. Just look at your own blogs, or at Tweakers for that matter. And when all else fails – you challenge them. Either to a coding competition, or throwing back the blame by saying ”How many games have you written, hm? You know nothing”, or you go on to slander their names, as you have recently shown with oisyn and Dahakon.

        And that’s why most people don’t bother having a discussion with you – not because you ”know” your shit, but because your attitude is incredibly childish most of the time and almost obsessively focused on ”winning” debates, no matter with whom or about what.

  11. Scali says:

    Just a quick question to whoever has bothered to read this far:
    Is anyone even remotely convinced by Redneckerz’ arguments?
    I mean, take his ‘proof’ for Mantle: Apparently a slide released AFTER DX12 was announced (which was around March 2014: http://www.extremetech.com/gaming/177992-microsoft-confirms-long-overdue-directx-12-will-be-unveiled-at-gdc), showing that AMD has DX12-support, is somehow an excuse for AMD clearly stating that there would not be DX12 a few months earlier?
    It’s pretty much a no-brainer that any GPU vendor puts a big “We support DX12” all over their presentations once the general public is aware of DX12 coming.
    But even if you’re under NDA, there’s no way you can make the statement that there will be NO DX12, unless you have an agenda, where DX12 does not fit that well.

    Or linking to entries from the Mantle FAQ, which again was published AFTER DX12 was announced?
    I don’t know about you, but when I read that FAQ I think it is mostly written as ‘damage control’ after DX12 was announced, and ‘there will be no DX12’ could no longer be sustained: It answers many questions that arose from earlier coverage of AMD’s Mantle, and tries to shoehorn everything into a world where DX12 exists and consoles do not run Mantle (“let’s reinvent it as a ‘showcase’, and then we can let it die off in a few months”). Some of it is in direct conflict with what people like Richard Huddy have said about Mantle in the media earlier. And all that can easily be found by googling for some older tech-site articles/interviews on Mantle.

    The whole showcase-thing doesn’t mesh with ‘there will be no DX12’ obviously. What’s the point of making a showcase for an API that will never exist?
    And for OpenGL/Vulkan you wouldn’t need to go as far as actually building the drivers and releasing a commercial game, before sending your API ‘proposal’ to Khronos. They would have accepted a much more basic ‘showcase’/’proof of concept’.
    And if MS wasn’t interested in a new DX anyway, Khronos was the first and only party you would have to approach.

    Heck, just take this article: http://www.anandtech.com/show/9036/amd-lays-out-future-of-mantle-changing-direction-in-face-of-dx12-and-glnext
    Specifically says AMD is changing direction. So anything published after that date is no longer representative of Mantle as it was originally portrayed (which does not necessarily correspond with AMD’s actual strategy and goals), and can not be used as ‘evidence’ to support Redneckerz’ claims.

    • Redneckerz says:

      If you are going to be that much of a poor sport and not show the whole response I made prior (else why would you ”suddenly” ask your audience if anyone is convinced by the arguments I’ve made?), then it’s just another indicator of you not being ”neutral”.

      But i digress:

      ”I mean, take his ‘proof’ for Mantle: Apparently a slide released AFTER DX12 was announced, showing that AMD has DX12-support, is somehow an excuse for AMD clearly stating that there would not be DX12 a few months earlier?”

      You don’t refute what is written on that slide: AMD was working with Microsoft on DX12. You don’t link to the statement where AMD said there would be no DX12, which is obviously PR speak. Even so, it’s obvious why they would yell that – to publicly ”provoke” Microsoft. You must already have forgotten the whole explanation of how Mantle was just a showcase API and not much more.

      ”It’s pretty much a no-brainer that any GPU vendor puts a big ”We support DX12” all over their presentations once the general public is aware of DX12 coming.”

      And it just shows that their previous statements were, well, meant to provoke and to direct attention to Mantle, which, as is quite obvious at this point, was a showcase API after all.

      ”Specifically says AMD is changing direction.”

      *Sigh* Of course they do. They have showcased the benefits of a low-level API, as later seen in DX12, and given the tech to Khronos for Vulkan. What does AMD have to gain by keeping Mantle in the air? Their ”strategy” worked.

      But let’s copy-paste that bit again:

      I mean, it’s clear from the get-go that Mantle was always a showcase API to begin with:
      From the AMD FAQ:
      http://support.amd.com/en-us/search/faq/185
      ”Mantle complements DirectX by providing a lower level of hardware abstraction for developers who want and need it, echoing the capabilities they have come to expect on dedicated game consoles.​”

      At best, it was an extension, not a standardized component. They literally said this even BEFORE DX12 was announced.

      http://support.amd.com/en-us/search/faq/184
      ”To validate the concepts”.

      But before you ask: it also says something that seemingly goes against what I just said:
      ”Our intention is for Mantle, or something that looks very much like it, to eventually become an industry standard applicable to multiple graphics architectures and platforms.”

      Back then, DX12 wasn’t announced yet, but they obviously knew it was in development, given that they presented a slide after the DX12 announcement showing that they were working with Microsoft on it: http://videocardz.com/49975/microsoft-announces-directx-12-coming-2015

      And specifically, read the bit below:

      ”The whole reason they were saying that their ”intention” was to be an industry standard and that ”Mantle complements DirectX by providing a lower level of hardware abstraction” was a way to publicly say to Microsoft: ”Hey, we want low-level APIs, look at our limited solution.” (And I am sure they have said so internally as well.) They couldn’t just run around and say: ”Mantle is our solution for a low-level API that DX12 will also feature” when DX12 itself, and thus what it was going to do, was still unannounced to the public. Doing so would have broken NDA.”

      All that boasting was done on purpose.

      ”But even if you’re under NDA, there’s no way you can make the statement that there will be NO DX12,”

      Wait a second.

      Do you literally think that AMD LEGITIMATELY thought that there would be NO DX12?

      Because if you did, that would be the most insane assumption to make.

      So when AMD posted ”There will be no DX12” through their PR, did you think anything other than ”This is an outlandish claim to make; this must be PR in order to generate attention for Mantle”?

      Whilst I wait for your answer: isn’t that the whole purpose of PR talk? Boasting outlandish claims for attention? It clearly helped. You gotta do what you gotta do to get attention for your API, right? But of course, instead of just using common sense and thinking ”Well, it’s PR speak, so saying there will be no DX12 is quite an outlandish claim to make and looks like it’s only used to get attention on Mantle”, you go into tinfoil-hat territory and say ”unless you have an agenda”. Well, they did, and I have given the reason why. (Especially when AMD at the time was already working with Microsoft on that very same DX12.)

      AMD PR speak translates in Scali’s mind into ”having an agenda”, which in the end is the result of a probable gross assumption on his end. Nvidia does the exact same thing, but also misleads its users by not being upfront about it. What do you think, that they have an ”agenda” as well? In the end, it’s all just PR speak. If you don’t know how that works, then don’t try to twist and spin it into something it isn’t. Because that is obviously what you are doing right now.

      I told you this before. Specifically, read the bit below:

      ”The whole reason they were saying that their ”intention” was to be an industry standard and that ”Mantle complements DirectX by providing a lower level of hardware abstraction” was a way to publicly say to Microsoft: ”Hey, we want low-level APIs, look at our limited solution.” (And I am sure they have said so internally as well.) They couldn’t just run around and say: ”Mantle is our solution for a low-level API that DX12 will also feature” when DX12 itself, and thus what it was going to do, was still unannounced to the public. Doing so would have broken NDA.”

      All that boasting was done on purpose. Was it a risky game to play? Sure. You are boasting outlandish claims to start with. AMD could only make those when it KNEW that DX12 was coming and would have that low-level API. Mantle was just a limited public showcase API, and internally a good conversation starter for AMD and Microsoft in the design stages of DX12. After all, a limited implementation is faster to build than the full-on thing, like Glide in the 90s.

      ”The whole showcase-thing doesn’t mesh with ”there will be no DX12” obviously.”

      It does if you simply take that statement for what it was: a PR statement. Again: do you literally think that AMD LEGITIMATELY thought that there would be NO DX12?

      ”What’s the point of making a showcase for an API that will never exist?”

      Because AMD already KNEW about DX12 internally. That’s why they could make such an outlandish claim in the first place. You can’t possibly believe that AMD legitimately thought that there would be no DX12, can you?

      ”And for OpenGL/Vulkan you wouldn’t need to go as far as actually building the drivers and releasing a commercial game,”

      Those 12 games were just showcase support for a low-level API. And they didn’t make the drivers for Khronos at first; they made that whole Mantle thing as a showcase for Microsoft, to show the benefits of having a ”console-style” API.

      Who knows how it would have gone if AMD had gone to Khronos first and not worked with Microsoft on DX12.

      ”I don’t know about you, but when I read that FAQ I think it is mostly written as ”damage control” after DX12 was released: It answers many questions that arose from earlier coverage of AMD’s Mantle,”

      Ah, now it’s ”damage control”. How many logical hoops are you going to jump through before you run into common sense, Scali? You see things that aren’t there, and it has clearly gone to your head.

      ”tries to shoehorn everything into a world where DX12 exists and consoles do not run Mantle.”

      Well, current-gen consoles DO NOT run Mantle, right? Or are you going to complain about that too?

      And now the previous remarks, which you haven’t addressed for whatever reason:

      ”Which it doesn’t, because Xbox One already had D3D11.x, which already gives you the efficiency of DX12/Mantle”

      Have you never wondered whether D3D11.x is more likely a specific low-level API for that specific hardware, and not the generic low-level API you seem to imply? That is also what Oisyn and PrisonerofPain disagree with:

      http://tweakers.net/nieuws/97839/directx-12-kan-energieverbruik-halveren-of-fps-met-de-helft-verhogen.html?showReaction=7122265#r_7122265, http://tweakers.net/nieuws/97839/directx-12-kan-energieverbruik-halveren-of-fps-met-de-helft-verhogen.html?showReaction=7122296#r_7122296

      You stating ”it’s a prototype API” while you don’t have internal XBO documentation (and they do) proves at the very least that your claim can be questioned, and is questioned by people who work with XBO hardware for a living.

      (And it’s also very funny/sad, but predictable, that you start asking for proof and then go ask what PoP’s latest Amiga demo was.)

      ”But do tell me something… Apparently you are allowed to make these wild speculations (and you try extremely hard to somehow justify Mantle and AMD’s actions, even in light of the facts and evidence over time)…”

      Even when AMD came to market first with DX11 GPUs, instead of complimenting them (just like how you are complimenting Nvidia now with their supposed DX12 features), you had to discredit them instantly, talking about how all Nvidia hardware up to that point supported DirectCompute, with AMD only having HD4.x supporting it. (Yeah, I have actually gone and read up on your previous entries to research where this clear bias comes from. Sue me.)

      If that isn’t hinting at a clear bias, then what is?

      In fact, you don’t even stop there. You even go on to mention how Nvidia has PhysX and how it was already used in games… unlike DX11. Your efforts to discredit AMD are going through the roof, honestly. But after all that discrediting, even before AMD announced their cards, you already concluded how others would respond: they are all ”bitter”.

      Nah Scali, it’s crystal clear who really is ”bitter”. You buying one of these cards later on doesn’t prove there isn’t an agenda – it just proves you bought that card so you can argue that you aren’t that biased. Heck, you even say yourself that you bought it just to ”toss around”.

      Yet ironically enough, the 5770/5870 cards with all of their flaws (as you were sure to highlight in your professional ”neutral” point of view) are still frequently listed as minimum system requirements for games, like the 8800 GT was, since they were the first DX11 GPUs on the market, just as the 8800 was the first DX10 GPU. (The GTX 460 is actually listed as well, although it launched 10 months later and thus wasn’t first to market – which just goes to show how much longevity the 5770/5870 had, like the 8800 Nvidia series before it.)

      And then I’ll just let the off-topic remarks about consoles slide. (For now.)

      • Scali says:

        Okay, I’m not going to bother responding to every single thing, but basically you’re saying:
        “Yes, I agree with you that AMD knew about DX12, and was probably already involved with MS in developing DX12 when they made the statement that there would be no DX12. But that’s okay because it was just a PR-statement.”

        So, being PR makes everything okay? You can say anything you want, just to promote your products? Even when it isn’t even remotely accurate. Even when it’s patently dishonest?
        Well, then that is the core of our disagreement then. I don’t think you can say anything you want. And neither does the law.

        But wait, at the same time I see you talking about nVidia’s Tegra X1, and how they give a FLOPS rating for FP16, rather than the usual FP32.
        And then you go on about the GTX970, where they say it has 4 GB, but don’t go into detail about the 3.5/0.5 GB segmentation.
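
        The GTX 970 segmentation mentioned here can be sketched numerically. The bandwidth figures below are the commonly reported approximations (7 of 8 memory partitions serve the fast segment), not official nVidia numbers:

        ```python
        # Rough sketch of the GTX 970 memory layout under discussion: the card
        # really does have 4 GB, but split into a fast and a slow segment.
        # Bandwidth values are approximate, commonly reported figures.

        fast_gb, fast_gbps = 3.5, 196.0   # 7 of 8 memory partitions
        slow_gb, slow_gbps = 0.5, 28.0    # the remaining partition, accessed alone

        total_gb = fast_gb + slow_gb
        print(total_gb)               # 4.0 -- so the "4 GB" claim is technically true
        print(slow_gbps / fast_gbps)  # the slow segment runs at roughly 1/7 the speed
        ```

        Which is exactly why the statement is ”technically 100% true” yet still felt misleading to buyers: the headline capacity is real, but not all of it performs equally.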

        So, you’re saying: “If AMD does it, anything is fine with PR. However, if nVidia makes statements that are not even lies, in fact, technically they are 100% true, they are just not as specific as I would like them to be, then PR is not fine.”

        Also, I had to chuckle about you trying to even frame my statement of underpowered consoles as anti-AMD.
        I say they’re underpowered because they’re underpowered. As in: previous gen could do gaming in 720p, this generation can’t quite make the jump to 1080p. At the same time, PC gamers are moving to 1440p and higher without a problem. Not because of the brand of the APU they used.

        As for DirectCompute: AMD claimed the HD5000-series was the first GPU with DirectCompute on the market. Which was not true, because as I said, the venerable 8800 series also supports DirectCompute, and that certainly was on the market long before the HD5000-series (where the 5000-series was the first DX11-hardware on the market, the 8800 was the first DX10-hardware on the market). I even pointed out the specific driver revision for nVidia which enabled DX11/DirectCompute support, which was available before AMD’s hardware was on the market (and which I was already using in development).
        After all, modern GPGPU started on the 8800 with Cuda, which spun off OpenCL and DirectCompute. Both of which work on the original 8800, even though these APIs arrived years later.
        It’s about giving credit where credit’s due. Which makes your position strange: even in the face of contradicting evidence, you still want to give AMD full credit for Mantle and kick-starting DX12 (and by extension WDDM 2.0 and Windows 10, without which the new DX12 API cannot function, because the new low-level API cannot be retrofitted onto the DX11 driver interface in WDDM 1.3 and older).
        At the same time, you have issues with giving nVidia credit for kick-starting GPGPU/DirectCompute/OpenCL with Cuda, while unlike Mantle, this is not controversial in the least.

        You see, pretty much all of the things you claim to be ‘anti-AMD’ are actually triggered by statements that AMD or their fanboy following make. I don’t just ‘invent’ things myself, I respond to things I see and hear, and which I don’t agree with, because they are factually inaccurate, or worse.
        I’d get triggered by the same sort of stuff from any other vendor, it’s just far less common.

  12. Redneckerz says:

    Let me preface this by saying that when I figured the replies still weren’t online after 8 hours, I assumed you had held them back. A gross assumption on my part; for that, my apologies.

    ”Okay, I’m not going to bother responding to every single thing,”

    I knew you wouldn’t.

    ”but basically you’re saying: ”Yes, I agree with you that AMD knew about DX12,”

    Always trying to make it sound like you were right from the start, huh? I didn’t agree with you.

    You said: ”But even if you’re under NDA, there’s no way you can make the statement that there will be NO DX12, unless you have an agenda, where DX12 does not fit that well.”

    You called AMD out on having an agenda with DX12, and I laid out why that wasn’t the case.

    ”So, being PR makes everything okay?”

    Making outlandish claims is a part of PR, yes. And people fall for it, that is the nature of the beast. http://time.com/money/2823108/why-we-always-fall-for-products-making-outlandish-claims/

    Heck, look at the American presidential race – both parties continuously make outlandish claims about each other. People eat it up regardless of whether it’s the truth or not. That is literally how some forms of PR work. It goes on in companies too – they all do it. It’s not as if only AMD makes outlandish claims that are blatantly, obviously untrue; they all do. That’s why it’s called PR for a reason.

    Or are you going to suggest that only AMD is ”guilty” of this behavior?

    ”You can say anything you want, just to promote your products? Even when it isn’t even remotely accurate. Even when it’s patently dishonest?”

    To a certain level, yes. You obviously can’t claim that Nike shoes also possess the ability to time-travel. What you can do is say that they give you wings, and then cut to a reel of Michael Jordan flying through the air while scoring a point. PR speak still has to maintain the suggestion that things may be true, even when in reality they obviously aren’t. So there is a limit on how outlandish your claim can be.

    ”Well, then that is the core of our disagreement then.”

    It seems so. But that is how PR works. So basically, all you have on your ”AMD says there is no DX12” point is that you disagree that such a bold claim (which they obviously could only make if they already knew of the existence of a new DX) was a PR statement at the time.

    ”I don’t think you can say anything you want.”

    Up to a certain point you can; see above.

    ”But wait, at the same time I see you talking about nVidia’s Tegra X1, and how they give a FLOPS rating for FP16, rather than the usual FP32. And then you go on about the GTX970, where they say it has 4 GB, but don’t go into detail about the 3.5/0.5 GB segmentation.”

    Because unlike AMD's statement, it is misleading, and it can be verified as such. AMD's statement was just an outlandish claim, based on an English translation of a poorly worded German source. In that same translation they already added nuance as well (more on that later).

    Obviously, what Nvidia says is still PR speak as well, since saying that your SoC can yield 1 teraflop sounds a lot nicer than ”It's 1 teraflop, but with reduced FP16 precision”. And as for the GTX 970, it should be fairly obvious why they should have communicated that bit. It's not that big a point that Nvidia does these things; I love their Tegra X1 SoC and their other technologies.

    My point was more that you NEVER address these things yourself.
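    The FP16-vs-FP32 point above boils down to simple arithmetic. As a quick sketch (the figures of 256 CUDA cores at roughly 1 GHz are the commonly cited Tegra X1 specs, not numbers taken from this thread):

```python
# Back-of-the-envelope FLOPS arithmetic behind the Tegra X1 "1 teraflop" claim.
# The specs below (256 CUDA cores, ~1 GHz clock) are commonly cited figures,
# used here purely for illustration.
cores = 256
clock_hz = 1.0e9
ops_per_fma = 2  # a fused multiply-add counts as two floating-point ops

fp32_gflops = cores * clock_hz * ops_per_fma / 1e9  # standard precision
fp16_gflops = fp32_gflops * 2                       # two half floats per lane

print(fp32_gflops)  # 512.0
print(fp16_gflops)  # 1024.0 -- the headline "1 teraflop"
```

    In other words, the headline number is only reachable at half precision, which is exactly the nuance the PR line leaves out.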

    ”So, you’re saying: ”If AMD does it, anything is fine with PR. However, if nVidia makes statements that are not even lies, in fact, technically they are 100% true, they are just not as specific as I would like them to be, then PR is not fine.”

    If that is what you want to call it. I think there is a difference between making a bold claim that everyone can instantly see isn't true, and a (perhaps intentionally) misleading claim that avoids specifics, so that you have to dig to find the answer. Again, that's also PR, and I don't hold it against them as much as you hold AMD's words against them; I am just stating that it's telling a half-truth to your customers.

    As for the origin link of your whole ”AMD said there will be no DX12” statement:

    http://www.hardwarecanucks.com/news/amd-roy-taylor-directx12/

    Since you don't provide the origin of the ”There will be NO DX12” statement, with some Google-fu I found your original link, which was an English translation of the German source. You do know that the original German source is really poorly worded, right?

    And what surprises me is that it's from April 2013. What surprises me even more is that they later nuanced their obvious ”AMD NO DX12” claim: ”As far as we know there are no plans for DirectX 12.”

    ”As far as we know” – it wasn't a lie at all; they were just making guesses. They obviously couldn't say that DX12 was coming when it was barely in its starting phases. That would have broken the NDA. If they were already working together, that is.

    The fact that they made that nuance in the first place is an obvious hint that AMD already knew about the existence of DX12.

    You first made your claim in this thread – http://tweakers.net/nieuws/101673/amd-tegen-ontwikkelaars-richt-je-op-dx12-of-glnext-in-plaats-van-mantle.html?showReaction=7542242#r_7542242 – where you had a similar discussion and where your original statement was: ”AMD slowed down DX12.” (Note how that already differs from ”AMD said there will be no DX12”, which you later claimed.)

    Like user Belgar then said: http://tweakers.net/nieuws/101673/amd-tegen-ontwikkelaars-richt-je-op-dx12-of-glnext-in-plaats-van-mantle.html?showReaction=7542348#r_7542348

    ”AMD didn't slow down DX12; at the time they weren't invited by Microsoft to talk about the future of DirectX. So obviously they couldn't slow down DirectX. You do have to put the situation in the correct timeframe.”

    Note how that comment supports the theory of why AMD was making Mantle at the time. Mantle was announced in September 2013 (a month before the DX12 date your picture mentions), and since a limited subset of a low-level API takes less time to build than the full thing, it clearly supports the theory that they wanted to show the public, and Microsoft, the benefits of a low-level API.

    PS: Your answer to that argument was this: http://tweakers.net/nieuws/101673/amd-tegen-ontwikkelaars-richt-je-op-dx12-of-glnext-in-plaats-van-mantle.html?showReaction=7542376#r_7542376

    ”No, you shouldn't believe everything that AMD tells you. That this ”dude” said this doesn't mean it's true.”

    So basically you ran out of arguments and went on making an unverified assumption. Your whole original statement of ”AMD slowed down DX12” was unverified, you failed to provide proof back then, and it was completely debunked. That still didn't stop you from spreading that misinformation, changing it to ”AMD said there will be no DX12”. Hence why your disciples like Nomis make the same claims: they ran with your wrong assumption that AMD legitimately thought there would be no DX12, without verifying the original sources at hand, which are obviously far more nuanced.

    In the end, nobody really knows whether DX12 was started because of Mantle or not, given how close together their development started. What we do know is that Mantle was announced first and demonstrated first, since it's just a subset of a complete API, as a showcase for low-level APIs.

    And that the whole ”There will be no DX12” was an obvious PR statement back then. AMD didn't ”lie” and didn't ”slow down” anything; you just took what they said literally and didn't even think for one second about the possibility that it would be pretty dumb of AMD to legitimately believe there would be no DX12, given their worldwide contacts. They simply made that statement because they already KNEW about its existence.

    ”I say they’re underpowered because they’re underpowered.”

    The only thing that is an obvious bottleneck is its CPU cores. That's it.

    ”As in: previous gen could do gaming in 720p,”

    Quite a lot of titles did, and there are also quite a few that don't. Some even have different resolutions on the two platforms: https://forum.beyond3d.com/threads/list-of-rendering-resolutions.41152/

    And let's not forget that the generational change from PS360 to PS4/XBO also saw significant changes in the visual pipeline. Physically based rendering is a common staple on current-gen, while it was only at the end of last-gen that Dontnod was able to implement a PBR pipeline on last-gen consoles (and even then had to work through a lot of issues) for Remember Me: https://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/

    The visual fidelity, despite the hardware being so ”underpowered”, has increased significantly.

    ”this generation can’t quite make the jump to 1080p.”

    Only the Xbox One has issues with that. Developers have found that the usual ”sweet spot” for it is 900p. The PS4 usually targets (and achieves) 1080p. Heck, Uncharted 4 is 1080p. But let's check out some titles with Digital Foundry for additional evidence:

    Thief: 1080p on PS4, 900p on XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-thief-next-gen-face-off

    Outlast: 1080p on both PS4 and XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-outlast-face-off

    Sniper Elite 3: 1080p on both PS4 and XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-sniper-elite-3-face-off

    Metro Redux: 1080p on PS4, 912p (it's a weird resolution, I know) on XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-metro-redux-face-off

    Destiny: 1080p on both PS4 and XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-destiny-face-off

    Shadow of Mordor: 1080p on PS4, 900p on XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-shadow-of-mordor-face-off

    Alien: Isolation: 1080p on both PS4 and XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-alien-isolation-face-off

    Sleeping Dogs: 1080p on both PS4 and XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-sleeping-dogs-definitive-edition-face-off

    Shadow Warrior: 1080p on PS4, 900p on XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-shadow-warrior-face-off

    Lords of the Fallen: 1080p on PS4, 900p on XBO – http://www.eurogamer.net/articles/digitalfoundry-2014-lords-of-the-fallen-face-off

    Let's throw in some more recent titles as well:

    Metal Gear Solid V: The Phantom Pain: 1080p on PS4, 900p on XBO (An upgrade from Ground Zeroes which was 720p) – http://www.eurogamer.net/articles/digitalfoundry-2015-metal-gear-solid-5-phantom-pain-face-off

    Batman: Arkham Knight: 1080p on PS4, 900p on XBO – http://www.eurogamer.net/articles/digitalfoundry-2015-batman-arkham-knight-face-off

    Mad Max: 1080p on both PS4 and XBO – http://www.eurogamer.net/articles/digitalfoundry-2015-mad-max-face-off

    Fallout 4: 1080p on both PS4 and XBO – http://www.eurogamer.net/articles/digitalfoundry-2015-fallout-4-face-off

    Just Cause 3: 1080p on PS4, 900p on XBO – http://www.eurogamer.net/articles/digitalfoundry-2015-just-cause-3-face-off

    Rainbow Six: Siege: 1080p on PS4, 900p on XBO, but they use some kind of ”reconstruction” tech, like Quantum Break also does – http://www.eurogamer.net/articles/digitalfoundry-2015-rainbow-six-siege-face-off

    Far Cry: Primal: 1080p on PS4, 1440×1080 on XBO – http://www.eurogamer.net/articles/digitalfoundry-2016-face-off-far-cry-primal

    The Division: 1080p on PS4, dynamic resolutions on XBO – http://www.eurogamer.net/articles/digitalfoundry-2016-the-division-face-off

    Dark Souls 3: 1080p on PS4, 900p on XBO – http://www.eurogamer.net/articles/digitalfoundry-2016-dark-souls-3-face-off

    Overwatch: Both PS4 and XBO use dynamic resolutions but operate pretty much all the time on 1080p – http://www.eurogamer.net/articles/digitalfoundry-2016-overwatch-face-off

    Doom: Both PS4 and XBO use dynamic resolutions but PS4 usually hits 1080p, whilst XBO operates on a lower resolution – http://www.eurogamer.net/articles/digitalfoundry-2016-doom-face-off

    Homefront: The Revolution: 1080p on PS4, 900p on XBO – http://www.eurogamer.net/articles/digitalfoundry-2016-homefront-the-revolution-face-off

    And yes, I am aware that there are titles running at lower resolutions – DICE's Frostbite-based titles being a consistent culprit, at 900p on PS4 and 720p on XBO. Ubisoft also had a tendency toward lower resolutions in their earlier titles, but they have come back from that of late. My point is that most titles don't run at those resolutions, something which is backed up by DF.
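    To put the resolution gaps in the list above into numbers, here is a small sketch comparing pixel counts (the width/height pairs are the standard dimensions for these labels, with the odd ones taken from the Digital Foundry articles linked above):

```python
# Pixel counts of the render resolutions mentioned above, relative to full
# 1080p. Width/height pairs are the standard dimensions for these labels.
resolutions = {
    "720p":       (1280, 720),
    "900p":       (1600, 900),
    "912p (XBO)": (1620, 912),   # Metro Redux's odd Xbox One resolution
    "1440x1080":  (1440, 1080),  # Far Cry Primal on XBO
    "1080p":      (1920, 1080),
}

full_hd = 1920 * 1080
for name, (w, h) in resolutions.items():
    print(f"{name:>11}: {w * h:>9,} px = {w * h / full_hd:.0%} of 1080p")
```

    So the much-discussed 1080p-vs-900p gap amounts to roughly 30% fewer pixels: visible, but nothing like the 720p-to-1080p jump between generations.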

    ”At the same time, PC gamers are moving to 1440p and higher without a problem.”

    http://store.steampowered.com/hwsurvey?l=dutch

    According to Steam, the most popular resolution is 1080p at 36 percent, with 1440p accounting for a mere 1.5 percent (a 0.05% increase). The second most popular resolution is 1366×768. So it looks like PC gamers usually game at PS4 resolution. (Albeit with Ultra settings, of course. In most multiplatform releases, the console versions usually check out at the PC equivalent of the ”High” setting.)

    ”As for DirectCompute: AMD claimed the HD5000-series was the first GPU with DirectCompute on the market. Which was not true,”

    Eh: http://www.amd.com/en-us/press-releases/Pages/amd-press-release-2009sep22.aspx

    ”being the first and only card to support DirectCompute 11.”

    It had DirectCompute 11, which, without looking this up (quite tired of sourcing all of the above, as you can see), is on DX11 feature level 11, right? You aren't just omitting the ”11” here on purpose, are you?

    ”Which was not true, because as I said, the venerable 8800 series also supports DirectCompute, and that certainly was on the market long before the HD5000-series (where the 5000-series was the first DX11-hardware on the market, the 8800 was the first DX10-hardware on the market)”

    But that is on feature level 9_3, or at least lower than what AMD claimed.

    ”You see, pretty much all of the things you claim to be ”anti-AMD” are actually triggered by statements that AMD or their fanboy following make.”

    And like I said earlier in this lengthy post:

    ”Obviously, what Nvidia says is still PR speak as well, since saying that your SoC can yield 1 teraflop sounds a lot nicer than ”It's 1 teraflop, but with reduced FP16 precision”. And as for the GTX 970, it should be fairly obvious why they should have communicated that bit. It's not that big a point that Nvidia does these things; I love their Tegra X1 SoC and their other technologies. My point was more that you NEVER address these things yourself.”

    ”I don’t just ”invent” things myself, I respond to things I see and hear, and which I don’t agree with, because they are factually inaccurate, or worse.”

    Or mistakenly assumed, as I have now demonstrated. 🙂

    ”I’d get triggered by the same sort of stuff from any other vendor, it’s just far less common.”

    Don't play dumb, Scali.

    • Scali says:

      I will just respond to one thing here (the rest of your ‘arguments’ don’t even warrant a response):

      ”As for DirectCompute: AMD claimed the HD5000-series was the first GPU with DirectCompute on the market. Which was not true,”

      Eh: http://www.amd.com/en-us/press-releases/Pages/amd-press-release-2009sep22.aspx

      ”being the first and only card to support DirectCompute 11.”

      It had DirectCompute 11, which, without looking this up (quite tired of sourcing all of the above, as you can see), is on DX11 feature level 11, right? You aren't just omitting the ”11” here on purpose, are you?

      Firstly, there is no such thing as “DirectCompute 11”, nor is there anything like “feature level 11”.
      Secondly, if you are going to refer to older blogposts of mine, do it correctly.
      You are referring to this blogpost:
      https://scalibq.wordpress.com/2009/10/03/amd-seriously-needs-to-work-on-their-pr/

      Which displays the following quote:

      “AMD’s upcoming next generation ATI Radeon family of DirectX 11 enabled graphics processors are expected to be the first to support accelerated processing on the GPU through DirectCompute.”

      Which is not the quote you displayed.
      I did not misquote that, there is no “DirectCompute 11” there. Just vanilla “DirectCompute”.
      I can still find that quote eg here:
      http://www.boston.co.uk/press/2009/09/amd-advances-its-commitment-to-opencl-for-gpu-with-review-by-standards-body.aspx
      Or here:
      https://www.chiphell.com/thread-55011-1-1.html
      And here:
      http://www.businesswire.com/portal/site/google/?ndmViewId=news_view&newsId=20090920005055&newsLang=en

      So looks like various sites copied the press release verbatim, and that’s what it said.
      Note that these sites are dated September 21st, where your page is dated September 23rd. So they could not have quoted that page, but rather an earlier press release.

      Speaking of accurate quoting:
      http://www.heise.de/newsticker/meldung/AMDs-Vice-President-im-Gespraech-Es-wird-kein-DirectX-12-kommen-1835338.html

      Aber es wird kein DirectX 12 kommen. Das war’s. Soweit wir wissen gibt es keine Pläne für DirectX 12. (”But there will be no DirectX 12. That’s it. As far as we know there are no plans for DirectX 12.”)

      • Redneckerz says:

        And OF COURSE you have to refute the one point I already said I didn't really look up because I was tired of grabbing all the sources (I did some basic lookups, but nothing extensive; you ignored the whole ”being tired” bit anyway, as you did with most of the post). OF COURSE you have to nitpick it out of the whole post. It's just childish.

        ”I will just respond to one thing here (the rest of your ‘arguments’ don’t even warrant a response):”

        Yeah, thanks for being so thorough, right? My impression is that you simply have no argument to refute it, and you don't want to admit that you literally believed AMD legitimately thought there would be no DX12. Instead you go on ”correcting” and displaying the proper quotes. You obviously didn't read my response thoroughly, or you would know I had made references to the posts already.

        ”Firstly, there is no such thing as ”DirectCompute 11”,

        So why does AMD call it that, then? Oh wait, because it's part of their PR announcement on having the first DirectX 11 cards to market, and adding the ”11” after DirectCompute is just there for completeness' sake? Or are you going to go all paranoid about those things as well?

        ”nor is there anything like ”feature level 11”.”

        At first I was like, ”What are you talking about, I just read about it!”, but then I got it: https://en.wikipedia.org/wiki/Feature_levels_in_Direct3D

        Remember how I said I was already tired of looking up all the sources referenced prior? When I referred to ”feature level 11”, I made the mistake (as you noted, with your nitpicking skills) of assuming it was clear I was referring to feature level 11_0. How foolish of me to assume you would know that. After all, we say ”DirectX 11”, not ”DirectX 11.0”, either. Clearly I made the wrong assumption and figured you would be able to connect the dots, but nope, here you are nitpicking about it.

        ”Secondly, if you are going to refer to older blogposts of mine, do it correctly.”

        Says the guy who doesn't even try to refute all the other sourced answers I have given. It's like you don't even try, because you know I am right on those 😉 (Just being tongue-in-cheek here.)

        ”Which displays the following quote: Which is not the quote you displayed.”

        It's also not a quote that you made; it's a quote from AMD itself. Like I said, I was too tired by then, but here is the link where I originally got it from when I was doing some basic lookups on your statements: https://community.amd.com/thread/119321

        I type my responses in Notepad (Kladblok) so it's a bit clearer for me to see where everything is, and I don't have the downside of unexpected time-outs on sites (which would remove the post I had just written). While responding, I started writing some things in advance about DirectCompute, before I got to the other source, which I felt was more complete and for which I had to rewrite part of my answer. Want to know what I originally wrote? I am showing it just so you know I wasn't purposefully ignoring your blog posts by referring to them incompletely, and it also isn't my current opinion. Just being open and honest here.

        So there:

        ”In fact: https://scalibq.wordpress.com/2009/10/03/amd-seriously-needs-to-work-on-their-pr/

        In their PR they say: ”AMD’s upcoming next generation ATI Radeon family of DirectX 11 enabled graphics processors are expected to be the first to support accelerated processing on the GPU through DirectCompute.”

        Hence ”expected”. In fact at release they even one-upped themselves by supporting DirectCompute 11.

        Your previous picture provided some proof that there were already talks in 2013 about DX12, specifically DX12. Since AMD says they worked with Microsoft on DX12 after it was announced, doesn't it make sense that they started those talks AFTER April 2013? Unless you can back up that AMD was already talking to Microsoft about DX12 even BEFORE 2013.”

        See? I had your blog post nicely referenced. Again, just showing this for completeness' sake and honesty.

        ”So looks like various sites copied the press release verbatim, and that’s what it said.”

        I guess it is. In the grand scheme of the arguments (where you ignored 80% of them), it's not that big of a deal.

        ”Speaking of accurate quoting: http://www.heise.de/newsticker/meldung/AMDs-Vice-President-im-Gespraech-Es-wird-kein-DirectX-12-kommen-1835338.html

        Speaking of accurate reading:

        ”Since you don't provide the origin of the ”There will be NO DX12” statement, with some Google-fu I found your original link, which was an English translation of the German source. You do know that the original German source is really poorly worded, right?”

      • Scali says:

        Clearly i made the wrong assumption here and figured that you were able to connect the dots, but nope, since here you are nitpicking about it.

        Erm, you are the one desperately trying to connect dots here. But it’s pretty irrelevant, because you are arguing about a different quote than the one in my blog, which does not say “DirectCompute 11” in the first place. So it doesn’t connect, period. Therefore I see absolutely no reason to try and guess what AMD may have tried to say with “DirectCompute 11” in a quote I never even referenced in the first place. I’m just saying that “DirectCompute 11” does not match up with Microsoft’s naming scheme.

        Hence ”expected”. In fact at release they even one-upped themselves by supporting DirectCompute 11.

        Except that quote is from September, and I point out that nVidia had already released drivers with DirectCompute back in July of that year (release 190: “Supports Microsoft’s new DirectX Compute API on Windows 7.” Go ahead, grab a GeForce 8800, put it in a Win7 box and download those drivers to see: http://www.nvidia.com/object/win7_winvista_32bit_190.38_whql.html).
        So I’m not sure how anyone can ‘expect’ to be the first with something when you clearly aren’t.
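        For readers trying to follow the version soup in this exchange, a quick lookup table may help; the mapping of Direct3D feature levels to compute-shader profiles below is paraphrased from Microsoft's D3D11 documentation on downlevel hardware:

```python
# Which compute-shader profile goes with which Direct3D feature level,
# paraphrased from Microsoft's D3D11 downlevel-hardware documentation.
# This is what the "DirectCompute 11" vs. plain "DirectCompute" dispute
# above is really about.
compute_profiles = {
    "10_0": "cs_4_0 (optional; DX10-class hardware such as the GeForce 8800)",
    "10_1": "cs_4_1 (optional; DX10.1-class hardware)",
    "11_0": "cs_5_0 (mandatory; the tier AMD's PR dubbed 'DirectCompute 11')",
}

for level, profile in compute_profiles.items():
    print(f"feature level {level}: {profile}")
```

        So a DX10-class card can expose DirectCompute (at the cs_4_x tier), which is why ”first to support DirectCompute”, unqualified, was a stretch.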

        Since AMD says they worked with Microsoft on DX12 after it was announced, doesn't it make sense that they started those talks AFTER April 2013? Unless you can back up that AMD was already talking to Microsoft about DX12 even BEFORE 2013.”

        I can actually, and I have, before.
        See this tweet from Microsoft: http://wccftech.com/phil-spencer-we-knew-what-directx-12-was-doing-when-we-built-xbox-one/
        (as if the timeframe from ‘no DX12’ in April 2013 to a full specification of PSOs in October 2013 was not unrealistic enough)
        So apparently DX12 was ‘a thing’ when MS started on the Xbox One.

        You do know that the original German source is really poorly worded right?”

        You are pushing your opinion as fact. I instead actually quote that source, so everyone can make up their minds themselves about how well it is worded.
        I personally think it’s crystal-clear. So clear that Heise even chose that particular statement as the title of the interview.
        Just as it is crystal-clear that work on DX12 was already underway, and AMD knew about it.

  13. Rebrandeon says:

    http://www.hardocp.com/article/2016/05/27/from_ati_to_amd_back_journey_in_futility

    “In the simplest terms AMD has created a product that runs hotter and slower than its competition’s new architecture by a potentially significant margin.”

    • Scali says:

      Interesting article.
      You can say a lot about Kyle… he can be blunt, opinionated, etc. But he is not (or at least was not in the past) in the habit of just posting random disconnected rants (unlike, e.g., SemiAccurate).
      So let’s see what’s what.
      I did say that nVidia is doing the tick-tock with Maxwell and now Pascal, where AMD is basically skipping the Maxwell-style architectural refinement and going straight to another die shrink. If they also want to close the gap with nVidia at the architectural level, they’re taking a huge risk, and it could blow up in their face.

      I hope he’s wrong about Intel being interested in buying the GPU section of AMD though. Intel is doing quite well on their own currently. Buying the IP makes sense. Hiring a few skilled engineers makes sense. Moving everything over to the AMD GPU section would be a huge waste of resources.

      • Rebrandeon says:

        Kyle’s sources were dead right.

      • Scali says:

        Again I see a focus on ‘DX12 w/Async Compute’.
        This smells of yet another DX12_0 rehash, and again trying to redefine DX12 as if async compute is the only worthwhile feature.
        I can only hope they have something more up their sleeve, because if it really lacks DX12_1 (and other new features such as simultaneous multi-projection) *again*, AND they don’t compete on performance (and performance/watt) either, then AMD is in for a tough fight this round (comparing against 970/980, really?).

      • Alexandar Ž says:

        Once again comparing to the competitor’s cards instead of their own, which I find pretty disingenuous. And the DX12/Async Compute bit is just icing on the cake.
        They’re the only tech company that does this kind of thing in their marketing materials; I’ve never seen Intel or Nvidia do that.

      • Scali says:

        They’re the only tech company that does this kind of thing in their marketing materials; I’ve never seen Intel or Nvidia do that.

        Yes, AMD’s marketing is quite different from the others. That’s what I tried to explain to Redneckerz 🙂
        Also, trying to claim that nVidia’s hardware doesn’t support async compute is disingenuous indeed.
        nVidia does support it, see also this page:
        https://developer.nvidia.com/dx12-dos-and-donts#dx12
        Under “Do’s”:

        Be conscious of which asynchronous compute and graphics workloads can be scheduled together

        And this is an ‘old’ page, dealing with Kepler/Maxwell, it predates Pascal.

        Here is another presentation from nVidia and AMD together that covers async compute (and various other things in DX12, where some differ for AMD and nVidia):
        https://developer.nvidia.com/sites/default/files/akamai/gameworks/blog/GDC16/GDC16_gthomas_adunn_Practical_DX12.pdf
        They say you can net about 10% gains, if done correctly (which implies they support it). They also say that if done poorly, you can get worse performance. This is something people also said about their experiences with AMD hardware.
        The reason why nVidia asked Oxide to disable async compute was probably because it was suboptimal for their hardware.
        Perhaps they thought they could fix it with a game-optimized driver. But I didn’t keep track of that. It would only work if the async path was also re-enabled for nVidia hardware.
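        The ”~10% if done correctly, worse if done poorly” observation can be sketched with a toy frame-time model. All millisecond figures below are made up purely for illustration; they are not measurements of any real GPU or game:

```python
# Toy frame-time model of async compute. All millisecond figures are made up
# for illustration; they are not measurements of any real GPU or game.
gfx_ms = 14.0     # graphics work per frame
compute_ms = 1.5  # compute work per frame

serial = gfx_ms + compute_ms            # compute queued after graphics
good_async = max(gfx_ms, compute_ms)    # compute fully hidden behind graphics
bad_async = gfx_ms * 1.25 + compute_ms  # contention slows the graphics queue

print(f"serial: {serial} ms")               # serial: 15.5 ms
print(f"well-tuned async: {good_async} ms") # well-tuned async: 14.0 ms
print(f"poorly-tuned async: {bad_async} ms")# poorly-tuned async: 19.0 ms
```

        In this toy case the well-tuned overlap saves about 10% of the frame time, while contention between the queues makes the frame slower than not using async compute at all, which matches the ”do it right or don't do it” advice in the slides.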

      • Alexandar Ž says:

        I’ll admit I’ve given up on reading your argument with him.
        His walls of text interspersed with random quotes were really hard to digest, and after reading the first two I haven’t really seen anything new or well constructed, just the usual blind accusations.

        I’m pretty curious where the whole DX12 affair will go. I don’t have any technical background in graphics development, so I don’t feel competent to make guesses or pass judgments, especially since so far it seems like we’ve got a couple of titles, each associated in some way with one IHV or the other, and performing better or worse in correlation with that affiliation.

      • Scali says:

        Yes, we’ll just have to wait and see. It wouldn’t be the first time when a game originally optimized for/by a given IHV performs better on the next-gen product of another IHV.

        The only thing I can say is that it wouldn’t be a good sign if AMD still doesn’t have DX12_1 support. nVidia has had that for 2 generations now, dating back to late 2014. And Intel has had it since SkyLake as well.

      • Alexandar Ž says:

        They say you can net about 10% gains, if done correctly (which implies they support it). They also say that if done poorly, you can get worse performance. This is something people also said about their experiences with AMD hardware.
        The reason why nVidia asked Oxide to disable async compute was probably because it was suboptimal for their hardware.
        Perhaps they thought they could fix it with a game-optimized driver. But I didn’t keep track of that. It would only work if the async path was also re-enabled for nVidia hardware.

        I think I’ve read this somewhere already (could have even been here).
        My takeaway was that async compute is something that has to be tailored specifically for a given architecture, so it’s quite possible that the big delta in Ashes is at least in part thanks to Oxide not optimizing it very well for Kepler/Maxwell.
        To be honest, I’m not really sure why that game is touted as some universal benchmark. It’s not terribly popular or good (at least as far as I’ve heard), and it doesn’t strike me as doing anything all that revolutionary; if anything it kind of looks like a prettier SupCom.
        It’s a single datapoint, which is nice to have, but it hardly paints a whole picture.
        And haven’t they previously done the Mantle “benchmark” in collaboration with AMD? Star Swarm or what was it called.

      • Scali says:

        My takeaway was that async compute is something that has to be tailored specifically for a given architecture, so it’s quite possible that the big delta in Ashes is at least in part thanks to Oxide not optimizing it very well for Kepler/Maxwell.
        To be honest, I’m not really sure why that game is touted as some universal benchmark.

        Yup, everything is way overblown.
        AMD is using the game as a marketing tool. Async compute is the only thing they have, as far as DX12 goes.
        So they had it optimized specifically for their architecture, and tried to make it look like AMD has the superior hardware.
        It’s a huge amount of hype anyway, because AMD only gains about 8% of performance in that game.
        nVidia’s GPUs are fast enough to cover that with brute force.

        And haven’t they previously done the Mantle “benchmark” in collaboration with AMD? Star Swarm or what was it called.

        Yup. That one was again AMD marketing. They even managed to trick reviewers into testing only AMD hardware. The few reviews that also tested nVidia hardware showed that nVidia’s hardware performed a LOT better than AMD’s under DX11. So the benchmark mainly demonstrated how poorly AMD’s DX11 drivers performed, rather than how great Mantle was.
        When actual Mantle games arrived, such as BF4 and Thief, it was a huge disappointment for most people, since Mantle only gained a few fps in the best case, and was even slower than DX11 worst-case.

        I hope nVidia won’t ever go down that route. nVidia pays and supports developers as well to optimize for nVidia, and use nVidia-specific features. But at least nVidia concentrates on actual games. So the gains you see in benchmarks are the gains you see when you buy and play the game yourself. StarSwarm wasn’t a game, it just manipulated their customers. And AoTS doesn’t seem very different. Oxide made some big public statement about async compute, and how nVidia allegedly didn’t support it. And it began to lead a life of its own from there.
        At the end of the day, the game still runs best on nVidia hardware. But a lot of people still think AMD has the ‘One True DX12 hardware!’.

        Oxide themselves also took a step back: http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/2130#post_24379702

        Regarding Async compute, a couple of points on this. First, though we are the first D3D12 title, I wouldn’t hold us up as the prime example of this feature. There are probably better demonstrations of it. This is a pretty complex topic and to fully understand it will require significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn’t hold Ashes up as the premier example of this feature.

        Shame that the AMD crowd does exactly that.

      • nomis says:

        And now AMD is at it again with Ashes. AMD claims the 1080 does not render everything, to gain a performance advantage.

        The comparison is quite dumb anyway. Two 480s are actually 300 W TDP against the 180 W TDP of the 1080. It’s just a comparison on the merit of price again.

      • Scali says:

        AMD at its disingenuous best again…
        Firstly, if you know that the benchmark has rendering issues on your competitor’s cards, why would you use it anyway? These would obviously invalidate the results. If you’re going to make a comparison, then do so with a game that works without issues on both cards. Or actually, one game is not enough, do a few.
        Secondly, again it’s AMD taking digs at their competitor. That’s not ‘chic’. If nVidia indeed has driver issues with this game, the right thing to do is to wait for nVidia to fix them and/or come with a response, instead of going for a full frontal attack in public (for all we know, this really is some kind of driver bug, and it may actually decrease performance, because the bug makes the shaders do more work than they should).

        The ‘chic’ thing to do is to just compare your hardware to your own previous hardware, which rules out any of this nasty stuff. People can easily extrapolate how the hardware compares to the competitor’s, because there are plenty of reviews comparing the older hardware under controlled circumstances.

  14. Redneckerz says:

    Alright, it’s now abundantly clear you don’t even really read what is being said.

    ”Erm, you are the one desperately trying to connect dots here.”

    You already knew I was talking about the correct feature levels when I said: ”It had DirectCompute 11, which, without looking this up (quite tired of sourcing all of the above, as you can see) is on DX11 feature level 11, right?”, when just below that I said of the 8800 GT’s DirectCompute feature levels: ”But that is on a feature level 9_3 or at least lower than what AMD claimed.”.

    So you KNEW I was talking about the right thing. For you to not read it properly and then very childishly nitpick that ”it wasn’t called feature level 11”, when it’s obvious I was referring to 11_0, is more proof I was tired at the time. But of course, you had to nitpick at it, because that is obviously such a mature thing to do.

    ”But it’s pretty irrelevant,”

    It definitely is, because you can’t read properly or be bothered to argue about all the other points (likely because you can’t admit when you are wrong).

    ”because you are arguing about a different quote than the one in my blog, which does not say ”DirectCompute 11” in the first place.”

    Yet the later press statement, when the HD 5800 series was released, does. Or what, are you smelling some anti-AMD nonsense in that statement?

    ”So it doesn’t connect, period. Therefore I see absolutely no reason to try and guess what AMD may have tried to say with ”DirectCompute 11” in a quote I never even referenced in the first place. I’m just saying that ”DirectCompute 11” does not match up with Microsoft’s naming scheme.”

    Just as a reminder, since for no apparent reason you bothered to ”analyze” a part about which I said beforehand:

    ”I type my responses in Kladblok (Notepad) so it’s a bit clearer for me to see where everything is, and I don’t have the downside of unexpected time-outs by sites (which would remove the post I had just made). While responding I started writing some things in advance about DirectCompute, before I got to the other source, which I felt was more complete and for which I had to rewrite part of my answer. Wanna know what I originally wrote? I am showing it just so you know I wasn’t purposefully ignoring your blogposts by incompletely referring to them, and it also isn’t my current opinion. Just being open and honest here.”

    ”Except that quote is from September, and I point out that nVidia had already released drivers with DirectCompute back in July of that year (release 190. Go ahead, grab a GeForce 8800, put it in a Win7 box and download those drives to see: http://www.nvidia.com/object/win7_winvista_64bit_190.38_beta.html).”

    You are so dense. READ what I said before about that: ”But that is on a feature level 9_3 or at least lower than what AMD claimed.”.

    ”DirectCompute 11” was likely just phrased that way by AMD for completeness’ sake, since theirs were the first DX11 cards on the market.

    ”So apparently DX12 was ‘a thing’ when MS started on the Xbox One.”

    Thus it is also likely that AMD knew of its existence not much later, and hence why they made all those statements later.

    ”You are pushing your opinion as fact.”

    As if you NEVER do that, Scali. Really poor on your end to throw that in people’s faces when you have been rebutted countless times before. Yet here you are, still trying to nitpick and argue on really the most trivial of points. It really is all about ”winning” debates with you, even when you look completely foolish whilst doing so.

    You can also run that whole article through a translator if that helps you sleep at night. I referred to the German source in the beginning and used the English translation, since with a bit of moving and switching of the German source they meant the same thing. That’s why I only referred to the German source; we are on an English-speaking site after all.

    ”I instead actually quote that source, so everyone can make up their minds themselves about how well it is worded.”

    On which I pull a quote from earlier on: ”And that’s why most people don’t bother having a discussion with you – not because you ”know” your shit, but because your attitude is incredibly childish most of the time and almost obsessively focused on ”winning” debates, no matter from whom or what.”

    Thanks for proving my point that you have to ”win” this tidbit here by quoting the actual source, whilst I only referred to it and quoted the English-translated source instead (again, since we are on an English-speaking site after all).

    ”I personally think it’s crystal-clear. So clear that Heise even chose that particular statement as the title of the interview.”

    Yet you didn’t bother to link to the original source before last year – you, too, linked to the English translation.

    ”Just as it is crystal-clear that work on DX12 was already underway, and AMD knew about it.”

    Yet you were the one to claim ”AMD says there will be no DX12”. So, although you didn’t bother addressing it (because you couldn’t go against it, since it was right), you seemingly are now convinced that AMD already knew about DX12 when work on DX12 was under way. I guess this is your way of admitting that I was right 🙂 And for that I thank you.

    • Scali says:

      Yet the later press statement, when the HD 5800 series was released, does. Or what, are you smelling some anti-AMD nonsense in that statement?

      I was talking about the earlier press statement.

      You are so dense. READ what I said before about that: ”But that is on a feature level 9_3 or at least lower than what AMD claimed.”.

      Who is dense here? I take a *direct quote* that even today can be found verbatim on at least 3 websites, which I pointed to, and which do *not* specify anything about featurelevels. They just make the blanket statement ‘first DirectCompute’.
      Also, speaking of dense… Obviously the GeForce 8800 is featurelevel 10_0, being the first fully DX10-compliant card.
      Aside from that, the featurelevel is rather irrelevant, since DirectCompute is one of those optional features, not tied directly to a featurelevel. For featurelevels lower than 11_0, compute shader support is not required, but there is a caps bit for it, so the functionality can optionally be exposed.
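      To make that concrete: here is a minimal sketch (not from the original post, untested, names per the public D3D11 headers) of how an application would query that optional-feature caps bit via ID3D11Device::CheckFeatureSupport, rather than relying on the featurelevel alone:

```cpp
// Windows-only sketch: detecting compute shader support in D3D11.
// On feature level 11_0 and up, compute shaders (cs_5_0) are mandatory.
// On feature levels 10_0/10_1, cs_4_x support is optional and exposed
// through a caps bit queried with CheckFeatureSupport.
#include <d3d11.h>

bool SupportsComputeShaders(ID3D11Device* device)
{
    if (device->GetFeatureLevel() >= D3D_FEATURE_LEVEL_11_0)
        return true; // compute shaders are required at this level

    // Downlevel hardware: check the optional caps bit.
    D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS, &opts, sizeof(opts));
    return SUCCEEDED(hr)
        && opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x;
}
```

      Which is exactly why a blanket ”first DirectCompute” claim tells you nothing without checking what the driver actually exposes on a given card.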

      Not because you ”know” your shit, but because your attitude is incredibly childish most of the time and almost obsessively focussed on ”winning” debates, no matter from who or what.”

      Oh really?
      Correct me if I’m wrong, but this is my blog you’re posting on. I never asked you to come here. You come here, and make some pretty aggressive posts regarding my article, and various things that have nothing whatsoever to do with this article at all (including personal digs), and pretty much demand that I discuss them. And now you complain when I actually do? Heck, you even whine when your posts are not approved quickly enough (WordPress tends to flag posts as spam when they contain too many links and/or are very long)!
      It’s not so much about “winning” as it is about debunking all the accusations you are throwing in my face.

      Yet you didnt bother to link to the original source before last year – You too linked to the English translation.

      As I say, the statement was crystal-clear, so I saw no need to quote the German phrase in an English blogpost (most people wouldn’t be German speakers anyway, and Google Translate isn’t very reliable, so it wouldn’t add anything of value. If people really wanted, they could have found the link in the English article anyway. It’s not like I deliberately tried to keep them from finding it. But you are a special case. You can’t seem to find anything yourself, and need to be spoonfed every step of the way). The actual German phrasing was never a point of debate, until the AMD fanboys had to go backtrack later, when it became obvious even to them that DX12 was here to stay. So they had to rewrite history to make it all fit.

      Yet you were the one to claim ”AMD says there will be no DX12”.

      That wasn’t a claim. It was a fact. The German article proves that, and many English sites also ran an article on it.
      See, you would have had a point if AMD came out right away saying: “No guys, you got that wrong, that’s not what we meant to say!”
      Sorta like how most sites originally described Mantle as a console-like API, which would also come to PS4 and/or Xbox One.
      But AMD did no such thing, in either case. They gladly kept the delusion up as long as they could.

      you seemingly are now convinced that AMD already knew about DX12 when work on DX12 was under way. I guess this is your way of admitting that I was right 🙂 And for that I thank you.

      Uhhm, you lost me here.
      Obviously, being in the DX12 Early Access program, I knew that:
      1) Microsoft was working on DX12
      2) AMD was also in the DX12 project.

      These things were never contended by me obviously.
      What I did contend was this:
      A) Most people believed AMD when they said there will be no DX12. Just read some of the comments on various sites. “Oh, MS is evil, they’re keeping gaming back etc!”
      B) Once it became obvious that MS had indeed been working on DX12, many people said “See, it’s because AMD came out with Mantle! That’s why MS now started on DX12”
      C) AMD was lying to the general public when they made that statement, because they knew about DX12 already. They only did this to make themselves look like the saviour of graphics APIs with Mantle. Basically stabbing Microsoft, their own partner with Xbox One and DX12, in the back. This is highly unethical, and I strongly disapprove of such disingenuous actions.
      D) AMD made many other statements about Mantle that they knew would never be a reality. Such as Mantle becoming an open standard, Mantle being adopted by other IHVs, Mantle coming to other OSes than Windows etc.
      E) AMD did not seem very active in the DX12 development, especially in the early stages. They were there, but had a very passive stance. They did not give too much input/feedback on specifications, and it took them much longer to come up with a driver release than nVidia and Intel.

      All these things you can verify by reading my older blogs and forum posts on the topic. If you feel the same about all of this, then I don’t see why you ever wanted to discuss things in the first place. I think it’s pretty obvious to everyone here that you’re still a Mantle apologist.

      • Redneckerz says:

        ”I was talking about the earlier press statement.”

        And I wasn’t.

        ”Who is dense here?”

        You are. Because you don’t even read the answer.

        ”Also, speaking of dense… Obviously the GeForce 8800 is featurelevel 10_0, being the first fully DX10-compliant card.”

        So at what feature level (if any) is its DirectCompute then?

        ”For featurelevels lower than 11_0, compute shader support is not required, but there is a caps bit for it, so the functionality can optionally be exposed.”

        Does it never cross your mind that this might be why AMD called it DirectCompute 11 also? (Aside from the fact that it was just said for completeness’ sake, which you seem to ignore.) Because for other, less modern cards it’s not required and merely an option?

        ”Oh really?”

        Yeah, really. The fact that you go on nitpicking over a mistyped word about feature levels (when I already said before that I was tired of looking up sources, and just one sentence later I said it correctly, so you knew I was talking about the right thing, but that didn’t stop you from pointing it out anyway) is childish. Because it really is just arguing for the sake of arguing at that point, since you cannot give any slack here.

        ”Correct me if I’m wrong, but this is my blog you’re posting on.”

        And as such you are open to criticism. Especially when it’s clear as night and day that you aren’t ”neutral” in the slightest.

        ”You come here, and make some pretty aggressive posts regarding my article, and various things that have nothing whatsoever to do with this article at all (including personal digs), and pretty much demand that I discuss them.”

        Of course. It’s your blog. I just go by what you claim, and refute it. If you don’t like that, then maybe you shouldn’t run a blog at all 🙂 Especially when it’s as biased as yours.

        ”And now you complain when I actually do?”

        I complain about your lack of discussing the points made. The fact that you dismiss whole arguments with a lot of sourced links as ”they don’t warrant a response” just tells me you don’t have any rebuttal, and instead of agreeing with me, you dismiss the whole argument, or you start nitpicking at words – which really is a completely childish thing to do when the thing you nitpick at was said correctly literally the line below, and the reason why the original word was said incorrectly is literally the line above (me being tired, etc). Like I said earlier – your attitude truly sucks, given that you resort to such childish antics.

        ”Heck, you even whine when your posts are not approved quickly enough (WordPress tends to flag posts as spam when they contain too many links and/or are very long)!”

        Really weak on your end to throw this in my face when I apologized for it beforehand, and it wasn’t ”whining”. See https://scalibq.wordpress.com/2016/05/17/nvidias-geforce-gtx-1080-and-the-enigma-that-is-directx-12/#comment-8298 – it’s literally there. Don’t make stuff up.

        ”It’s not so much about ”winning” as it is about debunking all the accusations you are throwing in my face.”

        Whilst seemingly ignoring half of them and nitpicking on semantics instead. It would be great if you were actually capable of making a concession for once, but clearly, that is out of the question.

        ”As I say, the statement was crystal-clear, so I saw no need to quote the German phrase in an English blogpost”

        Gee, just like how I saw no need to do the same either! But for some reason, when I did it, you call it out, but when you do it, it’s fine. Hypocrisy at its finest.

        ”If people really wanted, they could have found the link in the English article anyway.”

        Exactly. Hence why I only referenced it by saying ”It’s an English translation of the German source”. Glad this is cleared up.

        ”You can’t seem to find anything yourself, and need to be spoonfed every step of the way.”

        Referencing the German source (which should make it clear to anyone with a brain that one obviously knew its URL) but not posting it is ”you can’t find anything yourself and need to be spoonfed everything”? Okay, you really are entering delusional territory here now. Get a grip.

        ”The actual German phrasing was never a point of debate, until the AMD fanboys had to go backtrack later, when it became obvious even to them that DX12 was here to stay.”

        To whom are you talking here? Because this makes zero sense whatsoever.

        ”So they had to rewrite history to make it all fit.”

        Just… wow. ”AMD fanboys have to rewrite history to support their narrative”. You aren’t even trying to make sense anymore. Bordering on crazy here, Scali.

        ”That wasn’t a claim. It was a fact.”

        You are getting repetitive. I have explained this already. One more time, like I said before: ”You just took what they said at face value and didn’t even think for one second about the possibility that it would be pretty dumb of AMD to legitimately think that there would be no DX12, given their worldwide contacts. They simply made that statement because they already KNEW about its existence.”

        And I already told you before that it was a PR statement. You disagreed that it was PR, so I explained it for you:

        ”Making outlandish claims is a part of PR, yes. And people fall for it; that is the nature of the beast. Heck, look at the American presidential race – both parties continuously make outlandish claims about each other. People eat it up regardless of whether it’s the truth or not. That is literally how some forms of PR work. It goes on in companies too – they all do it. It’s not as if only AMD makes outlandish claims that blatantly obviously aren’t true; they all do. That’s why it’s called PR for a reason.”

        And then you asked if ”You can say anything you want, just to promote your products? Even when it isn’t even remotely accurate. Even when it’s patently dishonest?”

        On which i answered:

        ”Up to a certain level, yes. You obviously can’t claim of Nike shoes that they also possess the ability to time travel. What you can do is say that they give you wings and then cut to a reel of Michael Jordan flying through the air when scoring a point. PR speak still has to maintain the suggestion that things may be true, even when in reality they obviously aren’t. So there is a limit on how outlandish your claim can be.”

        Like you said, that was the ”core of our disagreement” and, in hindsight, the closest thing to you conceding to me.

        For full disclosure, I concluded it with:

        ”It seems so. But that is how PR works. So basically, all you have on your ”AMD says there is no DX12” is simply that you disagree that such a bold claim (which they obviously could only make when they already knew of the existence of a new DX) was a PR statement at the time.”

        That still is all you have on it. A disagreement. All of this has already been said in earlier comments. You just repeat the very same statement again when it was already refuted.

        ”See, you would have had a point if AMD came out right away saying: ”No guys, you got that wrong, that’s not what we meant to say!”

        As I’ve said earlier, you obviously don’t understand how PR works. They made that outlandish claim on purpose because they knew it would turn heads. (And it did, obviously.)

        ”Sorta like how most sites originally described Mantle as a console-like API, which would also come to PS4 and/or Xbox One.”

        And AMD later had to issue a rebuttal because people like you obviously take everything literally: http://www.anandtech.com/show/7421/amd-expands-on-microsoft-blog-post (from October 2013)

        ”Mantle is NOT in consoles. What Mantle creates for the PC is a development environment that’s *similar* to the consoles”

        Note the emphasis on ”similar”.

        ”They gladly kept the delusion up as long as they could.”

        A completely unverified, biased nonsense claim. You aren’t even trying to maintain the supposed ”neutrality” of your image and character, are you?

        ”Uhhm, you lost me here.”

        I guess there is a first for everything 🙂

        ”Obviously, being in the DX12 Early Access program, I knew that: 1) Microsoft was working on DX12 2) AMD was also in the DX12 project.”

        Good for you. So why be so salty towards AMD then? You just proved they obviously knew of DX12 when they made their statements 😉 (Said in jest)

        ”A) Most people believed AMD when they said there will be no DX12.”

        ”Most people, you included”, that is. I mean, think about it. Is it really logical for you to believe a big company like AMD – one with worldwide contacts, which you have JUST said was involved in the DX12 project – when they say ”there will be NO DX12”? Do you really think that AMD legitimately thought there would be no DX12, even when you JUST SAID they were involved in the DX12 project?

        The answer should be: ”No, that isn’t logical. It’s an outlandish claim they are making here. Now why would they make an outlandish claim like that? To garner attention?”

        Ding ding ding. Because that’s what it was. A ridiculous claim to garner attention, thus a PR statement.

        ”Oh, MS is evil, they’re keeping gaming back etc!”

        People are stupid, news at 11.

        ”B) Once it became obvious that MS had indeed been working on DX12, many people said ”See, it’s because AMD came out with Mantle! That’s why MS now started on DX12”

        *Sigh* When will you start to understand that Mantle was a showcase API from the beginning? Again: ”What we do know is that Mantle was announced first and demonstrated first, since it’s just a sub-set of a complete API, as a showcase for low-level APIs.”

        Since a sub-set/technology demonstrator is far quicker to assemble than the ”real” thing (DX12), AMD went along and made Mantle to showcase to both the public and Microsoft the benefits of a low-level API.

        ”C) AMD was lying to the general public when they made that statement,”

        *Double sigh* That is how PR works. It doesn’t have to be right; it has to draw attention from people, as long as the outlandish claim still makes some sense in the given context. Every big company does it (yes, even AMD’s competitors, how shocking). See the Nike shoes example I’ve given earlier already.

        ”They only did this to make themselves look like the saviour of graphics APIs with Mantle.”

        *Triple SIGH* Yeah, that’s why they did it. They wanted to look like saviours. Get a grip, man. You aren’t even trying to make any sense, are you? Again a completely unverified nonsense assumption on your end. I am starting to think that no matter how wrong you are, it’s still AMD’s fault. Even when it’s YOU who made the assumption that AMD legitimately thought there would be no DX12.

        ”Basically stabbing Microsoft, their own partner with Xbox One and DX12, in the back.”

        Basically it was (just going to copy-paste here, since I’ve already said this countless times before, but you seem completely bent on believing your own flawed assumption): ”a way to publicly say to Microsoft: ”Hey, we want low-level APIs, look at our limited solution.””

        ”This is highly unethical, and I strongly disapprove of such disingenuous actions.”

        You obviously don’t understand PR. That is fine and dandy, but don’t pretend that you understand AMD’s statement and that you know how to read it. Because you don’t, and because of that, you jump to ill-informed conclusions.

        ”D) AMD made many other statements about Mantle that they knew would never be a reality. Such as Mantle becoming an open standard,”

        And another copy-paste:

        ”Which in a cryptic sense has proven to be correct: it lives on in Vulkan, which comes from the Khronos Group, which also maintains the other major API at play here, OpenGL – https://en.wikipedia.org/wiki/Vulkan_%28API%29”. (”It” being Mantle, of course)

        And what is Vulkan? From the press release: https://www.khronos.org/news/press/khronos-releases-vulkan-1-0-specification

        ”The Khronos Group, an open consortium of leading hardware and software companies, announces the immediate availability of the Vulkan 1.0 royalty-free, OPEN STANDARD API specification.”

        So yeah, that became a reality as well.

        ”Mantle being adopted by other IHVs,”

        Through Vulkan.

        ”Mantle coming to other OSes than Windows etc.”

        It was announced for Linux, so yeah, it was coming to OSes other than Windows. So that claim has been refuted already. However, after DX12 came on the scene and AMD gave their code to Khronos, it made no sense for them to port the API to Linux, especially when Vulkan (which has Mantle as its very foundation) has implementations on Android, SteamOS, Linux, and Windows. https://en.wikipedia.org/wiki/Vulkan_%28API%29

        So yeah, in a sense, that statement still checks out too.

        So all your mentions have been proven wrong 😉

        ”E) AMD did not seem very active in the DX12 development, especially in the early stages. They were there, but had a very passive stance.”

        Perhaps they were busy with Mantle 😉 Anyhow, since you don’t provide details, like a screenshot, that prove this statement, it is unverified.

        ”They did not give too much input/feedback on specifications, and it took them much longer to come up with a driver release than nVidia and Intel.”

        Again, that’s all a claim on your end. Even so, it isn’t really relevant to the bigger topic at hand.

        ”All these things you can verify by reading my older blogs and forum posts on the topic.”

        But your blog isn’t a source, you know? As I have demonstrated various times now, it’s very much a biased view on things.

        ”I think it’s pretty obvious to everyone here that you’re still a Mantle apologist.”

        And ending the post with a dig. As it stands, I have sourced my points and explained them. All your statements about AMD a few sentences above have been sourced and refuted, and I have explained the rest as well. On some of them you really have no argument left to stand on; on the others you can decide to argue their semantics. But that would be extreme nitpicking on your end, plus, in the case of the PR statements, you don’t understand the logic behind them anyway 🙂

        If proving you wrong makes me a ”Mantle apologist”, then so be it. To me, it just seems to imply that you really are a sore loser, Scali.

      • Scali says:

        The reason I don’t respond to everything, is that you post a lot of garbage. You drop some links and some crazy thoughts, and you just go on and on. It’s a big quantity, but there is no quality whatsoever.
        If you think I take you seriously, and actually think most of your ‘arguments’ are even worth discussing, let alone that they would intellectually challenge me, you’re sorely mistaken.
        I have no intention of continuing this. You’re basically an idiot. You don’t understand my responses anyway, don’t even seem to know what you don’t understand, and don’t do any research yourself to fix the gaps in your knowledge (for example, you can google for yourself to see how DirectCompute works, and whether or not it has anything to do with featurelevels etc. You can even find out what support the GeForce 8800 or any other GPU has for DirectCompute! I’ll even drop you a hint: ‘DXCapsViewer’). So every time I even bother to answer things (which is mostly correcting your poorly formulated and thought-out arguments), the response is even more broken arguments. It just never ends (it’s amusing at times though, e.g. you listing a bunch of games running at 900p on Xbox One to ‘counter’ the argument that consoles aren’t really up to gaming at 1080p – aside from the sad fact that you’re missing the deeper point that even games running at 1080p on consoles tend to pull various tricks/cheats and/or run at rather low framerates to get that 1080p box ticked).
        I am not going to waste time on answering this drivel any longer.

        Also, are you really *that* stupid that you don’t understand that my point about AMD’s ‘no DX12’ is that AMD and MS were *already* working on DX12 together (yes, complete with low-level API, don’t even bother going there, proof for everything was already presented) at the time they made that statement?
        Because your post sounds a lot like that, meaning that your arguments (like all your other arguments) don’t even argue against what I have said in the first place. Which is strange, because I have been VERY clear about what I meant, in various blogs and forum posts.
        This is a rhetorical question by the way, so don’t bother answering.

        You just keep going round in circles, reiterating things that have been discussed and debunked before, often even contradicting yourself (going back to an earlier ‘rebuttal’, even though you had already conceded to my answer before, going into a different line of ‘debate’).
        For example, I’m pretty sure I’ve already explained earlier that my blog is a perfectly fine source for my own statements (you even try to use it as such in your posts above). But here you are again. Why that wasn’t obvious to you in the first place puzzled me, but not half as much as why you are still trying to argue it even now.
        Did I mention you’re an idiot?

  15. Redneckerz says:

    Well, it seems like all you have left on me is the whole DirectCompute issue – the one issue, I may add, that I hadn’t fully researched… yet. Really typical that you narrow it down to that point and ignore everything else (because you don’t have any arguments for it, that much is clear).

    And of course, I am an idiot after I refuted all your AMD statements in the last part of your post. You don’t even respond to that, and it’s crystal clear now that it’s because you are too proud to admit you were wrong in those claims. Simply because you can’t argue against the sources. Like I said: you really are a sore loser in that regard. At least have the decency to acknowledge them and admit that you were wrong on these things.

    But no. Because when are you EVER going to make concessions, huh Scali?

    ”The reason I don’t respond to everything, is that you post a lot of garbage.”

    If that is how you are going to downplay it after being refuted so clearly, then that says a lot about you. Showing definitive evidence and thorough explanations = ”you post a lot of garbage”. Sure, Scali. Sure. Your pride (misguided as it is) really stands in your way.

    ”You drop some links and some crazy thoughts, and you just go on and on.”

    Please, continue downplaying it. It just makes you even more of a sore loser. Those links were correct and provided proof of how wrong you were, and obviously you can’t handle that.

    ”It’s a big quantity, but there is no quality whatsoever.”

    And even more downplaying. You really take it hard when you have no arguments left. As if the length of my posts were a measure of their quality. You make long-ass posts as well; you don’t hear me childishly nitpicking about their length. But of course the self-proclaimed ”neutral” Scali has to.

    ”If you think I take you seriously,”

    Which you do, let’s make no mistake about that.

    ”and actually think most of your ‘arguments’ are even worth discussing, let alone that they would intellectually challenge me, you’re sorely mistaken.”

    Keep telling yourself that. If you want to convince yourself that you still make any sense, go ahead. In the meantime I’ll stick to what’s going on in reality, which is that you ran out of arguments and are now throwing a tantrum about how ”you are mistaken if you think you challenge me”. Don’t be such a child and just take your losses, Scali.

    ”I have no intention of continuing this.”

    Of course not. You finally met your match, and that guy beat you at your own game. (Yeah, don’t expect me to hold back on these things. Knowing you, you wouldn’t either.) And instead of cutting your losses, admitting you were wrong, and gaining some insight, you, very immaturely, threw in the towel with the words ”I have no intention of continuing this”. If you can’t deal with debates, or don’t know what certain things mean and are too proud to admit when you are wrong, then don’t run a blog or have discussions with people online. Because as you can tell now, one of these days someone will come around and beat you at your very own game. Be grateful I don’t have any ambitions to run a blog myself, so you can of course continue spreading your biased nonsense. As long as you can discredit AMD, you will.

    ”You’re basically an idiot.”

    A typical response from you when you've run out of arguments. The only idiot here is the one who doesn't understand PR statements (and PR in general) and throws in some other claims where AMD MUST have been wrong, only to be caught by surprise when everything was refuted, complete with sources and all, so you couldn't spin your bullshit in another direction. (But you tried to, of course.)

    Don't hate the player, hate the game, Scali 😉

    ”You don’t understand my responses anyway,”

    It's quite sad that you continue to delude yourself on your own blog. If I didn't understand your answers, I wouldn't be able to refute them, you know?

    ”and don’t even seem to know what you don’t understand,”

    Because it's always someone else who is wrong, huh 😉 Maybe buy a mirror at your local store so you can at least do some basic self-reflection on your argumentation skills.

    ”and don’t do any research yourself to fix the gaps in your knowledge”

    This doesn't even make any sense. If I wasn't researching anything, then why did I come up with various sources in this message and in yesterday's message about consoles? Oh right, I forgot, you are just grasping at straws here.

    ”(for example, you can google for yourself to see how DirectCompute works, and whether or not it has anything to do with featurelevels etc.”

    I definitely will another day. Since that is the only part I haven't refuted (yet), it's safe to say you are again just grasping at straws here.

    ”You can even find out what support the GeForce 8800 or any other GPU has for DirectCompute! I’ll even drop you a hint: ‘DXCapsViewer’).”

    Thanks for the hint. I'll make sure I do some proper research on that (seeing as it's the only thing I haven't done extensive research on), and then in the future I will refute you on that point as well 😉

    ”So everytime I even bother to answer things (which is mostly correcting your poorly formulated and thought-out arguments),”

    You still can't admit that you just got played at your own game on your own blog, and you're such a poor sport that you lash out a few more times like the totally ”neutral” person that you are. Stop pretending, Scali. Nobody buys it anymore.

    ”the response is even more broken arguments.”

    Which, surprisingly enough, you are completely unable to refute! Maybe they aren't so ”broken” after all and you are just a poor sport unable to take his losses. I am genuinely curious why that is so hard for you, but I am not going to speculate as to why that might be 🙂

    ”(it’s amusing at times though, eg you listing a bunch of games running at 900p on Xbox One to ‘counter’ the argument that consoles aren’t really up to gaming at 1080p.”

    Interesting how you focus on the XBO instead of the PS4, which has a lot more 1080p titles, even though the XBO has quite a few of them as well. And it's REALLY interesting that you go at it again when I already prefaced this earlier by saying:

    ”Only the Xbox One has issues with that. Developers have found that the usual ”sweet” spot for that is 900p.”

    But go ahead, point out again how the XBO has a lot of 900p games... continue to ignore that preface like it doesn't exist. It doesn't make you any more ”right” anyway 🙂

    ”Aside from the sad fact that you’re missing the deeper point that even games running at 1080p on consoles tend to pull various tricks/cheats”

    Oh, you are going to introduce some more variables? How typical 🙂 So what ”various tricks/cheats” are they really using then, Scali? I already said that consoles ”usually” run these titles at a PC equivalent of ”High” with the same LODs and the like. There are no ”tricks” involved. The only really obvious difference is the limited AF on consoles, usually around 2x/4x. But there are no dramatic changes in lighting, or downscaled feature sets in these games. But you don't bother about consoles anyway and you clearly haven't researched these kinds of things either, so you are just making a blanket statement here. Bullshit, as per usual 🙂

    ”and/or run at rather low framerates to get that 1080p box ticked).”

    The fact that you write ”and/or” means you aren't even certain of your own claim. Anything to introduce new variables and not admit that you are wrong, I guess. And as for framerates: most titles quite clearly run at around 30 fps. And like I've said before, according to the Steam Survey (posted before, so I won't bother linking it again), most people game at about 1080p, with number two being 1366×768, which is vastly lower than the common 1080p of the PS4 and even lower than the XBO's 900p. As usual you are just talking smoke and mirrors and you don't know what you are talking about.

    ”I am not going to waste time on answering this drivel any longer.”

    Of course not. You just can't admit being wrong, and it's clear as day 🙂

    ”Also, are you really *that* stupid that you don’t understand that my point about AMD’s ‘no DX12’ is that AMD and MS were *already* working on DX12 together (yes, complete with low-level API, don’t even bother going there, proof for everything was already presented) at the time they made that statement?”

    No, because I understood that already. I was talking about a different point. See, you don't even know which tree to bark up; that's how confused you are about all this 🙂

    ”Because your post sounds a lot like that,”

    It only sounds like that because you selectively read what you want to read and have shown yourself completely incapable of reading the posts in the context in which they were written. And instead of learning anything from it, you blame it on other factors. Because that is what you ALWAYS do, Scali.

    ”meaning that your arguments (like all your other arguments)”

    Which you were completely unable to refute. But if you want to delude yourself to sleep thinking that, after being so clearly refuted by me, you were STILL right all along, then that isn't my problem, Scali. Then the problem obviously lies a lot deeper within you and you should do something about it.

    ”Which is strange, because I have been VERY clear about what I meant, in various blogs and forum posts.”

    Except you clearly haven't been. If anything, you have been exposed as the anti-AMD clown that you are and have highlighted your bias to such a degree that nobody should be surprised anymore.

    ”don’t even argue against what I have said in the first place.”

    Sure they don't. You made some claims, I refuted them, and now you cry wolf and say ”you don't understand”. Nobody is falling for it, Scali. You lost, and now you are just carrying out pointless kamikaze actions. And if you know your history: Japan didn't win the war by throwing everything they had at the enemy. They lost because the Allies dropped a nuke on their asses.

    Sadly for you, in this case, that ”nuke” was me. 😉

    EDIT: I figured that was the end of your ”rant”, but a few days later, while reading your Keen talk, I noticed you had sneaked a new paragraph into your older post. Are you that ”angry” or ”hurt” over being called wrong, Scali? It just makes it look like you have a far deeper problem going on than what we were debating about.

    But yeah, I will comment on that too, since why the heck not:

    ”You just keep going round in circles,”

    That seems to apply more to yourself, since you still don't get the whole ”AMD said there would be no DX12” statement in its correct context. And since you see it as a ”disagreement”, I figure you will just remain stubborn about it. Which, in the end, is part of your character, and thus your problem to deal with. You make the mistake of indirectly projecting your stubbornness onto other people.

    ”reiterating things that have been discussed and debunked before,”

    To be honest, it's getting kind of silly that you still think you debunked a LOT of my claims. You didn't. Yeah, you explained a few things, but ”debunking”? Sure, keep on dreaming. On the other end of the stick, your claims have been thoroughly refuted, and all you have left is grasping at straws and nitpicking.

    ”often even contradicting yourself (going back to an earlier ‘rebuttal’, even though you had already conceded to my answer before, going into a different line of ‘debate’).”

    If my memory serves me right, I simply admitted to those things and/or apologized for them. That's like, what, two or three little things? And even so, it only shows I am perfectly capable of admitting when I'm wrong or don't know something yet, and also perfectly capable of apologizing when due (like when I thought you held back responses on purpose). You, on the other hand, have not even tried to admit to being wrong, let alone apologized for your statements (like calling me an idiot, for example; that is definitely something a ”professional” would do, right?).

    ”For example, I’m pretty sure I’ve already explained earlier that my blog is a perfectly fine source for my own statements (you even try to use it as such, in your posts above).”

    You do know that I was linking those as examples of you talking out of your ass, right? It doesn't exactly take a genius to see the clear bias at hand here.

    ”But here you are again.”

    Since I got a nice ”Sorry, this comment couldn't be posted” message after I tried to answer, one is not far from thinking you blocked my address from posting.

    ”Did I mention you’re an idiot?”

    You did. And it simply makes you not the ”expert” you so desperately want to be.

    The sole remaining question would be: why? What has AMD ever done to you personally that was so traumatic that you have dedicated YEARS of your life to trying to discredit them? You got a GeForce FX when you wanted a Radeon 9700 Pro, or what? (And before you play smart: yes, I know you had an ATI card at the time. I was joking, in case you didn't catch that.)

    No, honestly, what is the underlying reason why you have dedicated years of your life to discrediting them? Because no person can be so spiteful towards a freakin' graphics card company if there isn't some personal story involved.

    ”Yes, AMD’s marketing is quite different from the others. That’s what I tried to explain to Redneckerz”

    Three days later and you are clearly still hurt enough to make a reference to me. Nvidia's marketing is exactly the same, Scali, but of course, you being the ”neutral” guy that you are aren't going to highlight that, huh 🙂

    PS: As for that slide: you do know that AMD will host an update on Polaris later tonight, right?

    @ Alexandar Z:
    ”I’ll admit I’ve given up on reading your argument with him.”

    Admittedly it's quite difficult to read, indeed. But why are you only calling me out on the walls of text when your man Scali does it as well?

    ”His walls of text interspersed with random quotes were really hard to digest and after reading the first two I haven’t really seen anything new or well constructed, just the usual blind accusations.”

    ”Blind accusations” backed up with links to evidence, but hey, if you want to believe the dude who clearly had a meltdown over this, be my guest 🙂 I am not the one running a blog because I somehow have to show the world what a big meanie a company like AMD is 🙂

    • Scali says:

      Stop fooling yourself. You didn’t refute anything. You’re using the Chewbacca defense, if anything.
      I didn’t run out of arguments, I just don’t feel like answering your crap any longer, it’s a total waste of time on you, since I see no progression whatsoever in your level of understanding or behaviour.

      And as you rightly point out, you’re the one being called out by other people reading the discussion (or trying to), not me.
      If other people start to argue things I’ve said, then I will respond to that.

      Oh, as for ‘doing your research’… Take KILLZONE on PS4, read the part about ‘temporal reprojection’: https://www.killzone.com/de_CH/blog/news/2014-03-06_regarding-killzone-shadow-fall-and-1080p.html

      There’s one of those ‘tricks/cheats’ to get it up to 1080p. There was a big storm on the ‘nets about this (hence that blog explaining it, google around to dig up more dirt). Not sure how you could have missed that.
      But, I guess I’m still wrong and you’re still right… At least, in your mind only.

      • Redneckerz says:

        ”Stop fooling yourself.”

    Right back at ya, smartypants 🙂

        ”You didn’t refute anything.”

    Maybe you should read back over your claims and see the links that refute them. That is just a suggestion though; I know for a fact that you are in denial and don't want to read up on being wrong (or apologize, for that matter).

        ”You’re using the Chewbacca defense, if anything.”

    You really never run out of bullshit excuses to justify your ends, do you? Now it's a Chewbacca defense; what will it be tomorrow?

        ”I didn’t run out of arguments,”

    Except you did, since you didn't have any counter-arguments when I refuted your claim. Saying ”I am done wasting my time” after your claims were refuted (and knowing you, you would NEVER stop nitpicking otherwise) is just a sign of weakness.

        ”I just don’t feel like answering your crap any longer,”

    Because you have nothing left to say against it and just keep posting more ad nauseam repetitions to keep your disciples on your side. It's manipulative. And it's basically the only skill you have, because without it, nobody would even begin to consider your anti-AMD gospel as even remotely true.

        ”it’s a total waste of time on you, since I see no progression whatsoever in your level of understanding or behaviour.”

    Cute that you are getting concerned about me. You should rather be concerned about yourself.

        ”And as you rightly point out, you’re the one being called out by other people reading the discussion (or trying to), not me.”

    Of course, this is your own blog after all. I don't have the faintest expectation that people would not support you on your very own blog.

        ”If other people start to argue things I’ve said, then I will respond to that.”

    Usually condescendingly and with ad hominems applied. And of course, always discrediting AMD whenever you can. It doesn't take a genius to see it, Scali 🙂

        ”Oh, as for ‘doing your research’… Take KILLZONE on PS4, read the part about ‘temporal reprojection’: https://www.killzone.com/de_CH/blog/news/2014-03-06_regarding-killzone-shadow-fall-and-1080p.html

    I know about temporal reprojection as a technique; Quantum Break has used it to great success, as does Rainbow Six: Siege. The end result is a 1080p buffer that looks a lot ”softer” than a ”native” 1080p buffer would. Dynamic screen buffers, like Doom utilizes, are a more aggressive approach to maintaining 1080p, dropping resolution when needed. The effect, however, is subtle enough that the average player doesn't notice.
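        As a rough illustration of the idea (this is my own minimal sketch, not how any of these engines actually work: the function name is made up, and it assumes a static scene, whereas real implementations reproject the old pixels along per-pixel motion vectors):

        ```python
        import numpy as np

        def temporal_reproject(curr_cols, prev_frame, frame_index):
            """Rebuild a 'full' frame from a half-width render.

            Toy model: each frame renders only every other pixel column
            (even columns on even frames, odd columns on odd frames);
            the missing columns are filled in from the previous
            reconstructed frame.
            """
            out = prev_frame.copy()                      # start from last frame's result
            width = prev_frame.shape[1]
            cols = np.arange(frame_index % 2, width, 2)  # columns rendered this frame
            out[:, cols] = curr_cols                     # splice in the fresh pixels
            return out
        ```

        So each frame only shades half the pixels, which is exactly why the reconstructed image looks softer than a natively rendered 1080p buffer.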

        ”There’s one of those ‘tricks/cheats’ to get it up to 1080p.”

        You apparently are forgetting the 22(!) links to Digital Foundry proving that the majority of console titles don't even rely on ”tricks”: PS4 titles are usually 1080p (thus already refuting your statement, but nice try) while the XBO is 900p. And that is just a fixed buffer; no trickery there. You also apparently forgot that the most popular resolution on Steam is that same 1080p, except you can use Ultra settings then, and that the second most popular resolution is 1366×768, which is even lower than what the XBO usually outputs. And let's not forget that this generation usually relies on PBR, which greatly increases visual fidelity. So these ”underpowered” machines are actually quite a step up. But of course, all that is irrelevant if you don't bother to read my answers properly (which are here: https://scalibq.wordpress.com/2016/05/17/nvidias-geforce-gtx-1080-and-the-enigma-that-is-directx-12/#comment-8298)

        ”There was a big storm on the ‘nets about this (hence that blog explaining it, google around to dig up more dirt).”

        There are only a few titles that use this technique, and there are also only a few titles using a dynamic screen buffer (Google around, maybe you'll learn something). The majority don't use ”tricks”, which was my point. But of course, you couldn't take that conclusion home, clearly.

        ”Not sure how you could have missed that.”

        Because I figured I didn't have to let you know, because doing so would be a case of severe nitpicking. Then again, you have already demonstrated you will go far, very far, when it comes to said nitpicking. So I am not surprised that you did.

        ”But, I guess I’m still wrong”

        You are. And clearly that eats at you 🙂

        ”and you’re still right…”

        On what I have refuted, yes. And for highlighting the anti-AMD clown that you really are 😉

        ”At least, in your mind only.”

        It beats being out of your anti-AMD mind every day of every week 😉

      • Scali says:

        And another wall of text…

        I’ll just point out one thing:

        Of course, this is your own blog after all. I don't have the faintest expectation that people would not support you on your very own blog.

        Erm, *you* are posting here, aren’t you?
        There are plenty of other blogposts where quite a few comments aren’t supportive, to say the least.
        Yet, in this case, literally everyone who posted here has called you out. Nobody has supported your side of the story.

      • nomis says:

        “Of course, this is your own blog after all. I don't have the faintest expectation that people would not support you on your very own blog.”

        People support Scali instead of you because his responses are better articulated, better written and have much, much more substance than what you write. You are just going to ignore that, however, and blame it on some form of loyalty.

        And regarding your console issue again: no one says that you can't make games with great fidelity on the current consoles. They are a lot more powerful than their predecessors. But considering the state of the art of the technology currently available, or even the technology available at their release, they can be considered underpowered.

    • dealwithit says:

      Every comment/reply you make adds less to the discussion.

    • nomis says:

      Scali does not call you an idiot because he is mean or unprofessional (whatever that means on a personal blog), but because that is pretty much the impression one gets from reading all your posts.

      Just look at the funny fact that you think any of your drivel is a refutation of the arguments brought up. The reason no one answers your points anymore is not that what you write is hard to refute, but that you write the same rubbish over and over again. Arguing with you just goes endlessly in circles and is a pointless waste of time (though quite entertaining at times).
      Like how you still try to argue that the current consoles are not underpowered. Everyone with a little bit of understanding of hardware already knows they are. When they were released they were the equivalent of a mid-range PC. Considering what the consoles cost, it should also not be surprising at all. And 30 FPS is not a lot.

      And the Steam survey obviously only shows what kind of monitors are most commonly used, as it is preferable to play games at the monitor's native resolution. It says absolutely nothing about how much graphical fidelity is to be expected at this resolution, how many frames per second the games run at, or how powerful the hardware is. Are you too stupid to grasp that? And how this, just like most of your answers, is not really a quality response that is worth an answer?

      You just throw links around that don’t really prove anything of what you say in the first place and think you won.

      You also have this obsession to picture AMD in a great light. Scali already gave you his opinion about what he thinks about AMD’s statements. You just go on about how thats PR and how this is somehow an excuse. Are you trying to convince scali that his opinion is wrong and yours is right? Or how AMD can’t be bad if Nvidia or Intel isn’t equally bad?

      It’s quite pointless to argue with you because you are actually an idiot.

  16. dealwithit says:

    Redneckerz, every reply you make adds less to this discussion.

  17. Rebrandeon says:

    Kyle’s sources were dead right about how inefficient Polaris is. 150W TDP but much slower than GTX 1070 150W TDP.

    • Scali says:

      That’s funny, because everywhere on the web I was reading that nVidia just did a simple die-shrink, and AMD would come up with this huge architectural update, that would be way more efficient, and nVidia would be out of the market in 3 years tops
      (see: http://seekingalpha.com/instablog/45056646-clarence-spurr/4884330-pascal-new-king).

      Now I don’t want to say AMD fanboys are delusional, but… wait, no, that’s exactly what I want to say.

      Also, still no word on DX12_1/Conservative Rasterization/Raster Ordered Views.

      • Redneckerz says:

        EDIT: I was going to write another response, but new information has reached me that you actually went to other places to throw dirt there. I think it's incredibly telling that someone resorts to such actions when they obviously can't get their way on their own forum. It's also not the first time I've seen you ”suddenly” post these kinds of things in other places, under a ”different” name, in order to pretend it isn't you. Sadly for you, we can trace these things back.

        At best, it's just a childish act. At worst, it goes to show how far you are willing to go, supposedly out of spite or hurt feelings. And I think that kind of action is rather unhealthy.

        ”And regarding your console issue again: no one says that you can't make games with great fidelity on the current consoles. They are a lot more powerful than their predecessors. But considering the state of the art of the technology currently available, or even the technology available at their release, they can be considered underpowered.”

        All that really is a bottleneck is the CPU. The GPU at that time was simply current technology. And let's not forget the customizations made to these things, which are really what makes the difference. Do I think Sony/Microsoft could have gone for better CPUs? Absolutely. But my best guess is that they didn't because the BoM would go up, too much for them to justify. Especially when their last-gen consoles were 500/600-buck machines that were ahead of PCs at the time (especially the X360).

        @ Scali:

        ”and nVidia would be out of the market in 3 years tops”

        Quite a bold claim to make, whoever pondered that one.

        As for the Polaris news: from day one I never thought that Polaris would be a high-end chip. I thought Vega would be. AMD's train of thought is ”value first, performance later”, which is the exact counterpart to how Nvidia does things: ”performance first, value later”.

        With that said, the Radeon RX 480 presents a great ”value-for-money” card, one I think perfectly complements Nvidia's GTX 1070/1080. I was pretty impressed by what they set out to do in terms of performance (GTX 980 levels and beyond for a lower price?) and it looks promising. However, the GTX 1070 is still a hefty card when it comes to price. And that is where AMD wants to head first: by delivering a good value card, customers will have more variety to choose from.

        For the high-end market, Nvidia will definitely take the lead. However, I do think that AMD will get a lot of sales in the segment below that with these Polaris cards. As for a real competitor to high-end Pascal, I guess it's up to Vega to prove itself later this year.

        ”Also, still no word on DX12_1/Conservative Rasterization/Raster Ordered Views.”

        It might be that Vega gets this. But then again, DX12 is still very much in its infancy, since only a handful of titles have any support for it. That will rapidly change, though.

      • Scali says:

        EDIT: I was going to write another response, but new information has reached me that you actually went to other places to throw dirt there. I think it's incredibly telling that someone resorts to such actions when they obviously can't get their way on their own forum.

        So let me get this straight… Someone posted a link to this discussion on some forum. And you are complaining, why? Didn’t you already proclaim yourself winner of this discussion?

        As we say in the scene: Make a demo about it!

      • Alexandar Ž says:

        Well, if they actually do sell it for $200 like they said, it's decent value, if it performs at the level of a 390X/980 or a bit above. I'm also not terribly wowed by all the VR talk; it's still a nascent area that could go either way.
        But it's not exactly pushing any boundaries. Efficiency is still playing catch-up with the competition, but at least they've closed the gulf quite a bit.

      • Scali says:

        I think it’s mostly a strategic move… Where nVidia concentrates on 1070/1080 for now, AMD is going for the segment below.
        The real competition will become clear once nVidia launches their 1050/1060 line in a few months' time (in which case I hope Polaris isn't TOO much of a paper launch, and is actually available for a while before that. Else AMD may be in a world of hurt).

        But yes, signs point to worse efficiency than Pascal, and not as feature-complete.

  18. GP106 says:

    Nvidia's GP106 is coming soon; GP106 will embarrass the RX 480 in power efficiency & performance.

    $199 is only for the 4GB card; good luck with that & high-res gaming in multi-GPU.

    Many people have already tried multi-GPU and know what a shitty experience it is.

    • Scali says:

      Yea, so it is supposed to be between 970/980 performance.
      I checked the TDP on those, and I was unpleasantly surprised. They have a TDP of 145W and 165W respectively.
      So at 150W TDP, the 480 is pretty much at the same performance/watt level as these older 28 nm cards, which is quite bad.
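      Back-of-the-envelope, using the TDPs above and a hypothetical relative-performance scale (GTX 970 = 1.00, GTX 980 = 1.15, with the 480 assumed halfway between; these performance numbers are illustrative placeholders, not measured benchmarks):

      ```python
      # Relative performance (assumed) and TDP in watts (as quoted above).
      cards = {
          "GTX 970": (1.000, 145.0),
          "GTX 980": (1.150, 165.0),
          "RX 480":  (1.075, 150.0),
      }

      # Performance per watt, normalized to the GTX 970.
      baseline = cards["GTX 970"][0] / cards["GTX 970"][1]
      perf_per_watt = {name: (perf / tdp) / baseline
                       for name, (perf, tdp) in cards.items()}

      for name, ppw in sorted(perf_per_watt.items(), key=lambda kv: -kv[1]):
          print(f"{name}: {ppw:.2f}x the 970's perf/watt")
      ```

      Under these assumptions all three land within a few percent of each other, which is the point: the 14 nm 480 would sit at roughly the same perf/watt as the 28 nm Maxwell cards.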

      But also exactly as I predicted: AMD is a step behind, they didn’t have the extra iteration that nVidia made with Maxwell.

      • Alexandar Ž says:

        Yea, so it is supposed to be between 970/980 performance.

        Isn't it supposed to be a bit higher – between the 390X and the Fury? At least that's what all the leaks so far suggest.
        Performance between the 970 and 980 would be rather underwhelming: basically a ~$120 discount on 2014 upper-mid-range performance with a slightly better TDP. So about what would be expected of a new generation, but not exactly amazing.

      • Scali says:

        From what I read, it’s below 390X, which would put it somewhere between 970 and 980.
        Eg on Anandtech: http://www.anandtech.com/show/10389/amd-teases-radeon-rx-480-launching-june-29th-for-199

        In terms of raw numbers this puts the RX 480 just shy of the current Radeon R9 390. However it also doesn’t take into account the fact that one of the major focuses for Polaris will be in improving architectural efficiency. I would certainly expect that even at the lower end of clockspeed estimates, RX 480 could pull ahead of the R9 390, in which case we’re looking at a part that would deliver performance between the R9 390 and R9 390X, with final clockspeeds and architectural efficiency settling just how close to R9 390X the new card gets.

      • Alexandar Ž says:

        Thanks for the link, I didn’t know Anandtech had an article on it already.
        I'm curious about benchmarks when it finally releases, but Polaris definitely isn't a viable upgrade path for me; 10 to 20 percent extra performance isn't worth buying a new card.
        It will be interesting to see how the 1060 stacks up against it.

      • Scali says:

        Same here, I’ve already bought a GTX970OC a year ago.
        At this point I’m not even sure if Polaris would be faster at all.

      • Alexandar Ž says:

        Same here, I’ve already bought a GTX970OC a year ago.

        Had mine since November 2014 and it’s been a champ.
        For all the later Internet drama surrounding it I never had any problems either with the card itself or the software side of things and it ran (and still runs) pretty much everything I threw at it admirably.

        In other news – AMD stock down 10% since Computex.

      • Scali says:

        For all the later Internet drama surrounding it I never had any problems either with the card itself or the software side of things and it ran (and still runs) pretty much everything I threw at it admirably.

        Yes, I bought mine after the 3.5/0.5 GB was already a ‘thing’. It didn’t bother me. Things were blown way out of proportion by AMD fanboys. I just checked benchmarks and didn’t see any big issues in frametimes etc, and reviewers didn’t report anything special either. And indeed, the card works just fine.

  19. Redneckerz says:

    ”So let me get this straight… Someone posted a link to this discussion on some forum.”

    It was verified it was from you.

    ”And you are complaining, why?”

    Not really complaining. Just noting how it went to your head and you felt the need to harass other places over it. Like I said: that is just unhealthy.

    ”Didn’t you already proclaim yourself winner of this discussion?”

    I did, and I certainly do now by default. I have no interest in adding more detail to your childish act. I made a regular comment about the new cards; let's leave it at that, shall we.

    @ Alexandar Z:

    I am actually pretty concerned about both Nvidia's and AMD's interests in VR, just as Sony and Microsoft are. I feel stuff like the Oculus Rift is brilliant, but I am against the idea of having VR as a separate platform, on PC and consoles alike. Whatever is there in Polaris, is efficient, that is for sure.

    • Scali says:

      Whatever is there in Polaris, is efficient, that is for sure.

      It is? How can you tell?

      • Redneckerz says:

        ”It is? How can you tell?”

        Price-efficient, apologies.

        @ Rebrandeon:

        ”The only childish person here is you, spamming with inane comments.”

        That is a great username you have there. (No joke. That is creative)

        ”Async compute is not DX12 but you’re too stupid to understand something simple as that.”

        I feel this isn't addressed to me, since I haven't said anything like that, but to get it out of the way: yeah, those are two separate things.

      • Scali says:

        Price-efficient, apologies.

        I’d call that bang-for-the-buck, one area where AMD rarely disappoints.

    • Rebrandeon says:

      The only childish person here is you, spamming with inane comments.

      Async compute is not DX12 but you’re too stupid to understand something simple as that.

  20. Redneckerz says:

    ”I’d call that bang-for-the-buck, one area where AMD rarely disappoints.”

    Likewise.

  21. Redneckerz says:

    @ Alexander Z:

    ”Polaris definitely isn't a viable upgrade path for me; 10 to 20 percent extra performance isn't worth buying a new card.”

    That ain't always the best way to decide, however. Twenty percent higher framerates than the predecessor at a similar price is still quite a bump. Unless you expect a successor to deliver a 50% or higher improvement in framerates, but that is nigh on impossible.

    @ Scali:

    ”At this point I’m not even sure if Polaris would be faster at all.”

    Compared to what? A GTX 970, which is an upper-mid-range card? The RX 480 is clearly one step lower on that bar, and so is its price. It seems like a great 1080p60 card, which would suit Steam users perfectly, given that the most popular resolution in use is 1080p. For the people who want more, like 1440p and beyond, there is more, of course, from both Nvidia and AMD. But like I said before, I don't think Polaris is their high-end solution, yet.

    • Alexandar Ž says:

      That isn’t always the best measure to go by, however. 20 percent higher framerates than the predecessor at a similar price is still quite a bump. Unless you want your successor to feature a 50% or higher improvement in framerates, but that is nigh on impossible.

      I’m not sure I follow you. Judging by the leaks, it’s about the level of a 980/390X. That’s about 20% over what I have. Which isn’t really worth the hassle of investing in a new card; it’s not enough to make a big difference in my overall experience.
      Obviously if someone looks to upgrade from something like a 670/770/280(x)/270(x)/… it’s a much more interesting offer.
      But that is not my case. And why should it be impossible? A 1070 – a direct successor – is ~50% faster than a 970, give or take a bit depending on game and resolution.

      I don’t mean to harp on Polaris; at its price point it’s looking like a good offer, but it’s hard to be excited about performance we had two years ago, and it wasn’t outlandishly expensive even then.

      • Scali says:

        but it’s hard to be excited about performance we had two years ago, and it wasn’t outlandishly expensive even then.

        Yes, exactly. Not to mention we already had HDMI 2.0 and DX12_1 on those cards back then (and Maxwell v2 also supports 16 viewports, with some OpenGL extensions for efficient usage, perhaps they can bring SMP to these cards as well: http://developer.download.nvidia.com/assets/events/GDC15/GEFORCE/Maxwell_Archictecture_GDC15.pdf).
        It gives me the same feeling as AMD’s CPU department. If you already have an Intel Core i7 from a few years ago, there’s nothing AMD can offer you.

    • Scali says:

      But like I said before, I don’t think Polaris is their high-end solution, yet.

      Nope, but the 1080 is not nVidia’s high-end solution either.
      Polaris just doesn’t paint a very rosy picture of AMD’s architecture. Sure, they can scale it up, but it won’t bring them to where nVidia is with Pascal. At this point it looks like it will bring them to the level of Maxwell.

      • Redneckerz says:

        @ Alexander Z:

        ”I’m not sure I follow you. Judging by the leaks, it’s about the level of a 980/390X. That’s about 20% over what I have.”

        I don’t know what you have, though 🙂 What card do you have right now?

        At 200 bucks, this RX 480 really looks like a hallmark card like the 7850/5770/4850 before it, and of course Nvidia’s own GTX 750 Ti, GTX 560 Ti, GTX 460, and 8800-era cards. Just cheap enough to enable quite a premium experience at 1080p. AMD’s own R9 2xx and R9 3xx cards basically just revolved around rebranding and slightly improving the core architecture of the 7850-era cards, which is quite a testament to its longevity. Heck, a 7870 card from 2012 still nets you quite playable framerates in most titles today. Even the older 5800-era cards are still quite alright for gaming.

        I can definitely see the RX 480 being one of those kinds of cards, especially when the rumored PS Neo is supposedly using an underclocked variant of this card.

        ”And why should it be impossible? A 1070 – a direct successor – is ~50% faster than a 970, give or take a bit depending on game and resolution.”

        They are exceptions, however.

        ”I don’t mean to harp on Polaris, at it’s price point it’s looking like a good offer, but it’s hard to be excited about performance we had two years ago, and it wasn’t outlandishly expensive even then.”

        It’s also hard to get excited when older Kepler cards perform worse and worse in more recent titles – for no good apparent reason, I must add.

        @ Scali:

        ”Yes, exactly. Not to mention we already had HDMI 2.0 and DX12_1 on those cards back then. It gives me the same feeling as AMD’s CPU department. If you already have an Intel Core i7 of a few years old, there’s nothing AMD can offer you.”

        Together with your:

        ”But also exactly as I predicted: AMD is a step behind, they didn’t have the extra iteration that nVidia made with Maxwell.”

        It is interesting to point out that in nearly every recent post of yours today, you have to point out a plus point for Nvidia whilst taking subtle digs at AMD. Even when taken with an objective POV, it is really annoying to see these in play. I get it. He gets it. We all get it. There is literally no need to always point to flaws. Just wanted to point this out because without those digs, your responses are fine to read.

        ”Nope, but the 1080 is not nVidia’s high-end solution either.”

        Definitely on a higher tier than the RX 480 and in a totally different price class targeting a different userbase.

        ”Polaris just doesn’t paint a very rosy picture of AMD’s architecture.”

        Based on a single card, one that will likely end up getting used in consoles, I find that too early a judgement call to make. For all we know, Pascal might have flaws that will only surface a few months from now, when these cards spread and when submodels are launched. I don’t think it’s fair to shoot AMD down almost instantly.

        ”Sure, they can scale it up, but it won’t bring them to where nVidia is with Pascal.”

        If Polaris is so negative for you, then why do you keep bringing up all the flaws? I mean, you run on Nvidia hardware, so in reality, what are you gaining by always pointing fingers? It’s just so one-sided, so I want to know why you do that. Honest question.

        ”At this point it looks like it will bring them to the level of Maxwell.”

        Nothing really wrong with that if they reach that kind of performance with Polaris at a lower price. Who knows what Vega will bring, and maybe when DX12 multi-adapter takes off, who knows what the performance of dual Polaris hardware will be. At the end of the day, it’s always good to have options. The graphics card world would be intensely boring if Nvidia were its sole supplier 🙂

      • Scali says:

        Even when taken with an objective POV, it is really annoying to see these in play. I get it. He gets it. We all get it. There is literally no need to always point to flaws. Just wanted to point this out because without those digs, your responses are fine to read.

        A “dig” is a subjective thing.
        I merely state objective facts. When taken with an objective POV, there is no issue with that.
        The discussion was about why people with a 970/980 would not be interested in Polaris. A context not discussed before, hence these facts had not yet been presented in light of that context.

        I think you can say that many of the cards you mentioned (e.g. the 460 and 8800) were ‘first-gen’ cards for a new API. They were very interesting for people coming from DX9 and DX10 hardware respectively.
        The cards after that were only mild improvements, so once you went ‘new API’, you would skip the next generation or two (no 560 or 9800). AMD’s Polaris is like that as well. It doesn’t offer much that wasn’t already available 2 years ago. It offers it at a lower price, but that doesn’t mean anything to people who already bought comparable hardware.

        Definitely on a higher tier than the RX 480 and in a totally different price class targeting a different userbase.

        That isn’t the point I was making though.
        The point is rather that both AMD and nVidia have a ‘higher tier’ than Polaris/1080.
        What price nVidia puts on those remains to be seen. I was never very interested in pricetags. I am interested in technology.
        What does price mean anyway? I recently bought a Hercules card for 10 euros. That’s a lot cheaper than Polaris. I guess that makes it better?

        Based on a single card, one that will likely end up getting used in consoles, I find that too early a judgement call to make.

        This is not a card we’re judging, it’s a new architecture. We can see its performance/watt at a given configuration. It’s not too hard to compare to the competition and extrapolate what it will do when scaled up or down.
        Bottom line is that the RX 480 and the 1070 are both in the 150W TDP window, and the 1070 delivers considerably more oomph. So there clearly is a big difference in performance/watt between Polaris and Pascal.
        We’re not just talking about ‘flaws’ here. This is a fundamental characteristic of the architecture.
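
        To make that extrapolation concrete: at an equal TDP, a performance/watt comparison reduces to a raw performance comparison. A toy sketch in Python – the ~150 W TDP figure comes from the discussion above, but the performance indices are hypothetical placeholders, not benchmark results:

```python
def perf_per_watt(perf_index: float, tdp_watts: float) -> float:
    """Performance index divided by board TDP (higher is better)."""
    return perf_index / tdp_watts

# Both boards sit in roughly the same 150 W TDP window (from the discussion).
# The performance indices are made-up placeholders, NOT measured results.
rx_480   = perf_per_watt(perf_index=1.0, tdp_watts=150.0)
gtx_1070 = perf_per_watt(perf_index=1.4, tdp_watts=150.0)

# At equal TDP, the perf/watt ratio collapses to the raw performance ratio.
print(round(gtx_1070 / rx_480, 3))  # → 1.4
```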

        If Polaris is so negative for you, then why do you keep bringing up all the flaws?

        It’s a new card, and it was only announced a day ago. We’ve only just begun discussing it!
        Question is more: why are you already nagging about this?

        It’s just so one-sided, so I want to know why you do that.

        Because AMD keeps releasing duds, it would seem.
        I’d gladly talk about an AMD chip that’s better than what nVidia is offering, but it simply isn’t there. That’s the problem with being neutral and objective. You can only work with the material you’re given. And AMD of late has been giving us duds.

        Nothing really wrong with that if they reach that kind of performance with Polaris at a lower price.

        If price is your only criterion, perhaps.
        But as I say, I am interested in the technology. So features are more important to me. Also, performance/watt is something I find more interesting than performance/buck. Because it tells us something about how efficient the architecture is. It’s a form of elegance, which is the hallmark of good engineering.

        At the end of the day, it’s always good to have options. The graphics card world would be intensely boring if Nvidia were its sole supplier 🙂

        You must have forgotten already that only a few lines up, we were discussing that AMD doesn’t really have compelling options for people who own an nVidia 970/980 or better.
        Which implies that nVidia is the sole supplier for that group.
        Exactly the same as what we see on the CPU side, where Intel is the sole supplier for an ever growing group of customers.

      • Alexandar Ž says:

        I don’t know what you have, though 🙂 What card do you have right now?

        Oh I’m sorry, a GTX 970. I said so a couple posts back, but it was in another sub-thread.

        It’s also hard to get excited when older Kepler cards perform worse and worse in more recent titles – for no good apparent reason, I must add.

        Kepler’s been here for what, 4 years? My impression is that Kepler was, at its original launch, more focused and more capable at the workloads prevalent at that time, while GCN only hit its stride later on. After all, Tahiti was beaten in 2012 by a much smaller and more power-efficient chip, a story repeated with Maxwell. Another factor is that AMD has to try to squeeze as much performance as possible out of GCN, because they are still using the very same architecture, with some improvements, or even the original one, in parts of their now-outgoing lineup.
        I don’t see any great conspiracy behind it, no evidence of ‘gimping’ of older cards, nor do any benchmarks of different driver versions between 290x and 780ti show it.

        At 200 bucks, this RX 480 really looks like a hallmark card like the 7850/5770/4850 before it, and of course Nvidia’s own GTX 750 Ti, GTX 560 Ti, GTX 460, and 8800-era cards. Just cheap enough to enable quite a premium experience at 1080p. AMD’s own R9 2xx and R9 3xx cards basically just revolved around rebranding and slightly improving the core architecture of the 7850-era cards, which is quite a testament to its longevity. Heck, a 7870 card from 2012 still nets you quite playable framerates in most titles today. Even the older 5800-era cards are still quite alright for gaming.

        Which is what I fail to be excited about. 1080p is by now basically a solved problem, it’s not offering anything new.
        From my perspective it would be much more beneficial if AMD released higher end parts first, because that would put pressure on prices in the part of the market I’m interested in.
        If anything this is the kind of strategy they’ve been trying and failing at since Core 2 Duo days. Always a bit behind, always a bit cheaper. I don’t think that’s a basis for a success story in this industry – I’d much rather pay a couple bucks extra for extra features and/or extra performance.
        And this kind of attitude reflects on their company politics as well – rarely do you see AMD bring handy new features, it’s mostly only when the competition forces them to do so and even then it’s rarely on par. Yet they are the first to cry foul and go weep to the press at any opportunity, which is something that really doesn’t inspire confidence in the company. I just really can’t stand this sleazy way of public relations.
        The RTG split and new leadership bring some hope though; now if only they would finally get rid of people like Richard Huddy and Roy Taylor – these two clowns have already done incalculable damage to AMD’s business.

      • Scali says:

        I don’t see any great conspiracy behind it, no evidence of ‘gimping’ of older cards, nor do any benchmarks of different driver versions between 290x and 780ti show it.

        Indeed, there’s too many variables at work here to draw any sort of conclusion either way.
        One thing we do know is that nVidia’s drivers only increase performance in games over the lifetime of a GPU. There is no drop.
        Their drivers, as far as vanilla D3D/OpenGL performance goes, have a very good baseline.

        But when new games arrive in the mix, you have no reference. Is the difference in performance related only to differences in GPU architecture? Or is the driver a factor there as well? Does the game need some tweaks to the driver for better performance, tweaks that the EOL GPUs no longer get? Or is it just a case of newer games focusing on slightly different performance characteristics, with the older GPUs falling behind because of that?

        I mean, take tessellation for example. Say nVidia has improved tessellation performance scaling going from Kepler to Maxwell. Now games arrive that make use of more tessellation, because that’s what the latest hardware is good at. Then it is only natural that Kepler won’t perform as well as Maxwell does. No conspiracy there.

        If at the same time another vendor does not make improvements in tessellation performance, then you will not see a performance disparity there. Again, no conspiracy.

        To use the typical car analogy:
        Say BMW introduces the new M3. It posts better laptimes on the Nordschleife than the old M3 did.
        Now people are claiming it is unfair because the new M3 had a better driver. Put the same driver in the old M3, and you’d get better laptimes as well.
        But that’s not the case. BMW put equally skilled drivers in both cars, and they ran the cars in equal conditions. Both drivers pushed their car to its limits. The new car just has an improved engine and better handling, so it goes faster in some parts of the circuit.

  22. Redneckerz says:

    @ Scali:

    ”A “dig” is a subjective thing. I merely state objective facts. When taken with an objective POV, there is no issue with that.”

    You do get that it isn’t necessary to consistently point out negatives about AMD, right? They aren’t really subtle to start with.

    ”The discussion was about why people with a 970/980 would not be interested in Polaris. A context not discussed before, hence these facts were not yet presented in light of this context yet.”

    But in general terms, one can easily see all the subtle jabs you take. It’s not necessary to make these jabs in the first place, especially when they are so plentiful. It’s just distracting, really.

    ”It doesn’t offer much that wasn’t already available 2 years ago.”

    It doesn’t have to, when its clear purpose is to score on value for performance. Besides that, it’s not really known yet what Polaris supports or doesn’t support in terms of tiers. Who says they won’t do updates like Nvidia did with Maxwell? A lot of it is still very much unknown. For what it’s worth, the RX 480 is an attractive solution for a lot of gamers.

    ”It offers it at a lower price, but that doesn’t mean anything to people who already bought comparable hardware.”

    Which is an unanswerable remark. Just like how people who buy a Fury X for gaming don’t have much to gain from the new Nvidia cards in terms of framerates in current games.

    ”That isn’t the point I was making though.”

    So why name the 1080 then? It’s in a different price class than the Polaris card.

    ”I am interested in technology.”

    When applicable, that is 🙂

    ”What does price mean anyway? I recently bought a Hercules card for 10 euros. That’s a lot cheaper than Polaris. I guess that makes it better?”

    Whilst having a philosophical discussion on the premise of the word ”price” is interesting, that is beyond the scope of what is being discussed here.

    ”This is not a card we’re judging, it’s a new architecture. We can see its performance/watt at a given configuration. It’s not too hard to compare to the competition and extrapolate what it will do when scaled up or down.”

    I still don’t think it’s fair to shoot down the whole Polaris architecture so quickly. We hardly know anything about it. Given AMD’s scaling efforts, Polaris will go into thin clients and perhaps even lower-end x86 systems. Perhaps it’s a forward-thinking architecture like the older GCNs were as well. Fact is, we don’t know. I am not going to pick sides just yet.

    ”Bottom line is that the RX 480 and the 1070 are both in the 150W TDP window, and the 1070 delivers considerably more oomph.”

    And it’s significantly more expensive, since they aren’t in the same price window.

    ”It’s a new card, and it was only announced a day ago. We’ve only just begun discussing it!”

    Taking digs along the road of discussion is distracting, to say the least. To use the road example: there is no need to have someone constantly pointing out that there is a sign to the left or right every single time.

    ”Because AMD keeps releasing duds, it would seem.”

    I don’t think that is really why, but I do think that is yet another dig. Nvidia releases duds as well; the world isn’t that black and white. For what it’s worth, I really get the impression that you feel it is. And it’s not. Nvidia has fucked up in the past as well, by giving ridiculous figures for its Tegra K1 SoCs, claiming performance levels better than the PS3 and X360. And the disappointing thing is: they don’t need that kind of overselling, since the real-world performance of the K1 and X1 is impressive already.

    ”I’d gladly talk about an AMD chip that’s better than what nVidia is offering, but it simply isn’t there.”

    But whenever it is there, you downplay it and point out that Nvidia is better at other things. Which reminds me: when was the last time Nvidia did something that wasn’t ”better” than the competition, or that had some kind of flaw that called for discussion?

    ”If price is your only criterion, perhaps.”

    Well, yeah, in this context it is. We both know Polaris isn’t a high-end chip. We both know that this card is going to settle for similar performance to what we already have now, at a significantly lower price. So why ask for more? That is what Vega is (supposedly) going to target.

    ”It’s a form of elegance, which is the hallmark of good engineering.”

    Well, both sides are pretty efficient at these things, as Nvidia shows with the Tegra X1 and AMD with their APU ranges in sub-5-watt territory.

    ”You must have forgotten already that only a few lines up, we were discussing that AMD doesn’t really have compelling options for people who own an nVidia 970/980 or better.”

    At the moment, the RX 480 is just aiming for near-equal or good-enough performance at 1080p at a (significantly) lower price. That’s it. And of course they have some compelling options, like the Fury range and all that.

    ”Which implies that nVidia is the sole supplier for that group.”

    Except they aren’t, of course.

    ”Exactly the same as what we see on the CPU side, where Intel is the sole supplier for an ever growing group of customers.”

    There is Zen along the way.

    • Scali says:

      You do get that it isn’t necessary to consistently point out negatives about AMD, right?

      You do get that this is my blog, and that I can post whatever I please, however I please, and that I can engage with other people in conversation about whatever I like, however I like?
      If you don’t like it here, then don’t visit, and certainly don’t comment here.

      If there’s anything that isn’t necessary, it’s people whining about what I post on my blog.
      I’m quite sure that the people who do like to read this blog aren’t waiting for posts like that either. In fact, plenty of commenters already called you out on your posting style and lack of content. You basically overstayed your welcome a long time ago. At this point I merely tolerate you, don’t push it.

      Just like how people who buy a Fury X for gaming don’t have much to gain from the new Nvidia cards in terms of framerates in current games.

      Lol? The 1080 completely creams the Fury X (and every other card on the market) in most games: http://www.anandtech.com/show/10326/the-nvidia-geforce-gtx-1080-preview/2
      Even for people who own a Fury X, the 1080 is quite an interesting upgrade: More features, more performance, more memory, lower power consumption.
      Don’t try to make the world look fair, it isn’t.

      forward-thinking architecture

      Ah, is that what it’s called now when you rebrand it 3 times or more?
      Forward-thinking architecture, my arse.

      We both know that this card is going to settle for similar performance to what we already have now, at a significantly lower price.

      You realize that this means absolutely nothing, don’t you?
      If nVidia drops its prices of the 980 to $199 tomorrow, then Polaris has nothing going for it anymore. Price is all it has. The technology isn’t great. It’s only good at the right price (which is why AMD has to dump them that cheaply, whereas nVidia can charge a premium for the 1070/1080).

  23. vmavra says:

    Don’t waste your time with Scali, because any time you post something that puts him in a bad light, he’ll just delete your posts. Sad really because I really respected him and his posts until he proved he’s just a hypocrite who can’t handle any criticism.

    Scali, I feel sorry for you, because you are clearly very knowledgeable, but that doesn’t excuse you deleting posts that just raise questions and painting them as garbage.

    • Scali says:

      Pfft, just look at all the ‘garbage’ and ‘criticism’ I am tolerating right here, mostly from Redneckerz. So anyone can see that your claims are patently false. You are the hypocrite here, going for personal attacks as soon as you see something you don’t like. Grow up, and learn how to behave like a civilized adult.

      And yes, I do delete comments that are nothing but trolling. I warned you, but you kept going.
      Redneckerz may post a lot of garbage and personal attacks, but at least there is still *some* substance in his posts every now and then. Your posts didn’t have anything to say. Just trying desperately to put me in a bad light, not based on anything.
      This is very exceptional by the way, I rarely delete or edit posts (just look at all the comments I didn’t delete on this and various other blogposts). I am very tolerant in general. But enough is enough.

  24. Redneckerz says:

    @ Vmavra:

    ”Don’t waste your time with Scali, because any time you post something that puts him in a bad light, he’ll just delete your posts.”

    He hasn’t done that (yet), so I don’t see where that comes from. WordPress takes some time to make posts visible, which is nothing he can do anything about, so that’s quite a wild claim, really. But I have already experienced that he can do far worse than that. Anyway, as long as he isn’t doing that and just keeps the playground at the original location, it’s fine.

    ”Sad really because I really respected him and his posts until he proved he’s just a hypocrite who can’t handle any criticism.”

    Regardless of that, even with everything else out of the way, it’s just really distracting that the way he posts and comments is quite one-sided. For him to say ”but it’s my blog” is just another unanswerable remark. Yes, it’s his blog, but that shouldn’t be an excuse to make one-sided comments, especially when you say you are neutral and go after facts.

    PS: I also saw your now-deleted post. For the sake of clarity: that blog was deleted for different reasons. I know, because one of the people mentioned contacted me about it. So it’s not a case of ”he deleted my posts on purpose”. The rest of your comment (hard on taking critique, etc.) is still on point, of course. And perhaps he took your ”questions” (I don’t know what they were) the wrong way. But there you have it.

    @ Scali:

    ”You do get that this is my blog, and that I can post whatever I please, however I please, and that I can engage with other people in conversation about whatever I like, however I like?”

    Sure, it’s your blog. That doesn’t mean you have to constantly point out flaws – I think everyone in this topic knows your views by now. It just distracts from the actual points at hand.

    ”If you don’t like it here, then don’t visit, and certainly don’t comment here.”

    Most people have already done that, given that kind of attitude.

    ”If there’s anything that isn’t necessary, it’s people whining about what I post on my blog.”

    So valid critique on your blog is a no-go? Good to know.

    ”I’m quite sure that the people who do like to read this blog aren’t waiting for posts like that either.”

    I agree. They prefer armchair analysis a lot more.

    ”In fact, plenty of commenters already called you out on your posting style and lack of content.”

    Don’t bring up the ad populums again.

    ”At this point I merely tolerate you, don’t push it.”

    Is that a threat? Pretty sure it’s a threat. I’m not really sure what to think of it.

    ”Lol? The 1080 completely creams the Fury X”

    ”Completely creams” is quite an overstatement judging by the link. That kind of word usage was applicable to the Radeon 9700 and GeForce 8800, not here. It improves quite a bit in a few games, but going from 60 fps to 100 at 1080p in the Witcher 3 comparison is just nonsensical at those settings. At that point a card like the GTX 1080 should just supersample from 4K and output to 1080p at a fixed 60 fps. And that’s just one example.
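
    For reference, the supersampling resolve suggested here boils down to rendering four samples per output pixel and averaging each 2×2 block. A minimal sketch in plain Python (a simple box-filter resolve; an illustration, not how any driver actually implements it):

```python
def downsample_2x2(samples):
    """Resolve a supersampled grid by averaging each 2x2 block into one
    output pixel, e.g. a 3840x2160 render target down to 1920x1080."""
    h, w = len(samples), len(samples[0])
    return [
        [(samples[2 * y][2 * x] + samples[2 * y][2 * x + 1] +
          samples[2 * y + 1][2 * x] + samples[2 * y + 1][2 * x + 1]) / 4.0
         for x in range(w // 2)]
        for y in range(h // 2)
    ]

# A tiny 4x4 grid of luminance samples -> a 2x2 output image.
grid = [
    [0, 0, 4, 4],
    [0, 0, 4, 4],
    [1, 3, 2, 2],
    [1, 3, 2, 2],
]
print(downsample_2x2(grid))  # → [[0.0, 4.0], [2.0, 2.0]]
```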

    ”Even for people who own a Fury X, the 1080 is quite an interesting upgrade: More features, more performance, more memory, lower power consumption.”

    People who recently shelled out big bucks for a card like that are unlikely to shell out those big bucks again for what amounts to roughly 30% more performance in certain titles (not all).

    ”Ah, is that what it’s called now when you rebrand it 3 times or more?”

    The fact that AMD could rebrand their top cards 2-3 times and still have competitive framerates over the course of 3-4 years is quite forward-thinking, yes. Anyone with a 2012 7850/7870/7950/7970 card is laughing his balls off at the performance these cards still have 3-4 years down the road. In the meantime, Nvidia went through multiple architectures.

    ”You realize that this means absolutely nothing, don’t you?”

    For the moment, it does.

    ”If nVidia drops its prices of the 980 to $199 tomorrow, then Polaris has nothing going for it anymore.”

    But that is just assuming it will. That card, according to the Anandtech link, is still 429 bucks. That is significantly more.

    ”Price is all it has.”

    I think it has more than that, but hey, whatever you want to believe. I don’t share your intense dislike for a brand.

    ”The technology isn’t great.”

    Wow. If that isn’t either a troll comment or severe hatred for the AMD brand…

    ”It’s only good at the right price (which is why AMD has to dump them that cheaply, whereas nVidia can charge a premium for the 1070/1080).”

    Why does it piss you off so much that AMD releases a card in the first place, let alone a 200-dollar card that obviously doesn’t target cards like the 1070? Seriously, it’s not like AMD killed your parents or anything, so this anger is really misplaced.

    ”Grow up, and learn how to behave like a civilized adult.”

    Heh, the irony.

    ”Redneckerz may post a lot of garbage and personal attacks, but at least there is still *some* substance in his posts every now and then.”

    Thanks for the compliment, I guess? Lol.

    @ Alexander Z:

    ”Oh I’m sorry, a GTX 970. I said so a couple posts back, but it was in another sub-thread.”

    Thank you 🙂 The most popular card for gaming.

    ”Kepler’s been here for what, 4 years? My impression is that Kepler was, at its original launch, more focused and more capable at the workloads prevalent at that time, while GCN only hit its stride later on.”

    That may very well be, but one can’t deny how quickly Nvidia ditches architectures for the gaming market. In a generation where AMD introduced and stayed on GCN (a similar tactic to what they did with TeraScale), Nvidia has gone through Fermi, Kepler, Maxwell and now Pascal. Sure, an architecture will live on in low-end cards, but it certainly is interesting to see how quickly Nvidia ditches architectures in the mid- to high-end ranges.

    ”Another factor is that AMD has to try to squeeze as much performance as possible out of GCN, because they are still using the very same architecture, with some improvements, or even the original one, in parts of their now-outgoing lineup.”

    Hence why I called it a ”forward-thinking” architecture. Polaris is really the first ”big” update in that respect. I think it’s fairly impressive that 7850/7870 cards are still quite competitive now, 4 years on.

    ”I don’t see any great conspiracy behind it, no evidence of ‘gimping’ of older cards, nor do any benchmarks of different driver versions between 290x and 780ti show it.”

    I never said there was a conspiracy to be found, but there is (quite a lot of) indirect evidence that Nvidia downgrades the performance of Kepler cards on purpose, to trick customers into buying new Nvidia cards with newer architectures. See also https://scalibq.wordpress.com/2016/05/17/nvidias-geforce-gtx-1080-and-the-enigma-that-is-directx-12/#comment-8224 for these.

    Let me be clear that this isn’t something new or unexpected – Apple has the same thing going on. I figure it’s even legal to do, hence why it’s done, but it really undermines the value of these cards.

    ”Which is what I fail to be excited about. 1080p is by now basically a solved problem, it’s not offering anything new.”

    Is it? Because most gamers play at 1080p. This card specifically targets that at a lower price, on a new architecture for the future. That really is all there is to it. It’s not a GTX 1070 beater, or an electricity hog (150 watts really is quite low for such a card; not too many years ago we had cards that easily demanded double this power, since they were on a much larger chip).

    ”From my perspective it would be much more beneficial if AMD released higher end parts first, because that would put pressure on prices in the part of the market I’m interested in.”

    But that is solely your personal preference. And in 2012, they did release higher-end parts first, which they have rebranded a few times (a sign of their relative performance). The RX 480 is the first time in 4 years they took a different approach. And anyway, Vega is coming around at the end of the year. It’s not like you have to wait ages.

    ”If anything this is the kind of strategy they’ve been trying and failing at since Core 2 Duo days.”

    If it was a failed strategy, AMD/ATI wouldn't still be around, nor would they be announcing new cards.

    ”Always a bit behind, always a bit cheaper. I don’t think that’s a basis for a success story in this industry – I’d much rather pay a couple bucks extra for extra features and/or extra performance.”

    4850/4870/5770/5870/6870 cards were great value for money and highly successful. Heck, 5770/5870 cards are still listed as minimum specs by developers, and that is a *7* year old GPU. That is some serious mileage, rivaled only by the GeForce 8800 series (which kept being rebranded until the GTS 250 – see, Nvidia rebrands as well) and the Radeon R300/R420 series. If you bought a 5770/5870 in 2009, then you really got multiple times its value out of it. Meanwhile, if you bought a GTX 260 back then…

    ”And this kind of attitude reflects on their company politics as well – rarely do you see AMD bring handy new features, it’s mostly only when the competition forces them to do so and even then it’s rarely on par.”

    AMD has multiple ”firsts” under its belt – see TeraScale, the AMD/ATI Xenos, the Radeon HD 4800 series, the Radeon R300, and the like.

    ”Yet they are the first to cry foul and go weep to the press at any opportunity, which is something that really doesn’t inspire confidence in the company.”

    I feel this is quite a polarizing statement.

    ”I just really can’t stand this sleazy way of public relations.”

    Nvidia, and in particular Jen-Hsun Huang, also has a great tendency to lean on PR with their cards and their Tegras. It's not needed, but they do it anyway. In other words: AMD does it, but so does Nvidia. Huang's announcements border on the cringeworthy sometimes. Definitely not a great presenter.

    • Scali says:

      Doesn't mean you have to constantly point out flaws – I think everyone in this topic knows your views by now.

      Firstly, ‘constantly pointing out flaws’ is merely your opinion. That is not my opinion.
      Secondly, how can people ‘already know my views’? Polaris was only presented to the press a few days ago, and these comments here were the first and only time I even spoke about it.
      Lastly, I already pointed this out before, so you’re going round in circles again.
      Make your own blog about whatever you like. Point remains that this is *my* blog, where I post *my* opinions, experiences etc. I do that in the way *I* see fit. If you don’t like that, don’t visit here. Don’t expect me to change on your behalf. That will never happen, and I do not want repeated posts on this. I already warned you about this before, and you keep going. Bring it up again and I will start deleting your posts. Do I make myself clear?
      You can discuss technology all you like. Discussing me and my blog/comments is off-limits from now on. You have made it known that you do not agree with how I post things here. Your comment is noted.

      ”Completely creams” is quite an overstatement judging by the link.

      It is? Grand Theft Auto V is about 55% faster than the Fury X.
      In BF4 I see the 1080 being about 47% faster.
      Tomb Raider is even 61% faster.
      Other games may be less extreme, but we’re still talking a range of 20-30%.
      All that at much lower power consumption as well, and with DX12_1 support, new VR features, HDMI 2.0, and twice the memory.

      I think it has more than that, but hey, whatever you want to believe

      What exactly? You should name something first. It has the brand AMD on it, is that what you mean?

      But that is just assuming that it will. That card, according to the Anandtech link, is still 429 bucks. That is significantly more.

      You are missing the point. Try again.

      Why does it piss you off so much anyway that AMD releases a card in the first place, let alone a 200 dollar card that obviously doesn't target cards like the 1070?

      The point isn’t AMD releasing a card.
      The point is people like you attacking me and trying to force me to like this card and praise it.
      I’m a tech guy, not a fanboy. As a tech guy, I think Polaris is underwhelming technology, and probably marks the beginning of the end of AMD (as I already warned about before: AMD is behind on nVidia and Intel in a number of areas… this generation again does not appear to close the gap, but rather extend it even further).
      If you have problems dealing with people who have different opinions, then go discuss Polaris in a place where people share your opinion.
      Because to me it looks like you have mental issues. Every time you read something slightly less positive about AMD than you were hoping for, your brain short-circuits, and you start attacking people and making all sorts of completely unreasonable demands.

      • Redneckerz says:

        ”It is? Grand Theft Auto V is about 55% faster than the Fury X.”

        To get playable framerates in 4K, no less.

        ”In BF4 I see the 1080 being about 47% faster.”

        To get 60 FPS in 4K. But that doesn't go for all titles, of course.

        ”Tomb Raider is even 61% faster.”

        Does it make sense to go from a stable 30 fps to 50 fps at 4K? You ought to lock to 30 at 4K then, or disable some stuff to get to 60.

        ”Other games may be less extreme, but we’re still talking a range of 20-30%.”

        The overall conclusion, as noted by Anandtech, is that the card is 32% faster across all games. Now, seeing as how Alexander doesn't buy a card if it's only 20/30% better, do you think he will buy a GTX 1080 if he had a Fury X, which has half the memory of the GTX 1080 yet is ”only” 30% slower at these inane resolutions? That isn't a ”complete cream” in my book. It gets beaten by quite some margin, but it isn't getting creamed imo.

        ”What exactly? You should name something first. It has the brand AMD on it, is that what you mean?”

        Power/price, console wins with this kind of card, the ability to scale down significantly, and no, we don't know yet what DX tiers Polaris will support, so pointing out that Nvidia has DX12_1 support is pointless for the moment.

        ”You are missing the point. Try again.”

        Oh, am I really? You state: ”If nVidia drops its prices of the 980 to $199 tomorrow, then Polaris has nothing going for it anymore. Price is all it has.” That card is still 429 bucks right now, so AT THE MOMENT the RX 480 has the price edge, delivering near-980 performance at half the price. You saying ”if Nvidia drops prices tomorrow” is just an assumption. If you love facts so much, then don't throw these kinds of things into the discussion, you know 🙂

        ”I’m a tech guy, not a fanboy. As a tech guy, I think Polaris is underwhelming technology, and probably marks the beginning of the end of AMD”

        But if people change ”Polaris” into ”Maxwell” and ”AMD” into ”Nvidia”, it's ”Those statements come out of the blue and have no basis in reality”, but when you say it about AMD, it somehow does?

        It's like you don't understand that Polaris was a mid-end chip from the start, and you expect GTX 1080-beating levels of performance from it.

        Let's also not forget that AMD is pretty much solely responsible for delivering the heart and the oomph of current-gen consoles. That is still a significant industry to have in your portfolio.

        ”(as I already warned about before: AMD is behind on nVidia and Intel in a number of areas… this generation again does not appear to close the gap, but rather extend it even further).”

        Just wait for Vega and then draw your conclusions. If Vega is still significantly behind Pascal (in the sense that Pascal would yield better overall framerates in games), then you can call Vega a genuine disappointment – UNLESS it's priced significantly lower than Nvidia's high-end offerings (which is pretty doubtful, since that's what Polaris is for). And then AMD just has to try again. Nvidia had similar kinds of shit with the GeForce FX (well, even more shit than what Polaris already seems to cause with you) and they came back as well. So yeah, I'm not worried in the slightest. In fact, I would like it if Intel became a sort of third player, upped the ante on GPUs and released discrete cards again based on their tech.

      • Scali says:

        Does it make sense to go from a stable 30 fps to 50 fps at 4K? You ought to lock to 30 at 4K then, or disable some stuff to get to 60.

        Uhh, did you forget about G-Sync/FreeSync already? We don’t need to hard-lock our games to 60 fps anymore! They can be perfectly smooth at framerates slightly below 60 fps. No need to drop all the way down to 30 fps anymore.

        That isn't a ”complete cream” in my book. It gets beaten by quite some margin, but it isn't getting creamed imo.

        Well, then we have a difference of opinion. I think 50-60% faster is quite a big deal, and not something we see everyday.

        Power/price, console wins with this kind of card, the ability to scale down significantly, and no, we don't know yet what DX tiers Polaris will support, so pointing out that Nvidia has DX12_1 support is pointless for the moment.

        None of that means much.
        Power/price depends solely on price point. I said, disregard price for a moment.
        “Console wins”, who cares? We’ve just seen the 1080, with no console wins, beat the “console wins”-powered Fury X by 50-60% in various games. Console wins mean nothing (it doesn’t even mean profit for AMD), except in AMD’s marketing material.
        “Ability to scale down significantly”, what do you even mean by that?
        Also, no, we don’t know what tiers Polaris supports, but we DO know that nVidia supports DX12_1, and you can’t really go up much from there.
        So best-case scenario is that Polaris also has DX12_1. But given AMD’s total radio-silence on DX12_1 features, I get the feeling that DX12_0 is more likely.
        So if you already have an nVidia card, you’re not likely to find the DX12-featurelevel to be a reason to upgrade to Polaris.

        That card is still 429 bucks right now, so AT THE MOMENT the RX 480 has the price edge, delivering near-980 performance at half the price. You saying ”if Nvidia drops prices tomorrow” is just an assumption. If you love facts so much, then don't throw these kinds of things into the discussion, you know 🙂

        The point is: take the price-edge out of the equation. What other edges do you have left?
        None, apparently, because you’re quick to try and put price back into the equation, rather than actually discussing the scenario presented.
        You don’t seem to grasp that. I never claimed that nVidia will drop the prices that far, and I don’t expect they will (so the word ‘assumption’ is completely out of place here). It’s just a thought-experiment: what if they did? What if the 980 and Polaris were the same price? What would still make the Polaris interesting?

        But if people change ”Polaris” into ”Maxwell” and ”AMD” into ”Nvidia”, it's ”Those statements come out of the blue and have no basis in reality”, but when you say it about AMD, it somehow does?

        Erm, how does that make sense?
        Maxwell is last-gen. Back when Maxwell was introduced, it made a significant step in terms of performance/watt, and gave nVidia a considerable edge over AMD. It was the best of its generation. But nobody expects a 2-year old architecture at 28nm to be competitive with today’s 16nm architectures and new GDDR5X and HBM memory.
        Polaris appears to be closer to Maxwell in performance/watt than Pascal, based on the estimates that it performs between 390/390X and 970/980, and the TDP of 150W that AMD has shared.
        We know that the GTX 970 is 145W TDP, and the 1070 is 150W TDP, so they are close in terms of power usage. The 1070, however, is a great deal faster than the 970 and 980.
        Now, if Polaris would perform close to the 1070 at the same TDP, it would be something to get excited about. Performing close to 970/980 means that AMD needs the 16nm FinFET process to match the level of efficiency that nVidia had 2 years ago at 28nm.
        How on earth should we get excited about that?
        I think it’s pretty obvious that this is underwhelming.
        And I think it’s also pretty obvious that this card is not going to be the saviour of AMD. They’ll be selling Polaris at low profit margins to compensate for the lack of efficiency. Not what AMD needs right now.
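        The efficiency argument above is essentially a perf-per-watt ratio. As a minimal sketch, using the TDP figures quoted in this thread but *assumed* relative-performance indices (illustrative placeholders, not benchmark data):

        ```python
        # Back-of-the-envelope perf/watt comparison.
        # Performance indices are illustrative assumptions (GTX 970 = 100);
        # the TDP figures are the ones quoted in the discussion above.
        cards = {
            "GTX 970":  (100, 145),
            "GTX 1070": (160, 150),  # assumed: a great deal faster than the 970
            "RX 480":   (105, 150),  # assumed: between 970 and 980 performance
        }

        perf_per_watt = {name: perf / tdp for name, (perf, tdp) in cards.items()}
        for name, ppw in perf_per_watt.items():
            print(f"{name}: {ppw:.2f} perf/W")
        ```

        Under these assumptions the RX 480 lands roughly at the 970's efficiency level while the 1070 pulls well ahead at the same TDP, which is the gap the paragraph above describes.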

        It's like you don't understand that Polaris was a mid-end chip from the start, and you expect GTX 1080-beating levels of performance from it.

        That actually has nothing to do with it, see above.

        Let's also not forget that AMD is pretty much solely responsible for delivering the heart and the oomph of current-gen consoles. That is still a significant industry to have in your portfolio.

        Why do you keep pointing at consoles? It means nothing.
        In fact, rumours have surfaced that Nintendo will be going with an nVidia Tegra-based solution rather than AMD: http://www.tweaktown.com/news/52119/nintendo-nx-powered-nvidia-tegra-processor-amd-chip/index.html
        So the tables might be turning here.

        Just wait for Vega and then draw your conclusions.

        Why? Vega isn’t going to be hugely different from Polaris. What AMD names ‘Polaris’ and ‘Vega’ is pretty much what nVidia calls ‘GP104’ and ‘GP100’. They’re the same basic architecture, implemented at different scale levels.
        They try to sell it as having better perf/watt than Polaris, but I think that is mostly because they factor in the wins from HBM2 as well.
        Vega will be released too shortly after Polaris to have a significantly more optimized architecture.
        Besides, AMD’s own roadmap showed Polaris as having the big ‘2.5x perf/watt’ gains (coming from new arch + 16nm/FinFET), the gains going from Polaris to Vega were pictured as much smaller.

        Nvidia had similar kinds of shit with the GeForce FX (well, even more shit than what Polaris already seems to cause with you) and they came back as well.

        nVidia was nVidia. When GeForce FX bombed, they had a huge marketshare, and equally large mindshare. They could afford a misstep, because the generations before the FX had been huge successes.
        AMD however does not have this luxury. They’ve been losing marketshare very quickly, and are at an all-time low. They have been trying to get back in the race for a few years now, and have failed so far. So Polaris wouldn’t be their first disappointment, the 3×0/Fury were also a disappointment, and before that, the 2×0 series didn’t stand much of a chance against Maxwell. So AMD has already been struggling for about 2 years at this point. And we’re not just talking performance, but AMD was also lacking other things, such as HDMI 2.0 and DX12_1.
        The further they get behind on features, the more development they have to put in to close the gap.

    • vmavra says:

      @Redneckerz
      He most definitely jumped to conclusions and considered me an AMD fanboy or whatever group he has a beef with, when my posts had nothing to do with AMD vs Nvidia or whatever.

      • Scali says:

        This kind of thing is pretty much the reason why your posts get deleted:
        You are doing nothing but attacking me and spreading lies (jumping to conclusions basically, oh the irony… idiot). Why would I possibly tolerate that garbage?
        I have not said anything even remotely related to “AMD fanboy” to you, so I don’t know where you got that from. Your posts just don’t make any kind of positive contribution to the discussion, they’re not even remotely on-topic, and that is also what I answered to you already, so you know why your posts got deleted. Go troll elsewhere.

        If you think you’re so smart, discuss the article, prove me wrong on technological points (it’s not like I deleted those, there just weren’t any).
        It’s so childish to just attack someone personally when you don’t agree with them. And even more childish to then go and complain that your earlier personal attacks were removed. No more.

  25. Redneckerz says:

    ”Uhh, did you forget about G-Sync/FreeSync already? We don’t need to hard-lock our games to 60 fps anymore!”

    Oh well, excuuuseee me for briefly forgetting that for a moment.

    ”Well, then we have a difference of opinion. I think 50-60% faster is quite a big deal, and not something we see everyday.”

    For select titles it is.

    ”None of that means much.”

    If you say so. I think it does.

    ”Power/price depends solely on pricepoint. I said, disregard price for a moment.”

    Why disregard a huge plus point just to suit a narrative?

    ”“Console wins”, who cares? We’ve just seen the 1080, with no console wins, beat the “console wins”-powered Fury X by 50-60% in various games.”

    Not talking about the Fury X in this context, but the RX 480, where supposedly a lower-clocked variant will be in the PS NEO. And let's not forget that current consoles are also based on AMD hardware, so that is, like I said, still a sizeable industry they have under their belt.

    ”Console wins mean nothing (it doesn’t even mean profit for AMD), except in AMD’s marketing material.”

    Right, that's why it's in 75 million current-gen consoles so far (Nintendo+Sony+Microsoft). And let's not forget the X360 either, with 80 million sold, or the Wii, which was a huge success. The PS3 was a ton more difficult to program for, and the RSX with its separate shaders was quite behind the curve. So yeah, it means something.

    ”“Ability to scale down significantly”, what do you even mean by that?”

    What it says. Current AMD tech can scale all the way down to a few watts and to various configurations, whilst Nvidia makes a few ARM-based platforms with a desktop-like GPU. Intel has a different starting point – they scale all the way up.

    ”Also, no, we don’t know what tiers Polaris supports, but we DO know that nVidia supports DX12_1, and you can’t really go up much from there.”

    Yes, I know. But it's silly to make that comparison when the supported tiers of Polaris are still unknown.

    ”So if you already have an nVidia card, you’re not likely to find the DX12-featurelevel to be a reason to upgrade to Polaris.”

    Oh well, DX12 is still quite young, so to speak. It hasn't been long since games still had DX9 codepaths. Only for the past year or so do you see more games popping up with a DX11-only codepath, which is of course partially influenced by current-gen consoles.

    ”The point is: take the price-edge out of the equation. What other edges do you have left?”

    The point is that it's silly to take out the price edge. As it stands now, the RX 480 performs similar to a 980 at half the price. That is a hugely attractive offer for people who want to upgrade quite cheaply.

    And then you can say ”But if Nvidia does a price drop” – I don't go on ”if”s. There is no price drop NOW, and people who are looking for an upgrade will notice this.

    ”None, apparently, because you’re quick to try and put price back into the equation, rather than actually discussing the scenario presented.”

    Let's take wattage out of the equation as well then. Or shader cores. It's silly to drop big points out of the equation to suit a narrative, or to be right. If you want to do that, by all means, go for it. Not something I will do 🙂

    ”I never claimed that nVidia will drop the prices that far, and I don’t expect they will”

    You made an assumption: ”If nVidia drops its prices of the 980 to $199 tomorrow, then Polaris has nothing going for it anymore. Price is all it has.” – hence the ”if”. If you don't think Nvidia will drop their prices anymore, then the price edge remains solid.

    ”It’s just a thought-experiment: what if they did? What if the 980 and Polaris were the same price?”

    It's irrelevant to what happens in reality. Else we can make all sorts of assumptions: ”What if the RX 480 was 7 TF instead, at 90 watts power usage?”, ”What if the GTX 1080 was just a rebranded Maxwell?” What if, what if…

    ”What would still make the Polaris interesting?”

    Near-980 performance, and you can be fairly sure its architecture will remain supported for a while (hence the forward-thinking architecture again).

    ”Erm, how does that make sense?”

    Well, try again.

    ”Polaris appears to be closer to Maxwell in performance/watt than Pascal, based on the estimates that it performs between 390/390X and 970/980, and the TDP of 150W that AMD has shared.”

    And you get all that for a measly 200 bucks. What a card 🙂

    ”Now, if Polaris would perform close to the 1070 at the same TDP, it would be something to get excited about.”

    Hmm: ”It's like you don't understand that Polaris was a mid-end chip from the start, and you expect GTX 1080-beating levels of performance from it.”

    Stop thinking Polaris is a high-end chip when it isn't.

    ”Performing close to 970/980 means that AMD needs the 16nm FinFET process to match the level of efficiency that nVidia had 2 years ago at 28nm.”

    As in framerates? Because there are other cards for that, you know?

    ”I think it’s pretty obvious that this is underwhelming.”

    Well, then you don't buy it. Problem solved. For a lot of other customers out there, it's a good card. Not everyone is as interested in tech as you.

    ”And I think it’s also pretty obvious that this card is not going to be the saviour of AMD.”

    Gee, you don't say, when it's a mid-end chip to begin with.

    ”They’ll be selling Polaris at low profit margins to compensate for the lack of efficiency. Not what AMD needs right now.”

    Speculation.

    ”That actually has nothing to do with it, see above.”

    It does. Because you really just expected Polaris to be near GTX 1070/1080 performance for you to find it ”interesting”. And when it turns out it isn't (gee, you don't say, with a mid-end chip!), you are ”underwhelmed”.

    ”Why do you keep pointing at consoles? It means nothing.”

    It means a lot, since they power 80 million of these things now. If that means nothing, then your whole Nvidia talk means nothing either. See how ludicrous that sounds? Case in point.

    ”In fact, rumours have surfaced that Nintendo will be going with an nVidia Tegra-based solution rather than AMD: http://www.tweaktown.com/news/52119/nintendo-nx-powered-nvidia-tegra-processor-amd-chip/index.html”

    And the other day it was said they went for a similar setup to the PS4, and the day before that it was a PPC chip. And they also have an NX handheld on the way – also with talk of using Tegra units. That stuff changes nearly every day, unlike the PS NEO and Xbox Scorpio. It really means nothing.

    And even then, if they would go for Pascal (not saying they would), how? Would they go for ARM chips? Because even Nvidia's top-end Tegra X1 is just a bit more powerful than last-gen, except for having a current-gen feature set.

    ”So the tables might be turning here.”

    I feel that on this part you are grasping at straws just to put AMD in a bad light. AMD has a whole dedicated semi-custom division for this kind of thing, Nvidia doesn't, so it's quite unlikely they will go around making a custom SoC for Nintendo (besides, Nintendo isn't about power or fancy features).

    ”Why? Vega isn’t going to be hugely different from Polaris. What AMD names ‘Polaris’ and ‘Vega’ is pretty much what nVidia calls ‘GP104’ and ‘GP100’. They’re the same basic architecture, implemented at different scale levels.”

    Still doesn't mean you have to draw conclusions already. That is just silly. Who knows, maybe AMD will do mid-cycle updates like Nvidia did with its Maxwell cards.

    ”They try to sell it as having better perf/watt than Polaris, but I think that is mostly because they factor in the wins from HBM2 as well.
    Vega will be released too shortly after Polaris to have a significantly more optimized architecture.
    Besides, AMD’s own roadmap showed Polaris as having the big ‘2.5x perf/watt’ gains (coming from new arch + 16nm/FinFET), the gains going from Polaris to Vega were pictured as much smaller.”

    I figure you will say that you are being ”realistic” here, but good grief, how negative you are about them. Literally every stone has to be turned. It's like AMD is cyanide to you.

    ”nVidia was nVidia. When GeForce FX bombed, they had a huge marketshare, and equally large mindshare. They could afford a misstep, because the generations before the FX had been huge successes.”

    Oh, I see. You are going to excuse that GeForce FX flop with such a justification. Noted. ATI also had a big marketshare there. The GeForce FX was just not competitive at all with the Radeon R300, to the point that its performance in shader-heavy games plummeted. Why Nvidia decided that its first PCIe cards had to be based on these chips, I will never know.

    ”AMD however does not have this luxury. They’ve been losing marketshare very quickly, and are at an all-time low. They have been trying to get back in the race for a few years now, and have failed so far.”

    And yet they are still here despite ”failing” all these years. Sounds like AMD could afford a (lot smaller) misstep as well.

    ”So Polaris wouldn’t be their first disappointment, the 3×0/Fury were also a disappointment, and before that, the 2×0 series didn’t stand much of a chance against Maxwell. So AMD has already been struggling for about 2 years at this point. And we’re not just talking performance, but AMD was also lacking other things, such as HDMI 2.0 and DX12_1. The further they get behind on features, the more development they have to put in to close the gap.”

    But there is no agenda involved, obviously, for being so negative and cynical, of course 🙂

    Question – if they are all so bad, why can't you just let them do their thing? I mean, you clearly expect them to be dead in the future, so with that in mind: why do you care? If AMD dies, you still have Nvidia and you still have cards coming around. Nothing changes for you personally. Well, except that you can't write about AMD any longer.

    • dealwithit says:

      You are still attacking Scali, while agreeing that Polaris isn't direct competition for Pascal. You have an agenda.

    • Scali says:

      Why disregard a huge plus point just to suit a narrative?

      Because price is an arbitrary metric, and has nothing to do with technology. It mostly has to do with the performance. The player with the most advanced technology will generally be able to set the price/performance scale.
      The remaining players will just have to try and undercut these prices.
      But there is a problem there.
      If vendor A and B both have a card that has about the same performance… But vendor A needs more transistors to get that performance than vendor B. Then that means that vendor A needs to produce larger, more expensive chips, and possibly also more advanced, more expensive coolers. However, because performance is the same, the consumer expects both products to be the same price. After all, they want the best bang for the buck, and won’t pay more just because vendor A’s architecture is not as efficient.
      So, vendor A has to settle for a smaller profit margin than vendor B.
      Where it gets dangerous is when vendor B goes for aggressive pricing, and goes for large volumes at small profit margins.
      This will push vendor A into a position where they may have to sell their cards with little or no profit at all. Which means they cannot get a return on investment. Over time, vendor A will be bled dry, because every new iteration they have less money to invest than vendor B, so they get more and more behind, and no opportunity to turn things around.
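      The margin-squeeze argument above can be put in rough numbers. Every figure below is invented purely for illustration – only the relationships between them matter:

      ```python
      # Hypothetical unit economics for the vendor A / vendor B argument above.
      # All numbers are made up for illustration.
      market_price = 199.0     # same performance, so consumers expect the same price

      unit_cost_a = 120.0      # vendor A: larger die, bigger cooler -> higher unit cost
      unit_cost_b = 80.0       # vendor B: more efficient architecture, smaller die

      margin_a = market_price - unit_cost_a   # vendor A earns less per card
      margin_b = market_price - unit_cost_b

      # If vendor B prices aggressively, vendor A is squeezed much harder:
      aggressive_price = 149.0
      squeezed_a = aggressive_price - unit_cost_a
      squeezed_b = aggressive_price - unit_cost_b

      print(margin_a, margin_b)      # vendor B starts out ahead per unit
      print(squeezed_a, squeezed_b)  # and stays ahead after the price cut
      ```

      Under these made-up costs, the same price cut wipes out a much larger fraction of vendor A's margin than vendor B's, which is the ”bled dry over iterations” dynamic described above.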

      But yes, if you only want to look at: “Hey, vendor A has great prices for us!”, fine.
      I look further ahead.

      Not talking about the Fury X in this context, but the RX 480, where supposedly a lower-clocked variant will be in the PS NEO. And let's not forget that current consoles are also based on AMD hardware, so that is, like I said, still a sizeable industry they have under their belt.

      Again, who cares?
      Whether or not a console uses (a derivative of) a certain chip doesn’t mean anything for a desktop PC card.
      AMD powers both the PS4 and the Xbox One, but ironically enough they’re being crushed by nVidia for about as long. Maxwell-based cards are the dominant cards on the desktop: http://store.steampowered.com/hwsurvey/videocard/
      They win most benchmarks. So who cares that AMD powers consoles? People see that nVidia cards perform best in PC games, and buy nVidia cards.

      Current AMD tech can scale all the way down to a few watts and with various configurations,

      And how is that relevant when I am considering whether to buy a Radeon 480?
      Because that was the discussion here: for what kind of users would the Radeon 480 be a compelling upgrade?

      Yes, I know. But it's silly to make that comparison when the supported tiers of Polaris are still unknown.

      How so? We were discussing whether Polaris would be interesting as an upgrade for people who currently own a GTX970.
      So people who already have a card with DX12_1. Regardless of Polaris’ exact supported tiers, there isn’t much room for improvement to begin with.

      you can be fairly sure its architecture will remain supported for a while (hence the forward-thinking architecture again)

      That is crystal-ball nonsense, and completely unsupported by empirical evidence. In fact, I had to toss out my last few Radeons for the simple reason that AMD didn’t release drivers for new versions of Windows. I think it’s a huge gamble to buy AMD for the long term.

      Stop thinking Polaris is a high-end chip when it isn't.

      That’s not the point. The point is, at 150W TDP, nVidia can apparently get a lot more processing power from their architecture. So better performance/watt. Not talking about chips, talking about architectures.

      Gee, you don't say, when it's a mid-end chip to begin with.

      Actually, that’s where the volume generally is. With a proper mid-range (there’s no end in the middle!) card at the right price, you can score huge successes. The GeForce 970 has been a fine example of that the past 2 years. It hits a nice sweet-spot in terms of price and performance, so it’s very popular. And it uses a ‘small’ Maxwell GM204 chip, not the big GM200, and with a modest TDP, so no super-advanced power regulation and cooling required, so it’s relatively cheap to produce. The GTX960 is slightly less popular, but still sells very well, and that one is built on the even smaller GM206. Products like that can rake in the big bucks.

      It means a lot, since they power 80 million of these things now. If that means nothing, then your whole Nvidia talk means nothing either. See how ludicrous that sounds? Case in point.

      It doesn’t mean what you think it means. You try to make the claim that because consoles use AMD chips, that would somehow be an advantage for PC videocards as well. I don’t see AMD cards winning any benchmarks, so there is no advantage to be seen.
      Just one of the many fairytales from the AMD-camp.
      Developing and optimizing games doesn’t work the way you think it does (or the way you try to portray it at least). You can’t just copy a console game to PC 1:1, even if it uses an x86 CPU and a DX11-compatible GPU.

      Oh, I see. You are going to excuse that GeForce FX flop with such a justification.

      I’m not excusing anything. The GeForce FX was a horrible product line. I’m just making the point that nVidia was in a completely different position at the time than AMD is now, so the fact that nVidia survived is no guarantee that AMD will.
      nVidia survived because:
      1) They were very successful up to the GeForce FX (ATi was mostly an OEM supplier up to that point, and was just starting to get taken seriously by gamers/enthusiasts. nVidia was the go-to brand for that market, everyone knew about the TNTs and GeForces. Radeons were still a new thing. Heck, most people didn’t even want to believe that the FX was actually that bad… sound familiar?)
      2) They made a good recovery with the GeForce 6-series

      AMD isn’t very successful right now. They’re bleeding cash through the CPU department, and they’ve lost a big chunk of marketshare to nVidia in recent years.

  26. Pingback: The damage that AMD marketing does | Scali's OpenBlog™

  27. Pingback: GeForce GTX1060: nVidia brings Pascal to the masses | Scali's OpenBlog™
