Catalyst Hotfix 11.1a: AMD admits defeat in tessellation

So, AMD has released a new Catalyst hotfix driver:

From the AMD Catalyst 11.1a hotfix release notes:

· The Catalyst Control Center includes an early prototype of some new tessellation controls. Our goal is to give users full control over the tessellation levels used in applications. The default selection “AMD Optimized” setting allows AMD, on a per application basis, to set the best level of tessellation. The intention is to help users get the maximum visual benefit of Tessellation, while minimizing the impact on performance. Currently no applications have been profiled, so the “AMD Optimized” setting will be non-operational until further notice.

· The “Use Application Settings” option gives applications full control over the Tessellation level. Users can also manually set the maximum tessellation level used by applications with the slider control.

· The long term goal for the “AMD Optimized” setting is to use the Catalyst Application Profile mechanism to control the AMD recommended level of tessellation on a per application basis. AMD’s intention is to set the tessellation level such that we will not be reducing image quality in any way that would negatively impact the gaming experience.

It’s official: AMD is going for the driver hack to alleviate their tessellation deficiencies. Basically what they’ve said above is that the driver can artificially limit the tessellation factors. This classifies as a driver cheat for the simple reason that it breaks the DirectX 11 and OpenGL 4.0 APIs. Namely, the application explicitly passes tessellation factors to the tessellation units via the shaders. What AMD is doing here is short-changing the application. The application shader will say: “I want tessellation factor 15”, and the driver says “Sure… but I’m just going to use tessellation factor 5”.

The developer already has full control over the tessellation factors through the API. So technically there’s no need for this slider. The developer can build it into the game, just like any other quality slider commonly found in games (texture detail, shader detail, shadowmap size etc). And historically, developers have been quite good at giving games plenty of controls so their games can be adjusted to perform very well on a wide variety of hardware (even if you disregard AMD hardware for a moment, there is quite a bit of performance disparity between nVidia’s slowest DX11 card and their fastest, so having control over tessellation detail is desirable even in an nVidia-only world). But now AMD allows the user (or effectively AMD themselves, when using the Optimized setting) to override any choices the developer made, and use lower detail.
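To make the mechanism concrete, here is a toy sketch (in Python, with invented names; this is obviously a model of the behaviour described above, not AMD’s actual driver code) of what such a limiter amounts to:

```python
# Hypothetical sketch of a driver-side tessellation limiter. The application
# (via its hull shader) requests a tessellation factor; the driver silently
# caps it at the slider value, contrary to what the D3D11/OpenGL spec implies.

def driver_clamp(app_factor, driver_limit):
    """The app asks for app_factor; the driver returns at most driver_limit."""
    return min(app_factor, driver_limit)

# The shader says "I want tessellation factor 15"...
requested = 15.0
# ...but with the slider set to 5x, the hardware ends up tessellating at 5.
print(driver_clamp(requested, 5.0))   # 5.0 -- not what the API specified
```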

It’s pretty much equivalent to reducing texture quality by silently using 16-bit textures instead of 32-bit ones, for example… or by using lower resolution textures than what the application is actually using (eg mipmap biasing). So basically it’s driver cheating. Trading in image quality for performance.
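To illustrate the mipmap-biasing analogy with a toy sketch (the numbers are mine, purely for illustration): each +1 of LOD bias makes the sampler pick a mip level with half the resolution per axis, so a silent driver-imposed bias means the game renders with lower-resolution textures than it asked for.

```python
# Toy sketch of mipmap biasing. A bias of +1 pushes sampling one mip level
# down, i.e. half the texture resolution per axis -- detail lost silently.

def biased_mip(base_level, lod_bias, max_level=10):
    # Clamp the biased level into the valid mip range.
    return min(max_level, max(0, base_level + lod_bias))

def mip_resolution(base_size, level):
    # Each mip level halves the resolution per axis.
    return max(1, base_size >> level)

# A 1024x1024 texture sampled at mip 2 normally uses 256x256 texels...
print(mip_resolution(1024, biased_mip(2, 0)))   # 256
# ...but with a driver-imposed +1 bias it silently drops to 128x128.
print(mip_resolution(1024, biased_mip(2, 1)))   # 128
```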

I can only hope that AMD won’t enable this by default (like their AI feature which does some of the above texture reduction tricks), since it will make any kind of comparison with competing cards even less apples-to-apples than they already were.

But I’m glad that AMD has now admitted that their hardware needs this option to lower tessellation quality, because they can’t do tessellation as well as nVidia’s hardware. So now you can pretend that you can run games at full tessellation detail, just like nVidia cards. Except unlike nVidia cards, your AMD card won’t actually render at full detail anyway, it cheats.

Update: I see that certain people refer to this as an ‘optimization’. It is NOT an optimization. To quote Wikipedia:

In computer science, program optimization or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. In general, a computer program may be optimized so that it executes more rapidly, or is capable of operating with less memory storage or other resources, or draw less power.

Making something work more efficiently or use fewer resources assumes that this ‘something’ is the same. That is, you have the same input and the same output for a given function. What AMD is doing does NOT class as an optimization because given the same input, you get less output. They are cheating by reducing the workload and producing results which are incorrect. Results which do not match the specification of DirectX and OpenGL.

Optimizing is about producing better, more efficient code. AMD doesn’t do anything to improve the code at all.
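A toy example of mine makes the distinction obvious: an optimization produces the identical output for the same input with less work, while reducing the workload changes the output.

```python
# Illustration of "optimization" vs "cheating". Same input, same output,
# fewer operations = optimization. Same input, different output = cheat.

def sum_slow(n):
    return sum(range(1, n + 1))          # straightforward loop

def sum_optimized(n):
    return n * (n + 1) // 2              # closed form: same result, far fewer ops

def sum_cheat(n):
    return sum(range(1, n // 2 + 1))     # "faster" only because it does half the work

n = 1000
assert sum_optimized(n) == sum_slow(n)   # genuine optimization: identical result
assert sum_cheat(n) != sum_slow(n)       # cheat: the result is simply wrong
```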

Besides, you should ask yourself: why did AMD introduce this setting, while nVidia apparently doesn’t need it? Without any kinds of cheats or hacks nVidia’s hardware ploughs through the tessellation workload. If all these tessellation games and benchmarks were so suboptimal, wouldn’t nVidia need a cheat as well?

Another side-effect of this feature is that AMD can no longer hide behind their excuse of ‘over-tessellation’. Namely, you can put the limit on any value you like. You will find that even in those ‘over-tessellated’ scenarios, setting the limit to 16x instead of the regular 64x does not do much, if anything, for performance. This indicates that although very high tessellation factors may sometimes occur in these scenarios, they are not just tessellating the heck out of the scene, but rather working in a nicely adaptive fashion. It’s much like adaptive anisotropic filtering… you can set the maximum to 16x, but it will barely be slower than 8x, or perhaps even 4x, because the adaptive algorithm will rarely use values higher than 8x under normal circumstances. In this case, the tessellation factor doesn’t go over 16x often enough to impact performance. AMD’s problem is more structural than these sporadic high tessellation factors (as I already said earlier).
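A toy sketch of such an adaptive scheme (the falloff curve and the distances are invented for illustration, not taken from any real engine) shows why the clamp rarely bites:

```python
# Distance-adaptive tessellation: the factor falls off with distance, so
# even with a 64x ceiling, only the nearest patches ever exceed 16x.

def adaptive_factor(distance, max_factor=64.0, falloff=10.0):
    # Hypothetical falloff curve: full detail up close, tapering off quickly.
    return min(max_factor, max(1.0, max_factor * falloff / (falloff + distance)))

# Patch distances in a hypothetical scene (most geometry is far away).
distances = [2, 10, 40, 80, 150, 300, 600, 1000]
factors = [adaptive_factor(d) for d in distances]
above_16 = [f for f in factors if f > 16.0]

# Only the two nearest patches exceed 16x, so clamping at 16x barely
# changes the total workload -- matching the observation above.
print(factors, len(above_16))
```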

This entry was posted in Direct3D, Hardware news, OpenGL, Software development, Software news.

19 Responses to Catalyst Hotfix 11.1a: AMD admits defeat in tessellation

  1. Michael says:

    I’ve frequented this blog often, as I’ve found your contrary disposition both entertaining and informative. You’re sorely mistaken on this one, though.

    Tessellation was and, as far as the point is accurate, is a feature implemented to substantially improve image quality across a range of hardware without limiting scalability to high-end, upper-echelon hardware. One of the reasons it has such potential as a revolutionary function derives from this per-user flexibility of adjustment; whereas previous generations required mid-line GPUs to entirely turn off & dial down IQ settings as a requisite of enjoying tolerable framerates, appropriating different tessellation factors accommodates the breadth of DX11 hardware without crippling ranges of them by default.

    It’s certainly appropriate to allow the 570, 580, 6970, & 6950 to render Metro or a presumably tessellation enabled Crysis 2 without providing a variable slider for tessellation factors. But why on Earth would you protest a customization option – for a feature touted and, more importantly, DESIGNED to provide such scalability in accommodating lower performance GPUs? Would you rather them just turn off tessellation? Why, for God’s sake, shouldn’t provision for tessellation “intensity” be accessible through a driver control panel?

    What’s lost in your summary, in my opinion & with due respect, is that the benefactors are not higher-tier Radeons making up a small fragment of proportional users. AMD isn’t conceding a fundamental inferiority intrinsic to its architecture. The tessellation customization is simply unnecessary for high-end users, which is why it’s not directed as such toward that hardware. Instead, it’s the plethora of middle-end gamers who will (at least potentially, as DX11 becomes the standard) enjoy substantial image quality improvement resulting from the sweeping customization accessibility being integrated here. Regardless of what tessellation factor suits the individual best, they’ll have an option that doesn’t degrade into checking the “tessellation off” button. Any advance that puts such decisions in the hands of individuals is fantastic, and coinciding with the many recent sliders & check-boxes AMD has implemented into the previously utterly useless Catalyst, I believe the occasionally daft suits over there are finally listening & catering to their consumer base. One last point: aside from the GF110, I don’t see how GeForce cards wouldn’t stand to benefit from a similar option in Forceware? Am I missing something?

    In any case, I do admire your obvious technical understanding & respect your divergent position, so I’ll say cheers to you & keep writing. I find your insights most informative.


    • Scali says:

      Well Michael, allow me to explain my position more clearly then:
      I am not against the user having control over tessellation. I just don’t think that the driver is the place for this. It should be built into the game. As I already pointed out, the D3D and OpenGL APIs already give the developer full control over tessellation factors, so the developer should put a detail control in (just like all those other detail controls that you find in pretty much every game).

      To me as a developer, all these driver controls are nasty anyway. It doesn’t matter what code I write, a magic driver is going to come along and just ignore my commands anyway, because the driver writers at AMD, nVidia or whatever other company think they know better anyway. I don’t have to bother trying to keep to D3D or OpenGL spec anyway, because the drivers don’t adhere to it. They just make up their own minds.

      Basically I’m not saying that AMD should take out the control at this point. The reason they put it in is because most games currently do NOT have such a control (most of them just have an on/off switch). This pretty much forces AMD to implement this driver hack, and in a way nVidia should do the same if they want their lower end DX11 hardware to use tessellation.
      I’m saying that it should never have been in the driver, it should have been in the games. But since it isn’t in the games, I understand why AMD wants it. But that was not the real point of my blog. My point was that I’ve always said that AMD’s tessellation hardware is underpowered, while most people believe it’s some kind of conspiracy with ‘nVidia-optimized’ games and benchmarks. This driver feature means that AMD admits it’s not a question of optimization, since AMD has not optimized a thing. They just clamp the tessellation factor to reduce workload. They basically admit that they cannot handle higher workloads period. As I’ve always said.

      Aside from that I think AMD’s implementation is a bit silly. Instead of just clamping the maximum, they should have applied a scale factor. That way you get much more control over tessellation performance, and it also gets less of a detail level disparity when the limiter suddenly kicks in. If you’re gonna put in a hack, at least put in a smart and efficient hack…

      I also don’t agree on your remark about high-end hardware. Even AMD’s highest-end GPUs are still a LOT slower at tessellation (even though most games currently may not push them that far). If you take a look at Endless City for example, AMD’s fastest cards are reduced to mainstream performance, compared to nVidia’s offerings. A GTX580 can reach double the framerate of a 6970.

    • Scali says:

      Well, I’m a bit disappointed that you haven’t bothered to reply anymore, Michael.
      At any rate, I’d like to add that the thing I have issue with is not the control itself (again, nowhere in this blog do I say anything about removing it; I merely say that they should not enable the “AMD optimized” setting by default).
      The thing is that AMD has been spreading FUD about tessellation for months, circulating claims that nVidia was buying off developers and specifically making the code unoptimized for AMD hardware and all that. They also claimed that they had ‘optimizations’ for games like HAWX 2 which they claimed would benefit both AMD and nVidia. When the developer didn’t accept AMD’s ‘optimizations’, AMD claimed they could just put them in the driver.

      I have said from the start that those ‘optimizations’ would just be some kind of cheat, since there is nothing nasty about games like HAWX 2, other than the tessellation workload being too high for AMD’s current hardware (which I see nothing wrong with, just forces AMD to put a better tessellator in their future products. Both AMD and nVidia have been on the receiving end of such situations various times over the years… eg GeForce FX’s poor SM2.0 performance, or Radeon 2900’s poor DX10 performance… and recently nVidia’s lack of DX11 hardware altogether).
      These drivers just prove that I was right all along, and AMD was just launching a dirty FUD campaign against nVidia, trying to cover up for their poor tessellation hardware. Now they’re covering up for it with overt driver cheating. At least it’s open now, I hope the underhanded tactics will also stop.

      Don’t get me wrong, both nVidia and AMD have played it dirty over the years… but I think AMD’s marketing is just downright nasty. nVidia has shown a bit more class in recent years. When they didn’t have DX11 hardware, they merely said that they didn’t find DX11 all that compelling (which in a way is true, if you don’t sell DX11 hardware, it is not interesting to you at all… probably not how people interpreted it though, but hey).
      They didn’t launch a smear campaign against AMD trying to claim that AMD was bribing developers to make games unfairly biased to DX11 (where AMD was doing pretty much the same as what nVidia is doing: partnering with developers to get your latest technology supported in as many games as possible… AMD helped push a lot of DX11 titles, just like nVidia is pushing for PhysX and their DX11 strengths: compute shaders and tessellation).

      • David Luke says:

        Great article and debate – but the AMD optimised setting seems useless to me, as it relies on AMD regularly updating profiles for all the applications produced by all vendors; this approach could end up being a big pain to administer (a bit like Crossfire profiles).

        However the hard-slider bar with 2x,4x etc. is a great idea.

        This way, you can clamp the tessellation level to suit your particular card’s hardware capabilities, and just leave it there.

        The trick is to find the max geometry setting possible that doesn’t overload the whole pipeline and damage framerates.

        I have an HD5870 in my desktop PC, and that can just about handle 2X without overloading. 4X is just a bit too much.

        My HD6970 in my gaming PC can go up to 6X without crashing framerates. The next generation 7xxx will presumably go even higher.

        But this control is very welcome, since Crysis 2 comes out next month and, knowing Crytek, they won’t have a nice tessellation slider in their own code, and I don’t want to get 12 FPS when there is more than one alien on the screen.

      • Scali says:

        Well yes, the fact that AMD currently does not even have any ‘AMD optimized’ settings for any games already demonstrates how much of a fool’s errand this is going to be.

        As for the slider… as I said, clamping alone isn’t that good. It would have been much better if they would scale down the tessellation factors as well. That way you reduce tessellation everywhere in the scene, rather than just limiting the few parts that use high tessellation factors (although AMD wanted us to believe otherwise, pretty much all games use an adaptive tessellation scheme, so they don’t just throw large tessellation factors around. Limiting only the large factors doesn’t do all that much for performance… benchmarks such as Unigine Heaven may go over 8x every now and then, but it is so sporadic, that limiting to 8x gives virtually no performance increase. It’s almost as if AMD doesn’t understand their own performance problem). It would be a much better way to control detail over the scene. It would also be better in terms of performance. Currently the limiter is pretty useless unless you set it EXTREMELY tight (as you say, 2x to 4x on most, 6x on their best offerings)… A scaling factor would reduce tessellation over the entire scene, giving performance increases everywhere, not just the clamped areas.
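        To put some numbers on the clamp-versus-scale point, here is a quick sketch (the factors are made up, and triangles ≈ factor² is only a rough workload proxy):

```python
# Sketch of clamp-vs-scale on a hypothetical adaptive scene. A patch with
# tessellation factor f produces roughly f*f triangles, so we use that as
# a crude workload proxy. All numbers are invented for illustration.

def work(factors):
    return sum(f * f for f in factors)

# Mostly modest factors, a few high ones -- an adaptive distribution.
scene = [2, 3, 3, 4, 5, 6, 8, 8, 12, 24, 48]

clamped = [min(f, 16) for f in scene]       # limiter: only touches the top two patches
scaled  = [max(1, f * 0.5) for f in scene]  # 0.5x scale: reduces detail everywhere

print(work(scene), work(clamped), work(scaled))
# The clamp leaves most of the scene untouched, while the scale lowers the
# workload across the whole scene -- which is why a scale factor would have
# been the smarter hack.
```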

        As for Crysis… the original CryEngine2 had very good control over detail settings. They had 4 levels of detail for a large set of graphics detail parameters (Low, Medium, High and Very High).

        If they add 4 levels of tessellation to Crysis 2, that’s probably good enough, but I wouldn’t put it past them to put a slider in. Crysis was one of the most tweakable games ever in terms of detail/performance.

  2. David Luke says:

    I thought I would just do a little testing to see what these controls in CCC do, and my results are quite simple here: nothing.

    I tested Cat 11.2 on both an HD6970 and an HD5870 and found that no difference was made to either Unigine or Stone Giant benchmarks by any adjustment of these settings.

    Perhaps I should roll back to the 11.1a hotfix and test again?

    None of these CCC tessellation settings overrides the application’s own tessellation settings in either of the apps tested.

    This is obviously what the AMD term “experimental” means as mentioned in the release notes a while back.

    You mentioned that AMD misappropriate the word “optimized” when they really mean “sacrificed”. It looks like “experimental” in this context means “non-functional”.

  3. Antzrhere says:

    Scali – Firstly, don’t quote Wikipedia for word/phrase definitions. Wikipedia is not a credible source and cannot be referenced in many fields of expertise, as it’s often inaccurate, contextually biased and frequently just plain wrong.

    So this is not an optimisation? I think you’re failing to see the context in which this optimisation is defined. Sure, it is not improving the tessellation efficiency – thus not optimising DirectX/OpenGL tessellation calculations in itself, but improving overall FLOPs efficiency (the setting simply states ‘AMD optimized’). Because Radeons use a dedicated hardware tessellation unit (unlike Nvidia’s more clever scalable tessellation), if tessellation is set too high the tessellation unit becomes a bottleneck to the rest of the shader units, meaning some of these are not fully utilised. Using the ‘AMD optimized’ setting ensures the tessellation unit is balanced with the shader unit workload, thus optimised for your particular card. By tweaking the tessellation factor you can optimise overall computing performance in terms of floating point operations. Just because you don’t get exactly the same end result by tweaking tessellation does not mean it is not an optimisation; efficiency IS improved, just completing a slightly different task.

    Of course, this is no fix or excuse for AMD’s lacklustre tessellation performance, however because games may not offer the exact tessellation sweet-spot setting at any given resolution this may be useful for gaming enthusiasts. Whether or not it is ‘fair’ to enable this by default depends on why you have an interest in the graphics card to begin with – if it is to simply make your games look & perform better then it is fair, but if you’re interested in relative benchmarks then you’re likely to see this as a cheap trick.

    Secondly, don’t try to provoke people into replying to a follow-up message. Sometimes people only want to say something once – maybe Michael was honestly disappointed in your message to a point where he didn’t want to reply? Consider these things and try to remain neutral.

    • Scali says:

      Oh please.. not the old Wikipedia fallacy again. Regardless of whether or not Wiki is a credible source in general, this particular definition appears to be perfectly valid. Next time, try to attack the actual message, not the source. Attacking the source is a sign of weakness.

      As for context… I’ll give you the context: AMD has been talking about optimizing tessellation in their drivers for months. For example, with the whole HAWX 2 pre-release benchmark thing. AMD clearly stated that they wanted to help the developers to optimize the tessellation code in the game:

      It has come to our attention that you may have received an early build of a benchmark based on the upcoming Ubisoft title H.A.W.X. 2. I’m sure you are fully aware that the timing of this benchmark is not coincidental and is an attempt by our competitor to negatively influence your reviews of the AMD Radeon HD 6800 series products. We suggest you do not use this benchmark at present as it has known issues with its implementation of DirectX 11 tessellation and does not serve as a useful indicator of performance for the AMD Radeon HD 6800 series. A quick comparison of the performance data in H.A.W.X. 2, with tessellation on, and that of other games/benchmarks will demonstrate how unrepresentative H.A.W.X. 2 performance is of real world performance. AMD has demonstrated to Ubisoft tessellation performance improvements that benefit all GPUs, but the developer has chosen not to implement them in the preview benchmark. For that reason, we are working on a driver-based solution in time for the final release of the game that improves performance without sacrificing image quality. In the meantime we recommend you hold off using the benchmark as it will not provide a useful measure of performance relative to other DirectX 11 games using tessellation.

      So all this time AMD has been accusing developers/nVidia of writing tessellation code with ‘known issues’, and AMD has been promising us fixes for this tessellation code. This slider option has NOTHING to do with what AMD has been saying all along. THAT is the context in which I have written this blog, the ‘optimizations’ that AMD has been promising us for many months now.
      Regardless, I still don’t agree with your interpretation of ‘AMD optimized’. It doesn’t improve efficiency… in fact, you cannot make any comparisons of efficiency or any other kind of metric, because the workloads are different. My interpretation of ‘AMD optimized’ is that the driver will select the ‘most optimal’ tessellation clamping factor for a given game (for your particular Radeon card), where ‘most optimal’ should probably be interpreted as something like ‘best tradeoff between image quality and performance’ (which is not necessarily the most optimal setting in terms of absolute performance).

      Also, I never said anything about ‘fairness’. I simply pointed out that the tessellation factor is not to be meddled with by the driver. The API specs are very clear about that. The API provides the developer with direct control over the tessellation factor, and you cannot have two captains on one ship.

      As for Michael not replying… The way I see it, he clearly misinterpreted the point of my article. He also posted the (perhaps rhetorical) question: “Am I missing something?” Well, apparently he was, and I tried to answer that.
      After I replied to try and make him see the article in the light it was meant, yes, I expected him to reply, especially since he posted such harsh criticism the first time (“You’re sorely mistaken”). I would have liked to know whether his opinion has changed.

      I’m as neutral as they come.

    • Scali says:

      So, you’re just like Michael then. You think you can just come here and badmouth me, and then, when I give you the courtesy of approving your comment rather than trashing it right away, and of answering it in a polite way, you don’t bother to reply. (That’s right: comments don’t appear on this page unless I explicitly approve them. You accuse me of not being neutral… if I didn’t want to be neutral, the easiest option would be to trash any comments I don’t fully agree with, but I don’t do that. I allow people to disagree with me and have a discussion… which neither of you tried.)
      Perhaps I should just trash such posts in the future, rather than bothering to respond to them. You’re not worth a reply.

      • David says:

        Or they just don’t care enough to check back…

      • Scali says:

        Then why did they bother to write such a lengthy and vitriol-filled reply in the first place? They seemed to care back then.
        Nope, not plausible.
        The real reason is that they thought they were smart, and they could show me up, so they wrote an arrogant reply.
        But when it turns out they’re not as smart as they think, they will just hide behind their anonymity and disappear like a thief in the night. It takes a big man to admit that you are wrong.

  4. Bob Jones says:

    Why not write about how game companies sell out to graphics companies, going out of their way to write slow code in order to make one card look better than the other?

    They play end users like a fiddle.

    Ever wonder how it was possible that ATI and nVidia cards would beat each other every 6 months for ten years? It’s called collusion. They got slapped on the hands for it in court, but if the profits exceed the fine it will continue. It’s programmed in; insurance companies have been doing it for years.

    It’s about money and manipulation; they are both sellouts, but of the two I would say nVidia has been the worse over the years.

    • Scali says:

      Well, why don’t YOU write about that then? I don’t quite share your ideas on this matter. In fact, none of your ideas fit with the topic of this blog at all. AMD has supported tessellation for years. So how is it that firstly nVidia beats them, and secondly, it’s been well over 6 months, and AMD is still way behind nVidia?
      In fact, many games and benchmarks using tessellation were originally developed with and for AMD cards. And that code runs way faster on nVidia cards, although it SHOULD have been written to make AMD cards look good, and promote DX11 at a time when nVidia did not have any support at all.

      • Promilus says:

        You can affect LoD through the drivers on nVidia hardware. The same applies to different levels of filtering algorithms and so on. The tess factor limiter is just another slider which gives you the opportunity to adjust between best quality and best performance. They never denied that geometry in their GPUs is slower than Fermi. 4 GPCs + 16 PolyMorph engines in GF110 vs 2 rasterizers, 2 geometry and vertex engines in Cayman – it’s quite obvious.

      • Scali says:

        Yes, but as I already said, I don’t approve of these driver hacks any more than I approve of the tessellation hack. LOD and filtering algorithms can be specified by the D3D/OpenGL API (and are usually controllable from the game itself), so I don’t see why a driver should overrule the choices that a developer made.

        And AMD never admitted that their tessellation hardware is slower than nVidia’s. They always sought excuses such as the polygons being too small (which is a lie, as I have demonstrated elsewhere on my blog), or nVidia bribing the developers to write unfavourable code for AMD (which cannot be proven, but is highly suspect, especially since even AMD-developed software has about the same performance margin on nVidia hardware). That is the same as denial in my book.
        And no, to fanboys, NOTHING is obvious.

  5. pepi says:

    Haters gonna hate. Who cares about this anyway? I’m glad I have this option and you ain’t gonna change a fuck abt it with your hate reviews… Or maybe I’m missing something?

    • Scali says:

      Apparently you are missing something yes. I never said that AMD should remove the option. I just explained why AMD needed to add this option. Hatred? Hardly. Bringing out an objective view on the situation.
      The only hater I see here is you.

      • pepi says:

        Your objective review is really filled with hate towards AMD adding this option. You’re really bashing on it, and I almost instantly figured you’re an nVidia fanboy in my eyes. Sadly I came across a forum – from which you were expelled – in which people were addressing you as an nVidia fanboy.

        Anyway, I can’t seem to figure out your problem with such a small update to the drivers, and why AMD/ATI has “lost” anything. Was there any race of some kind going on here, except the race for the most sales between nVidia and AMD?

      • Scali says:

        Is this an nVidia fanboy?
        Just because fanboys always see other people as fanboys as well doesn’t mean it’s true.
        I’m independent and there is plenty of proof of that if you bother to look.

        For the rest, I am more bothered why you are attacking me personally (and not the article), and why you are asking me the wrong questions, looking for some kind of apologetic comment?
        If you want the answer, read my earlier blog posts on the tessellation situation between nVidia and AMD.
