Richard Huddy comments on my blog

But perhaps he shouldn’t have… It all started a few days ago with a comment he made on this old blog post:

In the words of repi “Huddy gets it”. BTW repi is Johann Anderson of DICE, the lead graphics architect on BF3.

Since the post was written rather poorly (he didn’t even spell Johan Andersson’s name correctly), didn’t try to challenge any of the arguments I brought forward in the article, and was posted from a German IP, I didn’t approve it immediately, but decided to mail Huddy first, asking if it was really him.

And yes, he admitted to posting this, apologized for the spelling by saying it had been a long day, and explained that he was in Munich for work (Intel-related).

Anyway, I was wondering why he’d comment at all… and why he would point to Andersson rather than backing up his own statements and countering the issues I put forward. The exact text from Andersson that Huddy is referring to is the following:

I’ve been pushing for this for years in discussions with all the IHVs; to get lower and lower level control over the GPU resources, to get rid of the serial & intrinsic driver bottleneck, enable the GPU to setup work for itself as well as tear down both the logic CPU/GPU latency barrier in WDDM and the physical PCI-E latency barrier to enable true heterogeneous low-latency computing. This needs to be done through both proprietary and standard means over many years going forward.
I’m glad Huddy goes out and in public talks about it as well, he get’s it! And about time that an IHV talks about this.
This is the inevitable, and not too far, future and it will be the true paradigm shift on the PC that will see entire new SW ecosystems being built up with tools, middleware, engines and games themselves differentiating in a way not possible at all now.
– Will benefit consumers with more interesting experiences & cheaper hardware (more performance/buck).
– Will benefit developers by empowering unique creative & technical visions and with higher performance (more of everything).
– Will benefit hardware vendors with being able to focus on good core hardware instead of differentiating through software as well as finally releasing them and us from the shackles of the Microsoft 3 year OS release schedule where new driver/SW/HW functionality “may” get in.
This is something I’ve been thinking about and discussing with all parties (& some fellow gamedevs) on different levels & aspects of over a long period of time, should really write together a more proper blog post going into details soon. This is just a quick half-rant reply (sorry)
The best graphics driver is no graphics driver.

(Also, note how most people in that thread are about as skeptical about dropping DirectX as I was)

It could just be me, but I don’t see Andersson actually saying “Farewell to DirectX”, but rather wanting the API to be as low-level as possible. I assume that he, like most developers, is aware of why drivers are a necessary evil on the PC platform (as I argued in my original blog). And I think his sentence “The best graphics driver is no graphics driver” is just expressing an unattainable ideal. I agree that purely from a performance perspective, in theory it would be ideal. But in practice there are various problems with that approach, so some kind of hardware abstraction is required.

The original interview also features another skeptical developer. Crytek’s R&D technical director, Michael Glueck, says the following:

 It definitely makes sense to have a standardised, vendor-independent API as an abstraction layer over the hardware, but we would also prefer this API to be really thin and allow more low-level access to the hardware.

I think this is exactly what Andersson was trying to say, and it is the same stance that I have (and Microsoft for that matter, as the move from DX9 to DX10 was to make the API more low-level, more ‘thin’). Some kind of abstraction layer (and thus API) will be required, to abstract away differences between architectures, both from different vendors and from different generations of hardware. As I pointed out in the original blog, that is a very important role that DirectX fulfills. Although Andersson has not commented on the issue so far, I assume he is fully aware of the need for hardware abstraction on the PC platform.

Therefore I believe that Huddy took Andersson’s words one step too far, by saying the API should disappear altogether.  I think the reason why Andersson says Huddy “get’s it” (sic) is already explained by him: “I’m glad Huddy goes out and in public talks about it as well”. He is happy about the publicity, and hopes it will have an effect on the API designs of the future. Having Huddy go overboard may actually be a good thing in this respect: Aim too high, come up slightly short, and land in the place that you actually wanted to be in.

Anyway, it is sad that Huddy is trying to drag Andersson into this, and I can understand it if Andersson wants to refrain from commenting. After all, they are Huddy’s words in that interview, not Andersson’s. My criticism of that interview is aimed at Huddy’s words, not at Andersson. I respect Andersson as a developer, and as I said, I assume he knows why hardware abstraction is a necessary evil, just like most developers.

Huddy has not tried to make a single technical argument… so I’m not quite sure what he was trying to aim for. It looks like a fallacy to me: appeal to authority. Something like: “Andersson agrees with me, and he is an authority, therefore I must be right”. Well, fallacies don’t work. They don’t magically make all the issues I raised in my blog disappear.

So while I don’t think that Andersson would back Huddy all the way, even if he did, it wouldn’t change anything I wrote. Although I respect Andersson, I would respectfully disagree. I still think it’s a fool’s errand to drop the API altogether, and I have presented plenty of issues. Huddy and/or Andersson will still have to provide solutions to these issues before they would convince me.

The real Huddy shows his face

Anyway, so much for the technical aspect… Clearly Huddy provided no technical answers to anything I said, and Andersson has not commented either. As our email conversation continued, I pointed out the same thing I said above: only Andersson can answer whether he really backs Huddy, but the points I made would have to be addressed regardless.

Huddy made no such attempt, however. In fact, I am not sure he is even capable of doing so. He’s more of a manager type, and probably operates in a similar way to John Fruehe: he talks to developers, then paraphrases their thoughts to the press. His own understanding of the technical matters is limited at best, and on occasion he gets the details slightly wrong.

Instead, Huddy chose a more aggressive approach. Is that how it works, then? Bully tactics? Technically, bringing Andersson into it was already a form of that: instead of addressing the technical issues, he appealed to authority. He claimed that I lack integrity (because I did not approve his comment quickly enough for his liking), and said that I was planning to hide behind my anonymity. I don’t think ‘Scali’ is any more hiding behind anonymity than, say, ‘Lady Gaga’. I’ve been using the same alias for about 15 years now.

So I was curious why he is so bothered about some ‘anonymous’ blog in the first place. He said he found my blog post ‘pretty offensive’. Well, it’s all in the eye of the beholder, I suppose. I don’t see it as offensive. I just say his suggestions are nonsense, and then elaborate why, with various examples and technical background. Since when is pointing out that someone is spreading nonsense ‘pretty offensive’?

Quite rich too, coming from Huddy. This is someone who has been pretty tough on competing companies over the years (mostly nVidia), often accusing them of lies, underhanded tactics and whatnot (while in actuality it was often Huddy who was doing the more underhanded work). At least the episode regarding tessellation has been well-documented on this blog.

He wasn’t afraid of ‘collateral damage’ either. As I briefly mentioned in other posts, Huddy was heavily involved in the 3DMark05 soap opera between ATi and nVidia, regarding the benchmark’s use of DST/PCF technology while omitting 3Dc technology. Huddy did a lot of damage to FutureMark here, severely hurting their image as an independent supplier of benchmarks by implying that FutureMark was in bed with nVidia. Perhaps the worst part is that he DIDN’T mention that FutureMark worked with ATi developer relations just as hard as with nVidia. So ATi had their share of input in the choice of shaders and rendering techniques in the benchmark as well. In fact, the choice not to include 3Dc was made with the consent of ATi, for technical reasons. Not at all what Huddy was implying.

I don’t think FutureMark ever fully recovered from the damage that Huddy did to their image. Which is a shame, as FutureMark has always done a great job at making benchmarks that are as vendor-neutral as possible, and which generally paint a very good picture of differences in hardware performance. (Be aware, this may not always translate to actual gaming performance, for the simple reason that games are not as vendor-neutral as FutureMark’s benchmarks. If there is a weak spot in the hardware, FutureMark will uncover it. Games, however, will generally try to work around it, since their goal is to offer the best possible gaming experience, not to be as vendor-neutral as possible.)

To add insult to injury: ATi didn’t even need to do this in the first place. Despite the fact that their hardware did not implement DST/PCF yet, they still had the best-performing cards in 3DMark05. And although there were slight visual differences between the DST/PCF path and the alternative ‘vanilla DX9’ path that the ATi cards used, people generally judged ATi’s visuals to look as good or better.

As for integrity… well, aside from the above mudslinging/misinformation campaigns that Huddy has been involved in… Not too long ago, Huddy was telling us how excited we should be about DirectX 11, specifically pointing out how much more efficient DirectX has become. So is DirectX good or bad? Make up your mind already! I suppose it mostly depends on whichever stance benefits your company most at a given point. Just like how his initial stance was that tessellation was great, until nVidia introduced their line of DirectX 11 cards, which did tessellation a lot better than AMD’s.

I suppose Huddy is not used to not getting his way. Well, too bad, because I am not impressed by hollow rhetoric. Eventually he sent me this:

You can, of course, blog as much as you like.  But be careful not to defame me in a way that would give me a right to legal recourse.  If you do that you should expect me to act.  And, no, I’m not trying to scare you, just warning you that if you defame me then you’ll find yourself in court – even though you might try to hide.

Well, too bad again. I am not impressed by threats. Funny also that he insists he doesn’t see this as very important. Not important, yet important enough to threaten lawsuits over? Good luck with that, Mr. Richard Huddy.


6 Responses to Richard Huddy comments on my blog

    • Scali says:

      Yes, I already said you don’t have any arguments to back up your story of dropping the API. Thanks for proving that once again.

  1. This topic has always confused and fascinated me, but I’m not finding Mr Huddy’s comments helpful in understanding why you would want to remove the HAL, or why forcing developers to write hardware-specific renderers would benefit anyone.

    I do see how some developers could benefit from lower level access to the hardware and we do always want to remove bottlenecks or performance issues from our rendering. However after a few years of doing Xbox360 and PS3 I’ve recently been doing mobile phone games. That’s completely changed my perspective of PC development.

    Now I can see some really good arguments for greater abstraction to ease this upswell of new developers into PC development, which a lot regard as a black art known only to the luminaries hidden within the fortresses of major developers that have long honed their crafts 😉

    Looking back is insightful too. Doing the last couple of MotoGP games, there were plenty of times when I’d struggle with the PS3 API (not the Cell specifically), and having completed my task on it I’d turn to the Xbox360, look something up in the docs and then… just turn it on. Ok, I’d then have to check my shaders still worked, but a week-long task on PS3 was often less than an hour on the Xbox360. That kind of time difference adds up.

    Could I have done it “better” on the X360 if I’d spent a week? Maybe.
    Would it have made the game any “better”? No.
    Did spending a week on a task on the PS3 make its version better? No, it made it the same.

    The difference is that because you had to do everything on the PS3 right down to the most minute detail (we were writing our wrapper libs on the first MotoGP, so this was the up-front work), everything took a week or more just to get running. The Xbox360 always seemed to take a couple of hours – the difference was that afterwards, when you decided you needed to get something more (faster, smaller, better looking, etc.), you could go back and improve that single thing. You dug down into the details where it mattered and spent your time and effort only there.

    Spending effort only where it matters, instead of forcing everyone to do the heavy lifting across even the trivial parts, means that you can spend more time actually making the game better.

    So to end this rambling epic-comment, yes I can see that lower level access to hardware will benefit some, but if it comes at a price for the majority then I can certainly live without it.

    PS: I am aware that this repeats some of what you posted previously and above 🙂

    • Scali says:

      Thanks for your post, Andrew. My biggest concern is that a PC is both the ‘PS3’ and the ‘XBox’ at the same time: there are two major GPU vendors, each with their own hardware. And Intel is well on its way to becoming an important third player. Their integrated GPUs improve by leaps and bounds every generation, and are becoming good enough to play the bigger 3D games.

      So at this point in time, dropping the API would mean that every developer has to write three renderers instead of one (or two, if you want to support both D3D and OpenGL, like some companies are now doing, to include Mac/Linux versions of their games).
      It would also close the door to any radically new architectures from the existing vendors, or to new entrants to the market (let’s not forget, there are some GPU designers aiming mostly at the mobile market today, such as PowerVR. They might want to cross over to the PC platform again in the future, especially with Windows 8 running on ARM-based devices with these GPUs).

      And speaking of mobile devices… We see that solutions like Java and .NET are popular there. The result? Although Android has been an ARM-only platform so far, Intel can enter the market quite easily with their x86-based Medfield SoCs. They can run most Android apps out-of-the-box, because an estimated 75-80% of all Android apps are entirely written in Java.
      Java may be slightly less efficient than native code, but I think in general the ability to introduce newer and more efficient CPU architectures will outweigh being locked into a single instruction set forever.

      If we take x86… it effectively works much the same as Java anyway: the underlying CPU architecture has very little relation to the original x86 architecture. It translates all instructions at the decoding stage. Much like what Java does, really… Except that every x86 CPU has to carry this decoder in hardware (initially the overhead was quite heavy, but over time CPUs have become so fast that this level of abstraction has become an insignificant factor in performance). And all software has to be rewritten to make use of any new extensions to the instruction set. With Java, you just have to update the JVM to take advantage of new extensions, and all applications will automatically make use of them. Which is pretty much the same situation as D3D/OpenGL today.

      Especially with GPUs I don’t see any downside to at least keeping the instruction set virtualized. You have to upload the shader code to the GPU anyway. It’s not that much extra overhead to compile these shaders from virtualized bytecode to optimized native code when uploading them the first time. If I’m not mistaken, even nVidia chose to implement CUDA that way, despite CUDA being designed specifically for nVidia GPUs. If even nVidia doesn’t go to the bare metal, why should we? Mr. Huddy?
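      The idea in that last paragraph can be illustrated with a toy sketch (not real driver code; all names and the miniature bytecode format here are invented for illustration). A vendor-neutral “shader” bytecode is translated once, at upload time, into a target-specific form, which is then reused on every draw, which is why the translation cost is negligible in practice:

      ```python
      # Toy model of a virtualized shader ISA: vendor-neutral bytecode is
      # compiled to a "native" form once at upload time, then executed many
      # times. The bytecode format and all names are made up for this sketch.

      # Vendor-neutral bytecode: (opcode, operand) pairs for a tiny stack machine
      # that computes 2.0 * input[0].
      BYTECODE = [
          ("push_const", 2.0),
          ("push_input", 0),   # load input register 0
          ("mul", None),       # multiply the top two stack values
      ]

      def compile_to_native(bytecode):
          """One-time translation: bytecode -> list of Python closures
          (standing in for the GPU's actual native instructions)."""
          native = []
          for op, arg in bytecode:
              if op == "push_const":
                  native.append(lambda stack, inputs, c=arg: stack.append(c))
              elif op == "push_input":
                  native.append(lambda stack, inputs, i=arg: stack.append(inputs[i]))
              elif op == "mul":
                  def mul(stack, inputs):
                      b, a = stack.pop(), stack.pop()
                      stack.append(a * b)
                  native.append(mul)
              else:
                  raise ValueError(f"unknown opcode {op!r}")
          return native

      def run(native, inputs):
          """Per-draw execution: no translation cost, just run the native form."""
          stack = []
          for instr in native:
              instr(stack, inputs)
          return stack[-1]

      shader = compile_to_native(BYTECODE)   # paid once, at 'upload' time
      print(run(shader, [3.0]))              # 2.0 * 3.0 -> 6.0
      print(run(shader, [5.0]))              # reused without recompiling -> 10.0
      ```

      A new GPU architecture only needs a new `compile_to_native` backend; every existing “shader” keeps working unchanged, which is exactly the argument for keeping the instruction set virtualized.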

  2. Pingback: Richard Huddy responds again | Scali's blog

  3. Pingback: SteamOS and Mantle | Scali's OpenBlog™
