Now before I start, don’t get the impression that AMD’s competitors (mainly Intel and nVidia) are saints, or that this in any way excuses them. They have their share of dubious activities as well. However, those are already well-documented and discussed in many places on the web, so I’d like to concentrate on AMD for now.
The issue is basically that AMD seems to think we’re all idiots: they keep trying to fool us and trick us. Obviously that is the job of PR, to a certain extent. In this case, however, it has taken on pretty pathetic forms, and they’re basically insulting their audience.
Let’s start with OpenCL. AMD has been trying to paint nVidia as the evil proprietary Cuda/PhysX devil. An obvious attempt to exploit the hyped ‘open’ term in the name OpenCL (apparently they were so committed to open standards that they originally had their own proprietary Stream/Brook+/CAL infrastructure, and it took Apple to propose the OpenCL standard, rather than AMD proposing one itself. Pot, kettle, black, AMD!). So far so good. However, AMD, YOU DON’T HAVE IT! That’s right: you’re pushing OpenCL as an alternative to Cuda/PhysX GPU acceleration, but you don’t actually have any drivers or an SDK or anything! What’s worse, nVidia, the company you’re trying to discredit for not using open standards, DOES!
That’s right, here it is: http://developer.nvidia.com/object/get-opencl.html. Free for anyone to download: a fully OpenCL 1.0-compliant driver and SDK. So if someone were to release an OpenCL application today, it would actually run on nVidia hardware, but not on AMD’s, the company that’s being so loud about OpenCL. Do they think we’re too stupid to figure out that AMD doesn’t actually HAVE OpenCL and is selling vapourware? We aren’t! AMD only submitted their GPU-based OpenCL drivers to Khronos a little over a week ago: http://www.hpcwire.com/topic/developertools/AMD-Submits-OpenCL-for-GPU-to-Review-by-Standards-Body-60000452.html. Stop insulting nVidia, stop insulting our intelligence, and just get a working product out the door (it could still be months off at this point; nVidia’s OpenCL drivers and SDK passed conformance tests months before their public release, although they were available as a beta to registered developers almost immediately after testing).
“AMD’s upcoming next generation ATI Radeon family of DirectX 11 enabled graphics processors are expected to be the first to support accelerated processing on the GPU through DirectCompute.”
Nope, nVidia beat you there as well. The 190 release of nVidia’s GeForce drivers enabled DirectCompute back in July, on all 8800-series and newer cards (nVidia’s OpenCL SDK is actually not just an OpenCL SDK but a full GPU Computing SDK; it also contains C for Cuda and DirectCompute). By the way, even though AMD’s HD 5800 series now supports DirectCompute, there is still no sign of support on their existing DX10 products.
And since AMD doesn’t have PhysX, or some alternative, they teamed up with Intel’s Havok: http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_543~126548,00.html. Although, only a few months earlier they said that GPU physics was dead, because Intel bought it: http://www.bit-tech.net/custompc/news/601677/gpu-physics-dead-says-amd.html. And look, they even said that Ageia wouldn’t be interested in working with a GPU company. Well, that didn’t stop nVidia: they just bought Ageia and added GPU support in only a few months, which is why AMD is now the odd man out.
However, apparently Intel and AMD were never meant to be, as nothing much has been heard of Havok since. Then AMD announced that they are now partnering with the open source Bullet project: http://www.brightsideofnews.com/news/2009/9/17/amd-supports-opencl-bullet3b-are-havok-and-physx-in-trouble.aspx.
Or well, ‘partnering’? Just like they ‘partnered’ with Intel/Havok before? Paul Marini Jr. of Hi Tech Legion decided to interview Bullet’s lead developer and AMD about the situation. First up, Havok: http://www.hitechlegion.com/our-news/1389-sheer-gaming-horse-power-or-the-total-package-part-2?start=4
“Ok, I thought I would get my head ripped off if I were in the room Dave Hoff seemed to become very upset with that question and what I got out of the answer was, ATI doesn’t have, and never needed, a license and sees no need to pursue a relationship with Havok for physics, due to Open CL and Direct Compute. He also mentioned that Havok is working with the Khronos Group on OpenCL.”
That’s not what it said in the press release, is it? And what about this statement then: http://www.rage3d.com/previews/video/ati_hd5870_performance_preview/index.php?p=4
“One of the first things I did was meet with Havok, introduce them to the amazing engineering team I have here and explain that we could implement some of their code in OpenCL thereby enabling them to achieve acceleration on not just ours, but also Nvidia’s GPUs. So we ventured into a quick little project to gauge the technical feasibility as well as if it was a good climate and team dynamics for our organizations to collaborate.
While we learned the answer to both, I can only report on the technical feasibility since we demonstrated Havok Cloth at GDC in March running in OpenCL on our Radeon HD 4890. In terms of productization, we’re waiting for our OpenCL tools to complete conformance acceptance (they’ve been submitted to Khronos) and will likely need to get through some solid beta usage and up to a production state before an OpenCL-based Havok solution would be ready.
Then it’s really up to Havok if they want to bring this to market. I’d like to see them do this particularly with their cloth product since game developers can incorporate cloth late in their development cycle and our OpenCL implementation is generally transparent to the Havok API.”
Aren’t you completely contradicting yourself here?
And here is the interview regarding Bullet: http://www.hitechlegion.com/our-news/1411-bullet-physics-ati-sdk-for-gpu-and-open-cl-part-3?start=1
Some interesting quotes from the lead developer:
“Our NDA with AMD doesn’t allow to disclose details on AMD GPU OpenCL just yet.”
Okay, so you’re promoting an open source project with an open standard like OpenCL, but the lead developer is silenced by an NDA? How open is that? Apparently nVidia doesn’t mind him talking about their stuff:
“OpenCL works fine on NVidia 8800 and better.”
“Bullet’s GPU acceleration via OpenCL will work with any compliant drivers, we use NVIDIA GeForce cards for our development and even use code from their OpenCL SDK, they are a great technology partner.”
Yup, nVidia has their stuff in order. Great promotion for you, AMD! And they didn’t even mention that Bullet already contained a Cuda solver before the OpenCL standard was even finalized. So what exactly is AMD ‘helping’ with, anyway? And yes, it works fine on the 8800 series, which is now 3 years old, all thanks to nVidia and that silly proprietary Cuda they introduced with those chips. AMD can only support OpenCL and DirectCompute on their 4000 series and newer. That’s how committed AMD is to OpenCL and GPGPU.
However, what REALLY annoyed me were the responses by David Hoff of AMD:
“..Erwin would not know because he is irrelevant and ATI has their own team from Bullet working with them.”
WHAT?! Erwin Coumans is the main developer of Bullet. It’s HIS project, not yours! How DARE you call him irrelevant? Is that what you do with your partners? These Havok and Bullet ‘partnerships’ are just there so you can claim in the media that “We’re working on physics too!” in response to nVidia’s PhysX, right?
Another thing that really annoyed me is how he just avoids direct questions about GPU support in OpenCL. There is OpenCL support in the Stream SDK, but it is CPU-only at this point. They ‘forget’ to mention that, so most people probably wrongly assume that AMD has GPU-accelerated OpenCL. See for yourself: http://developer.amd.com/gpu/ATIStreamSDKBetaProgram/Pages/default.aspx. They clearly state “OpenCL™ allows programmers to preserve their expensive source code investment and easily target both multi-core CPUs and the latest GPUs, such as those from AMD.” True, OpenCL allows that, but your SDK does not! Very misleading, and very deliberate. AMD’s developer forums are actually filled with people asking about it; they too were tricked, and now they can’t find the GPU-accelerated parts in the SDK.
The other day I had a similar experience myself. At Beyond3D we were discussing nVidia’s new Fermi architecture. The question arose whether AMD’s new Cypress architecture also fully supports IEEE 754-2008, as nVidia’s does. Dave Baumann of AMD ‘answered’ by posting a quote from Tech Report, which was supposed to confirm it. Wait a second… an AMD marketing person posting a quote from a tech site? Why not refer to the official documentation? And why post only the quote, with no words of his own, like “yes, it does”? Well, I couldn’t find it in the official documentation. It doesn’t support it, does it? Tech Report could have gotten their information wrong (possibly because AMD deliberately misled them, just like with the non-existent GPU OpenCL SDK), and AMD is now using this wrong information to make its hardware appear more capable than it really is. After all, they’d be responsible for what their official documents say, but not for what some journalist writes on an independent website. When I asked a ‘critical’ question about why he referred to Tech Report rather than the official documentation, my post was magically deleted. Way to go, guys! I’m a developer; I need straight answers. Don’t lie to me, I don’t like being lied to.
This gives me a flashback to earlier lies from Dave Baumann and Richard Huddy, back when 3DMark05 started using nVidia’s shadowmapping extensions but not ATi’s 3Dc normalmap compression. Back then I actually HAD an ATi Radeon 9600XT in my development machine, so I was a direct customer and developer for them. Richard Huddy tried to spin it as FutureMark deliberately favouring nVidia and their proprietary extension (there’s that word again). He claimed that 3DMark05 could run faster on ATi hardware if it used 3Dc. Being a friend of one of the developers of 3DMark05, I discussed the issue with him and learnt a few interesting things. Firstly, their normalmaps were deliberately denormalized: the variations in vector length were used as scaling factors to reduce aliasing. 3Dc can only store normalized normalmaps, so you would lose the antialiasing effect and get lower image quality. Secondly, it was unlikely that 3Dc would have had a tangible effect on performance at all. Thirdly, they used nVidia’s extensions only because they would be standardized in DirectX 10, and various other IHVs were already working on new hardware with shadowmapping support (including ATi!). Lastly, ATi developer relations had actually worked together with FutureMark on the shaders, and the two had jointly decided that there wasn’t any point in using 3Dc. So Richard Huddy was stabbing FutureMark in the back: his own company had agreed not to use 3Dc, yet he tried to make it look like FutureMark deliberately left it out… and at the same time he pretended that the shadowmapping extensions were, and always would be, proprietary nVidia extensions, rather than a standard that ATi’s own next-generation products would support.
I actually tried to discuss this with Dave Baumann at the time as well, trying to explain the shader algorithm and why 3Dc wouldn’t produce correct results. I don’t think he understood any of it… In private, however, he actually admitted that he knew about the standardization of the shadowmapping extensions. In public he continued to deny it, and my posts were deleted. AMD has a considerable problem when some of its main PR figures have a reputation for deliberately lying, misleading, and backstabbing their partners. They’re supposed to make developers WANT to use their products. They make me want to stay away.
On a slightly related note… Here’s another case of AMD’s opportunistic marketing: http://computered.wordpress.com/2009/10/02/the-card-that-killed-the-dragon/
Last year they tried to sell us on the AMD Spider and Dragon platforms: get everything from AMD, and you’ll be great. Now they haven’t mentioned AMD platforms with a single word, and none of the reviews used one. I think we all know why. Intel’s new Nehalem platform is simply considerably faster than anything AMD has to offer, especially with multi-GPU setups. AMD couldn’t afford to have their new GPUs benchmarked with their own CPUs, as the results would probably be CPU-limited. They needed Intel to make the AMD GPUs look as good as possible. Before Nehalem that wasn’t a problem, as the Phenom II was a decent match for the Core 2 Quad in these situations. So everyone, AMD’s message today is NOT to use their CPUs and chipsets. No AMD platform! Go Intel! Of course they play dumb when someone actually asks why they don’t mention what platform they used for their own press material. You think we couldn’t figure it out ourselves? Don’t insult our intelligence!