Come on, don’t be shy!

The internet is a strange place… In theory it is an ideal medium for communication… but in practice people communicate very selectively. I myself try to be as open as possible about the things I write on blogs or forums. When I discuss an article that someone else has posted, I will generally also contact the author with a link to the discussion on my blog. They will generally get a pingback, and if possible, I will also try to add a comment to the article, or contact the author by email. I don’t like to do things behind someone’s back. I want to discuss things openly.

Sadly, not everyone is interested in that. For example, AMD did not approve my pingback or comment on their ‘Tessellation for all’ blog. Apparently they are not open to criticism or discussion. And it also occurs the other way around. I often find references or pingbacks in my blog stats, where people will ‘discuss’ things I write on my blog or on another forum… But funnily enough, most of the time these people do not bother to post a comment here, or open a discussion directly with me.

Which is why I am going to make an example out of one such occasion. I happened to stumble upon the forum of Alienbabeltech.com, while following the links to my blog. I have never participated on that forum, nor do I even know most of the people there… and as such they won’t know me either.

I found a post from a person by the nickname of gstanford, saying:

People who really should know better are spouting some ridiculous things in that thread. Take Scali for instance:

Quote:
You can’t add to an API what doesn’t exist in hardware, so you can’t fault the API for not supporting it.

What a steaming pile of cow manure that statement is. You most certainly can add things to an API without hardware support. The thing just gets software emulated. How the hell does he think most 8/16 bit computers went about performing floating point math?!

Now as far as I know, I have never spoken to this individual before, on any forum… But it’s such a shame that he didn’t confront me with this directly… Looking a bit further, I found another post of his, a few months later:

His posting manner is only irritating because he posts the truth and facts, which others find impossible to refute.

Well, that’s a bit ironic in a way. I think I’ll have to agree with him. I always try to just post truth and facts, and present links to reliable sources which support what I say. When asked, I am also willing to clarify what I say, or point to more sources if necessary.

Anyway, getting back to his earlier post… I have to agree that my phrasing wasn’t entirely bulletproof. However, the description of “steaming pile of cow manure” is ridiculous. The statement I made referred to Direct3D 9, which didn’t include certain features that only appeared in hardware many years later. While theoretically he is correct… you *can* emulate functionality in software… it is ridiculous in this particular context. Namely, the Direct3D API is not just an API, it is a hardware abstraction layer. Unlike OpenGL, Direct3D has always aimed to only abstract the hardware, and never to quietly emulate functionality in software. OpenGL’s software emulation was a huge problem, especially in the early days, since it could cause dramatic performance drops, and developers really had no way of knowing beforehand whether a feature would run in hardware or fall back to software, since the API simply reported that the feature was supported.
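To make that difference concrete, here is a minimal sketch (my own illustrative example, not code from the discussion) of how a Direct3D 9 application asks the runtime whether the hardware supports a feature such as 4x multisampling before relying on it; the adapter, format and sample count are arbitrary choices for the example:

```cpp
// Minimal sketch: querying Direct3D 9 for 4x multisampling support.
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    DWORD qualityLevels = 0;
    HRESULT hr = d3d->CheckDeviceMultiSampleType(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,   // ask about the real hardware device
        D3DFMT_X8R8G8B8, TRUE,                // surface format, windowed mode
        D3DMULTISAMPLE_4_SAMPLES, &qualityLevels);

    if (SUCCEEDED(hr))
        printf("4x multisampling supported (%lu quality level(s))\n", qualityLevels);
    else
        printf("4x multisampling not supported by this hardware\n");

    // The runtime only reports what the hardware can do here;
    // it never silently substitutes a software path for the feature.
    d3d->Release();
    return 0;
}
```

If the check fails, the feature simply isn’t available; the API will not quietly fall back to a software path the way early OpenGL implementations could.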

Direct3D makes only one exception, and that is for vertex processing. But we were talking about antialiasing, and doing that in software would severely hamper performance, as it would force pretty much all pixel shading into software. This would be unacceptable. Clearly what I meant to say was more along the lines of “You don’t want to add to a hardware abstraction layer API what doesn’t exist in hardware, so you can’t fault the API for not supporting it, as adding support would basically degrade the whole API to a software renderer, and it would lose its entire point as a hardware abstraction for 3D accelerators.” Which makes a whole lot more sense than his statement, really. Trying to add software-emulated multisample readback AA to Direct3D 9? If there’s anything I’d classify as “steaming pile of cow manure”…
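For completeness, here is a rough sketch of that one exception (again my own example; the helper function name is mine): when creating a D3D9 device, the application itself chooses between hardware and software vertex processing based on the reported caps, so even this fallback is explicit rather than silent:

```cpp
// Sketch of the one explicit software fallback in Direct3D 9: vertex processing.
// The application picks the behaviour flag itself; nothing happens behind its back.
#include <d3d9.h>

IDirect3DDevice9* CreateD3D9Device(IDirect3D9* d3d, HWND hwnd)  // hypothetical helper
{
    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // Use hardware T&L / vertex shading if the card supports it,
    // otherwise explicitly opt in to D3D's software vertex pipeline.
    DWORD vp = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)
                   ? D3DCREATE_HARDWARE_VERTEXPROCESSING
                   : D3DCREATE_SOFTWARE_VERTEXPROCESSING;

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed = TRUE;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;
    pp.hDeviceWindow = hwnd;

    IDirect3DDevice9* device = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd, vp, &pp, &device);
    return device;  // NULL on failure; error handling trimmed for brevity
}
```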

Looking further, I found another post of his, again trying to ‘correct’ me:

However Scali while very knowledgeable isn’t always correct.

http://forums.anandtech.com/showpost.ph … tcount=521

Scali wrote:
blastingcap wrote:
So, I’m curious now. What’s your AMD/ATi list look like?

Well, I think we can give them tessellation.
I think they were also the first with hierarchical z/stencil buffering.
But some things are a bit difficult to say…
While clearly ATi was first with SM2.0, and probably was a large factor in the development of this standard… is that really an innovation, or just building on the groundwork of programmable shaders that nVidia laid? Because if we count such technologies, then I think I can make nVidia’s list a whole lot longer as well.

I believe you will find that nvidia’s Cg was the driving force behind SM2.0
http://en.wikipedia.org/wiki/Cg_%28prog … anguage%29
it got a lot of use in games of the time (nv3x timeframe), including famous ones like Farcry1. See attachment (or your own farcry ‘bin32’ folder if you don’t believe me and see also http://en.wikipedia.org/wiki/Cg_(programming_language)#Applications_and_games_that_use_Cg).

Again, had he bothered to contact me directly, I could have easily defended my statements. Firstly, HLSL and Cg are two sides of the same project. Microsoft and nVidia developed HLSL/Cg together (as his own Wikipedia link states). nVidia wasn’t really the driving force, as HLSL has its roots in the Effect framework, which Microsoft introduced with Direct3D 8:

HLSL was created starting with DirectX 8 to set up the programmable 3D pipeline. In DirectX 8, the pipeline was programmed with a combination of assembly instructions, HLSL instructions and fixed-function statements.

Secondly, HLSL/Cg are programming languages, and are not tied directly to SM2.0; HLSL/Cg also support SM1.x. ATi’s Radeon 9700 was the first SM2.0 hardware on the market. nVidia didn’t have any SM2.0 hardware on the market until much later, and as a result, Cg didn’t gain SM2.0 support until much later either. Aside from that, nVidia’s first SM2.0 hardware, the infamous GeForce FX series, had notoriously poor performance. It wasn’t until nVidia’s next major architecture, the GeForce 6 series, that nVidia finally delivered SM2.0 performance on par with ATi’s offerings. Clearly, ATi’s Radeon 9700 was the ‘blueprint’ for Microsoft’s SM2.0, and not nVidia’s GeForce FX. The reverse situation happened a few years later, with SM4.0: nVidia’s GeForce 8800 served as the ‘blueprint’ for SM4.0, while ATi’s Radeon 2900 arrived much later, and its performance was much worse.
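To illustrate that the language and the shader model are separate things, here is a minimal sketch (my own example, using the legacy D3DX compiler from the DirectX 9 SDK) that feeds the same trivial HLSL source to both an SM1.x and an SM2.0 profile:

```cpp
// Sketch: one HLSL source, two shader model targets. Requires the legacy
// DirectX 9 SDK (d3dx9.lib); the shader itself is a trivial pass-through.
#include <d3dx9.h>
#include <cstdio>
#include <cstring>

static const char* kSource =
    "float4 main(float4 color : COLOR0) : COLOR0\n"
    "{\n"
    "    return color;\n"
    "}\n";

static void CompileFor(const char* profile)
{
    ID3DXBuffer* code = NULL;
    ID3DXBuffer* errors = NULL;
    HRESULT hr = D3DXCompileShader(kSource, (UINT)strlen(kSource),
                                   NULL, NULL, "main", profile,
                                   0, &code, &errors, NULL);
    printf("%s: %s\n", profile, SUCCEEDED(hr) ? "compiled" : "failed");
    if (code) code->Release();
    if (errors) errors->Release();
}

int main()
{
    CompileFor("ps_1_1");  // Shader Model 1.x pixel shader profile
    CompileFor("ps_2_0");  // Shader Model 2.0 pixel shader profile
    return 0;
}
```

The same source compiles to either target; which shader model it ends up running on is a property of the hardware and the chosen profile, not of the language.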

So my point is: I don’t just talk nonsense. I am only interested in truth and facts. I can and will back up everything I say. You just have to open the discussion. Then again, perhaps it is because people already know that what I say is pretty hard to refute… I guess they are just insecure.


