The moment many of you have been waiting for: it’s the nVidia fanboys’ turn!

I suppose the release of Fermi made the nVidia fanboys come out of the woodwork again, after being relatively silent for a while. I had a brush with some of them recently… People were often frustrated with me harping on AMD and its fanboy following, and criticized me for not picking on other fanboys as well. My response was that I didn’t have anyone to pick on. I suppose with companies as dominant as Microsoft and Intel, there’s little use in being a fanboy. With nVidia it’s a different story, however, and today it’s nVidia’s turn, so enjoy.

It all started when I was reading one of the preview articles for the GTX465 on XBitlabs. One thing I noticed was that the driver settings for AMD and nVidia weren’t matched very well:

ATI Catalyst:
  Catalyst A.I.: Standard
  ..
  AA Mode: Quality

Nvidia GeForce:
  Texture filtering – Quality: High quality
  Texture filtering – Trilinear optimization: Off
  Texture filtering – Anisotropic sample optimization: Off
  Antialiasing – Gamma correction: On
  Antialiasing – Transparency: Multisampling

The Catalyst AI feature is enabled, which performs some texture/shader optimizations, while at the same time the trilinear and anisotropic optimizations in nVidia’s driver are disabled. Likewise, the ‘Quality’ AA mode in Catalyst is not equivalent to nVidia’s transparency multisampling AA.

So I figured I’d bring that up in a thread on the Anandtech forum which discussed the preview article. In my opinion, nVidia was put at a disadvantage because of the settings, so at this point I was defending nVidia. The result was that the first AMD fanboys came out of the woodwork to ‘explain’ to me that Catalyst AI doesn’t affect image quality. Yeah, right. I have a Radeon 5770 myself, so I know what the effects on image quality are: I develop my own 3D code with the thing, and I know what my textures and shaders SHOULD look like, and what AI sometimes does to them. Not that I consider that a bad thing in general, by the way: the performance gains outweigh the image quality losses, which is also why a lot of people don’t even SEE the differences. But that does not mean they don’t exist, so I linked to an article and explained how to use Paint.NET to highlight the differences:

Clearly the grass in particular has undergone some changes. With AI enabled, some grass seems to be missing altogether, probably because the textures are compressed/optimized in some way and the alpha channel gets messed up, so you ‘see through’ some of the grass.
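
For those who want to reproduce the comparison themselves without Paint.NET, the same trick takes only a few lines of Python. This is just a rough sketch: the filenames are made up, and it assumes two screenshots captured from exactly the same position, one with Catalyst AI enabled and one with it disabled.

    # Rough equivalent of the Paint.NET difference trick (filenames are hypothetical)
    from PIL import Image, ImageChops

    off = Image.open("catalyst_ai_off.png").convert("RGB")
    on  = Image.open("catalyst_ai_on.png").convert("RGB")

    diff = ImageChops.difference(off, on)               # per-pixel absolute difference
    boosted = diff.point(lambda v: min(v * 8, 255))     # amplify subtle differences
    boosted.save("difference_boosted.png")

Anything that lights up in the boosted difference image is a pixel that changed between the two driver settings.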

So the AMD fanboys needed to change the topic… Now that they could no longer argue that AI doesn’t affect image quality, the logical next step was to argue about how nVidia’s optimizations would affect image quality. Well, my experience was that prior to the 5000-series, Radeons had significantly worse image quality than GeForces, because AMD did quite a bit of ‘optimization’ with texture filtering and such. The 5000-series, however, was the first to have ‘perfect’ texture filtering capabilities, which can be demonstrated with a tool such as Demirug’s D3DAFTester. It is completely angle-independent:

This is EXACTLY how the image should look: it matches the D3D reference. The 5000-series is the first GPU family able to do this, and at the time of writing it is actually still the ONLY one, since nVidia’s Fermi architecture cannot do it either, which I already mentioned as something of a disappointment in my earlier blog covering the Fermi release.
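
To make that a bit more concrete, here is a minimal sketch (loosely following the textbook approximation, not any particular GPU’s implementation) of how an idealized, brute-force anisotropic filter decides how many samples to take. The only inputs are the partial derivatives of the texture coordinates per pixel; the on-screen orientation of the surface never enters into it, which is exactly why a brute-force implementation is angle-independent.

    # Idealized sketch: degree of anisotropy from the pixel's texture-space footprint
    import math

    def aniso_samples(dudx, dvdx, dudy, dvdy, max_aniso=16):
        # Texel-space lengths of the footprint along screen x and screen y
        px = math.hypot(dudx, dvdx)
        py = math.hypot(dudy, dvdy)
        p_max, p_min = max(px, py), min(px, py)
        # Number of probes along the line of anisotropy, clamped to the hardware limit
        return min(math.ceil(p_max / max(p_min, 1e-8)), max_aniso)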

Then something unexpected happened. Although I originally entered the thread to defend the GTX465 against the ‘unfair’ settings, I had now hit a sore spot of nVidia fanboys: nVidia can’t match AMD’s image quality. So a guy by the nickname of BenSkywalker started attacking me. He’s not a total stranger to me; we’ve had discussions before… I recall an instance where he tried to discredit Larrabee. He used a lot of broken arguments, and did not seem to have a deeper understanding of Larrabee or rendering algorithms in general. At first he argued about how ray/triangle intersections are very inefficient… so apparently he thought Larrabee tried to solve everything with raytracing. Then he started referring to the articles that Michael Abrash had published in Dr. Dobbs. I tried to explain that Abrash merely describes a parallel tile-based rasterizer… but apparently BenSkywalker didn’t really grasp it all. He also didn’t seem to understand the difference between a tile-based renderer and a tile-based DEFERRED renderer. So in short, my first impression of the guy was that he was arrogant, had a big mouth, tried to defend nVidia or discredit its competitors, and had a general lack of knowledge about the subjects, or deliberately used crackpot theories.

This encounter was no different. Since it’s hard to dispute that the 5000-series does perfect angle-independent anisotropic texture filtering, a dedicated fanboy has to find another area to attack. Creative as fanboys are, he wanted to trick people into believing that AMD doesn’t “fully sample”. Then he threw some ancient nVidia architectures into the mix, which were angle-independent and did fully sample. So I pointed out that that is pretty obvious, since those used a brute-force implementation, which by its nature doesn’t care about angles (hence being fully angle-independent).

He didn’t seem to understand that, so he tried to attack me… In the process he made a fool of himself, because apparently he doesn’t understand how texture filtering works. When I hinted at the difference between an angle and a partial derivative, I seem to have lost him, but his huge ego just kept throwing insults around anyway.
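
For completeness, this is the kind of thing I was hinting at. Mip selection is driven by partial derivatives: how fast the texture coordinates change from one pixel to the next. A rough sketch of the standard LOD calculation (the inputs are just example values):

    # Rough sketch of standard mip LOD selection from partial derivatives
    import math

    def mip_lod(dudx, dvdx, dudy, dvdy, tex_size):
        # Rate of change of the texture coordinates, scaled to texel units
        rho = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy)) * tex_size
        return math.log2(max(rho, 1e-8))   # 0 = base level, higher = smaller mips

Note that no screen-space ‘angle’ appears anywhere in this calculation.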

Then he brought up this fanboy site that had ‘analyzed’ the filtering on GeForce and Radeon. According to him, it is impossible that there’s a solid gray area around the center when using a small checkerboard texture:

5770-AF

Gee, I wonder what he thinks the mipmaps look like then, when you take a black/white checkerboard and generate mipmaps for it. So I gave him a small hint: “I see a checkerboard pattern… which under certain circumstances may result in gray pixels yes (the pattern being 50% black and 50% white… just sample exactly in the center, et voila).”

Being the idiot that he is, he didn’t understand that I was talking about how a box filter would generate the mipmaps like that, and end up with gray as the smallest mipmap (or mipmaps, depending on the scale of the checkerboard). So he started spouting a lot of arrogant nonsense about how the surface is not flat and how it must be undersampling. The irony is that the OTHER cards *appear* to have more detail in that area because THEY are the ones that are undersampling. They paint moiré patterns that may fool people into believing the image is sharper and more detailed, but from a theoretical point of view, the AMD image is the correct one. The point at which the gray starts appearing in the image is perfectly correct: the point where the texture:pixel ratio approaches 1, so where you get into the gray mipmap level (or levels).
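
If you want to see this for yourself, here’s a tiny sketch that builds a mip chain for a black/white checkerboard with a plain 2x2 box filter (the texture size and cell size are just example values). Once the downsampling reaches the size of the checker cells, every 2x2 block averages half black and half white, and from that level down the entire mipmap is uniformly gray.

    # Box-filtered mip chain of a checkerboard: the lower mips end up solid gray
    import numpy as np

    def checkerboard(size, cell):
        y, x = np.indices((size, size))
        return (((x // cell) + (y // cell)) % 2).astype(float)   # 0.0 / 1.0 texels

    def box_downsample(img):
        # Average each 2x2 block into one texel (the classic box-filter mip step)
        return (img[0::2, 0::2] + img[1::2, 0::2] +
                img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

    tex = checkerboard(256, cell=8)
    while tex.shape[0] > 1:
        tex = box_downsample(tex)
        print(tex.shape[0], tex.min(), tex.max())
    # From the 16x16 level down, min == max == 0.5: a solid gray mipmap.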

And yes, obviously the image looks different when you apply SSAA. Makes perfect sense too, because you change the texture:pixel ratio when you supersample. Then when you downsample the final image, you get an extra level of filtering, which will remove some of the remaining moire effect. But that doesn’t mean that the non-supersampled image was somehow wrong. In fact, it would be nothing short of amazing if you could get the same image quality without supersampling!
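
As a rough back-of-the-envelope illustration (an idealized assumption; real resolve filters differ): rendering at k×k supersampling shrinks the texture footprint per sample, so the effective mip LOD drops by about log2(k), and the gray transition moves further into the distance before the downsample averages things back together.

    # Idealized illustration of how supersampling shifts mip selection
    import math

    def lod_with_ssaa(base_lod, k):
        # k x k samples per pixel -> derivatives shrink by k -> LOD drops by log2(k)
        return base_lod - math.log2(k)

    print(lod_with_ssaa(3.0, 2))   # 2x2 SSAA: one full mip level sharper -> 2.0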

It leaves me with a rather strange feeling: in a single thread, on a single subject (texture filtering quality), I get attacked by both sides. For the record, I always run with Catalyst AI set to Advanced, and with my previous nVidia card I also had the texture optimizations enabled. So yes, on the one hand I support the notion that the optimizations have such a minor impact on image quality that there’s no reason to turn them off. On the other hand, if you go for maximum image quality, the Radeon simply is the better GPU, period. No, I don’t think you will notice much of this better image quality in daily use… but I’m not going to let some idiot fanboys get away with crackpot theories that nVidia actually does it BETTER.
