I’ve already mentioned it in my previous blog about Windows 7… There are quite a few people on the net who *think* they are tech-savvy. The annoying part is they tend to get very arrogant and start insulting people who DO know what they’re talking about. They’ll never admit they’re wrong.
In the past few days I’ve encountered a few more ‘gems’ from such people. It’s interesting to see how these people think.
For example, the other day I saw someone claim that “Ambient Occlusion is a DX10.1 feature”, trying to argue that nVidia can’t support it. That was a bit of a déjà vu moment for me. Years ago, when videocards first got programmable shaders, I noticed that quite a few people had some kind of idea that graphics features were things that you just ‘turned on’. Like a programmer just had to do something like SetBumpmapping(true), and the game would be bumpmapped… Or SetShadowMap(true), etc.
They didn’t seem to realize that you actually write a program, in which you have to implement all the graphics effects yourself, and that this can be done in many ways. It’s not something you just turn on or off either. You have to manage the resources in your 3d engine so that the shader programs can access bumpmaps, shadowmaps and that sort of thing. And of course your content has to be designed with these bumpmaps and things in mind as well. It couldn’t be further from “just turn it on”; it’s very hard work really 🙂
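To make that concrete, here’s a minimal toy sketch in Python (all names hypothetical, nothing like a real engine or shader API): the “bumpmapping” only exists because we wrote the per-pixel lighting math ourselves and supplied a normal map as content.

```python
# Toy per-pixel "bump mapping": the effect is just math we wrote,
# fed by data (a normal map) the content had to be authored with.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The content side: a tiny 2x2 "normal map" an artist would have to author.
normal_map = {
    (0, 0): (0.0, 0.0, 1.0),   # flat surface
    (1, 0): (0.5, 0.0, 0.87),  # tilted normal -> different shading
    (0, 1): (0.0, 0.5, 0.87),
    (1, 1): (0.0, 0.0, 1.0),
}

light_dir = normalize((0.3, 0.4, 0.85))

def shade(x, y):
    """The per-pixel 'shader': diffuse lighting from the perturbed normal."""
    n = normalize(normal_map[(x, y)])
    return max(0.0, dot(n, light_dir))

image = {p: shade(*p) for p in normal_map}
```

There is no SetBumpmapping(true) anywhere: delete the normal map or the shade function and the whole effect is gone.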
I recall that back in those days even the internet journalists/reviewers started having problems understanding videocards and their capabilities. You almost had to be a graphics developer yourself to understand all this new advanced technology. I noticed that many sites dropped to the level of merely regurgitating what the GPU marketing departments were telling them. Sometimes they didn’t even get THAT right. I have to give credit to sites like Anandtech and Extremetech, which seem to have some staff members who still have a good idea of what they’re talking about, and still deliver articles with their own view and interpretation of the technology, rather than just repeating press releases and other marketing material.
Getting back to ambient occlusion… Well, that is just a rendering technique. If anyone bothered to look it up on Wikipedia, they’d see what it is exactly, and that it has no direct relation to DirectX or anything. In fact, I recall that some people implemented a vertex-based ambient occlusion algorithm back in the DX9 era. Crysis also uses a screen-space ambient occlusion algorithm, and that doesn’t need DX10.1 either. nVidia has now added Ambient Occlusion as a driver option, allowing you to apply it to various titles that originally didn’t support it… And nVidia’s hardware doesn’t support DX10.1 either.
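The core idea really is API-agnostic. A rough sketch of what any AO algorithm estimates (purely illustrative, not any particular game’s or driver’s implementation): the fraction of the hemisphere above a surface point that is not blocked by nearby geometry.

```python
import math
import random

# Toy ambient-occlusion estimate: sample directions over the hemisphere
# above a point and count how many are blocked by geometry. No DirectX
# version (or DirectX at all) is involved in the concept.

def sample_hemisphere(rng):
    """Uniform random direction on the upper hemisphere (z >= 0)."""
    while True:
        v = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0, 1))
        l = math.sqrt(sum(c * c for c in v))
        if 0 < l <= 1:  # rejection sampling inside the unit ball
            return tuple(c / l for c in v)

def ambient_occlusion(blocked, samples=1000, seed=1):
    """blocked(direction) -> True if geometry occludes that direction.
    Returns 1.0 for a fully open point, 0.0 for a fully occluded one."""
    rng = random.Random(seed)
    hits = sum(blocked(sample_hemisphere(rng)) for _ in range(samples))
    return 1.0 - hits / samples

open_sky = ambient_occlusion(lambda d: False)           # nothing nearby
next_to_wall = ambient_occlusion(lambda d: d[0] > 0.0)  # wall blocks half the sky
```

Whether you evaluate this per vertex (as in the DX9-era approach) or approximate it from a depth buffer in screen space (as Crysis does), it’s the same quantity being estimated.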
So where do people get the idea that it’s a DX10.1 feature? And what’s more, why are they confident enough to say this in discussions on public forums? They’re making fools of themselves… Or are they? Because it seemed that nobody responded to this remark. Does nobody else know what they’re talking about either? That’s quite amazing, actually. So these people might feel their confidence rising, because they get the false impression that they know what they’re talking about. And with that, they become more and more arrogant. Until at some point, when someone who DOES know what he’s talking about tries to correct them, they just respond with insults.
I’ve also had some discussions regarding Cuda and OpenCL recently. The same thing seemed to happen. Most people don’t really seem to have a clue what they’re talking about, yet they have strong opinions about the technology. Funnily enough, these seem to be ATi supporters most of the time. As if they are frustrated that nVidia had Cuda first. They also didn’t seem willing to even consider the possibility that OpenCL is actually very similar to Cuda, and that for that reason there is a very realistic chance that nVidia’s hardware will run OpenCL code more efficiently than ATi’s hardware.
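How similar are they? Here’s a hypothetical vector-add kernel written in both dialects (illustrative sketches, not taken from any real codebase), compared with a purely mechanical spelling-for-spelling mapping:

```python
# Hypothetical vector-add kernels: CUDA C on one side, OpenCL C on the
# other. Neither is compiled or executed here; we only compare the source
# texts to show how little actually differs.

cuda_kernel = """
__global__ void vec_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}
"""

opencl_kernel = """
__kernel void vec_add(__global const float *a, __global const float *b,
                      __global float *c, int n) {
    int i = get_global_id(0);
    if (i < n) c[i] = a[i] + b[i];
}
"""

# A purely mechanical mapping from OpenCL spellings to CUDA spellings:
translated = (opencl_kernel
              .replace("__global ", "")           # address-space qualifier
              .replace("__kernel", "__global__")  # kernel entry-point marker
              .replace("get_global_id(0)",
                       "blockIdx.x * blockDim.x + threadIdx.x"))

# After the mapping, the two kernels are identical up to whitespace.
same_after_mapping = " ".join(translated.split()) == " ".join(cuda_kernel.split())
```

If porting a kernel is largely a renaming exercise, it’s not far-fetched that code written with Cuda’s execution model in mind will also map well onto nVidia’s hardware under OpenCL.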
They also seemed to be angry that some developers had already written software for Cuda, saying that such software is useless until it’s ported to OpenCL so everyone can use it. What nonsense. Firstly, these developers apparently thought that their software would have a good enough market even if it only worked on nVidia hardware… And secondly, these people don’t seem to realize that OpenCL is little more than a paper standard at this point. Both nVidia and ATi are still working on OpenCL 1.0-conformant drivers, and it could take until the end of the year before the first OpenCL drivers actually reach end-users, so that software written for OpenCL will even run on their systems.
Oh well, it’s amazing what idiots run around on internet forums, blogs etc. All these self-proclaimed experts… Sheesh. What annoys me most is when they don’t even believe me, or try to tell ME that I don’t know what I’m talking about. Then suddenly *I* have to prove that I know how to write graphics software? Geez. That’s funny really, when I’ve been involved with Win32ASM for years, and also with the demoscene, and did some toy demo programs for things like Flipcode’s Image Of The Day back in the day… Or the caustic raytracer I made together with Ewald Snel back at university for our Master’s (oh, raytracing, that’s another story altogether, maybe for another blog). Thing is, Flipcode is long gone, most people have never heard of the demoscene anyway, or Win32ASM… and it all was a long time ago. Not that I’m bothered, but it would be nice if people knew who they were talking to, just as it was back when I was part of those communities and I didn’t have to prove that I knew what I was talking about.