Just after I posted my previous blog regarding OpenGL and Direct3D, Khronos announced OpenGL 3.3 and 4.0. This of course rekindled the old OpenGL vs Direct3D war around the web, which, as I said only days earlier, didn’t seem to have died down completely yet. Sadly, people are as clueless as ever. I often saw the argument that the Sony PlayStation 3 uses OpenGL. This is not correct. While there is a Linux-based development environment, and OpenGL is available on the machine, the normal way of rendering in a PS3 game is to use Sony’s own graphics API or to access the hardware directly, as regular OpenGL isn’t very efficient on the PS3, with its rather unique hardware configuration.
In fact, there was more OpenGL news, as Valve has decided to port Steam and their Source engine to OS X, using a native OpenGL implementation (whereas many OS X games are basically the Windows DirectX code running on a DirectX-to-OpenGL wrapper, such as Cider/Cedega/Wine or MacDX).
Does this mean that OpenGL is back in the game? Well, not entirely. So far, Khronos has only announced the new 3.3 and 4.0 standards. It will take a bit of time until the videocard manufacturers have implemented these new standards in their drivers. Another thing is, the OpenGL 4.0 standard is basically just OpenGL playing catch-up with Direct3D 11. Because Direct3D has been so dominant in the past years, videocards have pretty much been designed specifically around the Direct3D standard, and the days of videocards offering functionality above and beyond the Direct3D API have all but disappeared. Yes, they do still offer extra functionality, but that is mostly GPGPU-related, and that’s the terrain of OpenCL (and Cuda/Stream/DirectCompute), not OpenGL. So that doesn’t leave OpenGL all that much opportunity to get ahead of Direct3D 11, and as such, the main reason for choosing OpenGL over Direct3D remains the support on alternative platforms such as OS X, Linux and the BSDs.
Aside from that, as nice as it is that Valve chooses to port their Source engine to OpenGL and support OS X… the Source engine is far from a cutting-edge engine (as great as it was when it was first introduced). It is still using Direct3D 9, and Valve’s games are getting a bit long in the tooth. It’s not exactly the graphical might of Crysis. Having said that though, I wonder where Valve is taking it from here. After all, their Source engine being Direct3D 9 is getting to be a problem on Windows as well. If they want to move forward, they will have to implement Direct3D 11, and as I mentioned before in my own Direct3D-related blogs, converting a Direct3D 9 application to Direct3D 10/11 is not exactly trivial. The APIs are completely different.
This could mean that Valve decides to go OpenGL on Windows. After all, if they already have OpenGL support in their Source engine, it will be easier to expand the existing support to OpenGL 3.3 or 4.0 than it would be to move to Direct3D 10/11 (the basic OpenGL API remains the same, there are just some new things added, and some old things scrapped). With OpenGL 4.0 offering basically the same functionality as Direct3D 11, they wouldn’t need to support Direct3D anymore.
Performance and drivers
The main obstacles for OpenGL at this point remain driver support and performance, as I already touched upon in the previous blog. Khronos can draw up new OpenGL specifications, but if companies such as Intel stick to minimal implementations of the aging 2.0 spec, it isn’t doing OpenGL a lot of good. The situation on OS X isn’t that great either. Apple supplies part of the OpenGL runtime there, but Apple is not all that up-to-date itself. At the time of writing, as I understand it, OpenGL 3.0 is still not supported on OS X.
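For what it’s worth, the quickest way to see what a given driver actually gives you is simply to ask it. A minimal sketch (assuming GLEW and an already-created context; the function name is mine, not from any real codebase):

```cpp
// Minimal sketch (assumes GLEW and an existing GL context): the version and
// renderer strings are the easiest way to see whether a driver exposes a
// modern OpenGL or a bare-bones 2.0-era implementation.
#include <GL/glew.h>
#include <cstdio>

void PrintGLDriverInfo()
{
    printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
    printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
    printf("GLSL:        %s\n", (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION));
}
```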
One of the problems I ran into is related to OpenGL and its extension system. With a lot of ‘standardized’ functionality being basically ARB-ratified extensions, it is easy for driver developers to implement these extensions selectively. With Direct3D 9, if your graphics hardware does not support a certain version of vertex shaders, you can still opt to use software emulation. In Direct3D 10/11 this software emulation is silently enabled, as there are no fine-grained caps for the hardware anymore. With OpenGL, I have found that if the hardware doesn’t support vertex shaders, you don’t get the extension either, which means you cannot run shaders in software mode. You will have to implement your own fallback vertex processing. Luckily this scenario should disappear within a reasonable timeframe, as the DirectX 10 generation of IGPs all have hardware vertex shader support, and support for GLSL in the drivers (even Intel’s, as basic as it may be).
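To illustrate, this is roughly the kind of check you end up with in OpenGL (a sketch, assuming GLEW and an existing context; the function name is illustrative), where in Direct3D 9 you would instead inspect the device caps and could still fall back to software vertex processing:

```cpp
// Minimal sketch, assuming GLEW: if the hardware/driver does not support
// vertex shaders, the ARB extensions simply aren't exposed, and unlike D3D9
// there is no software fallback -- the application has to detect this and
// provide its own vertex processing path.
#include <GL/glew.h>

bool HasGLSLVertexShaders()
{
    // These flags are filled in by glewInit() once a context exists
    return GLEW_ARB_shader_objects && GLEW_ARB_vertex_shader;
}
```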
Getting back to some of my own performance experiences… it looks like AMD’s OpenGL drivers still aren’t all that great.
I added a simple FPS counter to my code, to get an idea of how well things are running so far.
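The counter itself is nothing fancy; something along these lines (a sketch, not my actual code), called once per frame after the buffer swap:

```cpp
// Simple FPS counter sketch: count frames and report once per second.
#include <chrono>
#include <cstdio>

void CountFrame()
{
    using clock = std::chrono::steady_clock;
    static clock::time_point last = clock::now();
    static int frames = 0;

    ++frames;
    const clock::time_point now = clock::now();
    if (now - last >= std::chrono::seconds(1))
    {
        printf("%d fps\n", frames);
        frames = 0;
        last = now;
    }
}
```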
I rendered a single triangle, using static VBOs, which should theoretically be the fastest way to render them.
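For reference, the setup is roughly this (a sketch with illustrative names, assuming GLEW and a 2.x-class context): the vertex data is uploaded once with the GL_STATIC_DRAW hint, so each frame is then just a bind and a draw call.

```cpp
// Sketch of a static VBO holding one triangle, drawn with client-state
// vertex arrays (names and layout are illustrative).
#include <GL/glew.h>

GLuint CreateTriangleVBO()
{
    const float verts[] = {
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    };

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW); // uploaded once
    return vbo;
}

void DrawTriangle(GLuint vbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (void*)0); // offset into the bound VBO
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```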
First I tried on my laptop with Intel X3100. It got around 1000 fps, in Vista 32-bit.
Then on my desktop with a Radeon HD5770: about 1500 fps in Vista 64-bit. In Windows 7 64-bit it was closer to 2000 fps.
Now I was getting curious… Apparently there’s quite a bit of CPU overhead somewhere, because the GPU should be WAY faster than the X3100, not just 1.5-2x as fast (especially since I also have a 1.5 GHz Core2 Duo in my laptop versus a 3 GHz Core2 Duo in my desktop). What’s more, if I run the ENTIRE BHM scene in D3D9/10/11 (as opposed to just a single triangle), animated, per-pixel lit and everything, I still get about 6000 fps on the HD5770.
So I tried it in XP, which is generally slightly faster in windowed mode (gets about 7000 fps on my HD5770). Now I got about 3500 fps… that’s more like it, but still a far cry from D3D9 performance.
Now, AMD’s drivers have a bit more overhead than nVidia’s anyway. On an old Pentium 4 system I have, with a GeForce 9800GTX (which is quite a bit slower than a HD5770), I would get 7000 fps in Vista/Win7, and 8000 fps in XP.
So a friend of mine ran the OpenGL thingie for me on his machine with Windows 7 and a GeForce. He got over 5000 fps… so apparently the overhead is mostly in AMD’s OpenGL implementation. Even nVidia’s implementation is not *quite* as low-overhead as D3D, but it’s still considerably better than AMD’s.
So… what do I conclude from this so far?
- Contrary to popular belief, basic driver overhead in D3D is lower than in OpenGL, in practice.
- Windowed-mode 3D graphics have slightly more overhead on Vista/Windows 7 than on XP.
- The difference in overhead between Vista/Windows 7 and XP is larger with AMD than with nVidia.
- The overhead in AMD’s drivers is higher than in nVidia’s drivers.
- In OpenGL, the difference in overhead between AMD’s and nVidia’s drivers is significant.
In practice it probably won’t make that much of a difference though.
Namely, if we take an overly pessimistic worst-case scenario:
– 1000 fps means 1 ms per frame
– 8000 fps means 0.125 ms per frame
– 60 fps means 16.667 ms per frame
So, if we assume that all of the difference between rendering at 1000 fps and at 8000 fps is overhead, that means the driver adds 0.875 ms of overhead per frame.
Now, if you were to translate that to the 60 fps figure, an extra 0.875 ms of overhead per frame would only make a difference of about 3 fps. So it’s very marginal.
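Or, spelled out as a quick back-of-the-envelope calculation (using the measured figures above):

```cpp
// Back-of-the-envelope check of the overhead estimate above.
#include <cstdio>

int main()
{
    const double overheadMs = 1000.0 / 1000 - 1000.0 / 8000; // 1 ms - 0.125 ms = 0.875 ms
    const double baseMs     = 1000.0 / 60;                   // 16.667 ms per frame at 60 fps
    printf("Overhead: %.3f ms per frame\n", overheadMs);
    printf("60 fps with that overhead: %.1f fps\n", 1000.0 / (baseMs + overheadMs)); // ~57 fps
    return 0;
}
```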
Once the OpenGL code can also render and animate the entire BHM file, I can do a more direct comparison. Frankly I don’t expect the framerate to drop much, if at all, when it has to render the entire scene. I’ll just have to wait and see if I’m right about that.
Hi, Valve said that the OpenGL renderer will be Mac only and at GDC 2010 they had a dx11 (and dx10) presentation: http://schedule.gdconf.com/ & https://www.cmpevents.com/GD10/a.asp?option=C&V=11&SessID=10269