Well, in my previous blog I spoke of the Q35, and how I managed to get my D3D11 engine running on it. As it turned out, the D3D9 engine did NOT work on it (even though it used the same shaders, textures and everything). Although technically I shouldn’t be concerned about such hardware, since it’s below the minimum level that I am interested in supporting, I just can’t stand it when something doesn’t work when it should. So I couldn’t rest until I found out what the problem was, and ultimately fixed it.
The problem was that the application opened and started rendering, but the claw didn’t appear. The rest seemed to look okay. So, first thing I did was to check with DXCapsViewer, to try and figure out why the skinned claw wasn’t working. I noticed that the GPU supported pixelshader 2.0, but no hardware vertexshading. I then checked my code, and noticed that I had made a small typo somewhere, which could prevent it from going to software vertexprocessing mode.
This however did not fix the problem. Upon closer inspection, the GPU didn’t support any hardware T&L at all, so it would never have gone into mixed vertexprocessing mode in the first place. It would just be a software-only device. So what would be the problem then? I figured that if the vertexprocessing is done in software, it will always support VS3.0-level anyway, so perhaps I should redirect my attention towards the pixelshading stage.
So I inspected the texture states that I set with the skinned and the unskinned materials… but they were the same, so that couldn’t be it either. At this point I figured there was no other alternative but to install the debug runtime to get additional information. So, I installed the debug D3D9 runtime on the Q35 machine.
It gave me some useful info… or maybe not?
What it said was something like this: “The output of the current vertex shader cannot be used, because it cannot be mapped to a valid FVF”.
Now, I’m not sure what it’s trying to tell me…
I’ve found this with Google… It seems to be somewhat related:
DirectX 9.0 Drivers without Pixel Shader Version 3 Support
- The input declaration must be translatable to a valid FVF (have the same order of vertex elements and their data types).
- Gaps in texture coordinates are allowed.
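If I read those rules correctly, the pixel shader’s input declaration must line up with something the fixed-function pipeline could express as an FVF: same order of elements, same data types, although gaps between texture coordinate sets are fine. A sketch of what I think a valid declaration looks like (the struct and field names are made up for illustration, not from the docs or my actual code):

```hlsl
// A pixel shader input declaration that should translate to a valid FVF:
// elements appear in FVF order (colour before texture coordinates),
// and the gap at TEXCOORD1 is explicitly allowed by the second rule.
struct PS_INPUT
{
    float4 diffuse : COLOR0;     // D3DFVF_DIFFUSE: a full 4-component colour
    float2 uv0     : TEXCOORD0;  // first texture coordinate set
    float2 uv2     : TEXCOORD2;  // gap at TEXCOORD1 is allowed
};
```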
Since this machine has only SM2.0, I suppose this applies… Then I wonder though, why did it work on my Radeon 9600? Perhaps because the DRIVER understands PS3.0, even though my specific hardware doesn’t? It may not enforce the restriction, and the hardware may not require the restriction in the first place.
At any rate, it pointed out the problem with my code… The shaders were based on some very old SM1.x code, where I used the old trick of packing some per-pixel interpolated vectors into the COLOR0/COLOR1 registers. However, I had declared it as a float3 type. Apparently this confused the runtime, and it could not map it to a proper FVF format. So the fix was simply to change the semantic to TEXCOORDn.
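In simplified form, the change looked something like this (the names are made up for this example, and are not my actual code):

```hlsl
// Before: a float3 vector packed into a colour register, SM1.x-style.
// A COLOR output implies a 4-component colour (D3DFVF_DIFFUSE/SPECULAR),
// so a float3 here cannot be mapped to a valid FVF on SM2.0 hardware.
struct VS_OUTPUT_OLD
{
    float4 pos      : POSITION;
    float2 tex      : TEXCOORD0;
    float3 lightVec : COLOR0;    // problem: float3 in a colour register
};

// After: the same vector moved to a free TEXCOORD register, which
// maps cleanly to an FVF texture coordinate set of three floats.
struct VS_OUTPUT
{
    float4 pos      : POSITION;
    float2 tex      : TEXCOORD0;
    float3 lightVec : TEXCOORD1; // fine: 3-float texture coordinate set
};
```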
After modifying the output struct of the vertexshader (and the input struct for the pixelshader), it finally worked on the Q35. The driver WAS right, technically my code wasn’t 100% correct… Strange though, that I have never seen this problem before. And I cannot simulate this problem on my own machines either. Since my machines have SM3.0 or better, the restriction doesn’t apply, and the debug runtime will not give a warning. I’ve even tried it with the Reference Rasterizer, but again, it is an SM3.0 device, so it will not complain either. In fact, I don’t really understand why the Q35 has this restriction, because in D3D11 mode it worked as well.
But it’s good to have that pointed out, because it may have caused vague problems on other hardware as well.