A few months ago, I discussed downgrading a modern codebase to .NET 4 and Windows XP. I managed to get the code to the point where all functionality worked, aside from web views, given that browsers no longer support XP or Vista. The .NET 4/XP version of the application makes use of a DirectX 9 renderer and DirectShow or VLC for video playback. DirectX 9 you say? Well, I suppose I have to be more specific. So far we have only looked at the software side of things: which components, frameworks, libraries, APIs etc. are supported on Windows XP? But we worked under the assumption that the hardware had the same capabilities as the hardware that ran newer versions of Windows. Under certain circumstances, your software will also be limited by the capabilities of your hardware. This is especially true for DirectX, as it is a low-level hardware abstraction layer.
DirectX 9, the Swiss Army Knife of graphics APIs
As I said, this codebase started life around 2008, at which time shader hardware was standard, even with low-end integrated GPUs. The Aero desktop required a minimum of a DirectX 9 GPU capable of Shader Model 2.0. The codebase made use of shaders through the Direct3D Effect Framework. DirectX 9 is an interesting API as it covers a lot of ground in terms of supported hardware. While at the high end, it supports Shader Model 3.0, with floating-point pixel shading, branching and whatnot, it also supports the first generation of SM1.x hardware, and even the pre-shader hardware that was designed for DirectX 7 and below, where we had a fixed function pipeline. So DirectX 9 allows you to go all the way from relatively early 3D accelerators, such as an nVidia TNT or GeForce, or the original ATi Radeon, all the way up to floating-point shader cards. Crysis is a good example of what you can do with DirectX 9 when pushed to the extreme. The original Crysis had both a DirectX 9 and a DirectX 10 backend. While the DirectX 10 rendering quality was absolutely groundbreaking at the time, its DirectX 9 mode is not even that much of a step backwards visually, and it will run on Windows XP systems as well. Earlier games, like Far Cry and Half-Life 2, would use DirectX 9 to support a wide range of hardware, from fixed function all the way up to SM3.0.
But can our application also support this wide range of hardware? Now, as you may recall from some of my earlier exploits with old GPUs, recent DirectX SDKs include a compiler that will output only SM2.0 code or higher, even if the source code is written for a lower shader version. This also applies to Effects. As far as I can tell, our software has always used this modern compiler, or at least, all shaders assumed SM2.0 or higher. You need to use an older compiler if you want to use Effects on actual SM1.x or fixed function hardware, otherwise the compiler will silently promote effects to SM2.0, so they will only work on SM2.0+ hardware.
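To illustrate the promotion behaviour, here is a minimal, hypothetical Effect (not one of our actual shaders) that targets ps_1_1. A modern fxc will accept the ps_1_1 target but silently emit ps_2_0 bytecode; only the legacy (d3dx9_31-era) compiler produces actual ps_1_1 code that SM1.x hardware accepts:

```hlsl
// Hypothetical minimal Effect with an SM1.x technique.
// A modern fxc silently compiles the 'ps_1_1' target as ps_2_0;
// only the legacy compiler emits real ps_1_1 bytecode.
texture tex0;
sampler samp0 = sampler_state { Texture = <tex0>; };

float4 ps_main(float4 diffuse : COLOR0, float2 uv : TEXCOORD0) : COLOR0
{
    // One texture fetch and a modulate: trivially fits ps_1_1.
    return tex2D(samp0, uv) * diffuse;
}

technique Legacy
{
    pass p0
    {
        PixelShader = compile ps_1_1 ps_main();
    }
}
```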
We build and distribute our application with pre-compiled shaders. We use the fxc.exe shader compiler for that. I had already created some scripts that compile a set of DX9 and a set of DX11 shaders separately, as the two APIs cannot share the same shaders. So I introduced a third set here, which I called ‘dx9_legacy’. I renamed the old fxc.exe to fxc_legacy.exe and added it to the build with a script to compile a new set of shaders from a dx9_legacy source folder and output to a dx9_legacy folder.
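The build step itself is nothing special. As a sketch (the folder layout and file names here are hypothetical, not our actual build scripts), it boils down to running the renamed legacy compiler over the legacy source folder with the fx_2_0 effect profile:

```bat
@echo off
rem Sketch of the dx9_legacy build step; paths are hypothetical.
rem fxc_legacy.exe is a renamed fxc.exe from an old DirectX SDK
rem (October 2006 or earlier), which can still target SM1.x and
rem fixed function. The fx_2_0 profile covers D3D9 Effect files.
if not exist shaders\dx9_legacy\out mkdir shaders\dx9_legacy\out
for %%f in (shaders\dx9_legacy\*.fx) do (
    fxc_legacy.exe /T fx_2_0 /Fo shaders\dx9_legacy\out\%%~nf.fxo %%f
    if errorlevel 1 exit /b 1
)
```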
From there, I had to modify the application to support these alternative Effect files. That was relatively simple. Like before, I had to add the D3DXSHADER_USE_LEGACY_D3DX9_31_DLL flag when loading these legacy Effects. Or in this case, it’s actually the SharpDX equivalent: ShaderFlags.UseLegacyD3DX9_31Dll.
And I had to select the proper set of Effects. That is quite simple, really: if the hardware supports SM2.0 or higher, then you don’t need the legacy shaders, else you do. It gets somewhat more complicated if you want to support every single version of hardware (fixed function, ps1.1, ps1.3 and ps1.4). Then you may want to have a separate set for each variation. But at least in theory, I can run any kind of code on any kind of hardware supported by DirectX 9 now, as the legacy compiler can compile Effects for all possible hardware (it can also do SM2.0+, although the newer compiler will likely generate more efficient code).
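In SharpDX terms, the selection can be sketched roughly like this (the folder names and the overall shape are illustrative, not our actual code; the point is to query the device caps once and branch on the pixel shader version):

```csharp
// Sketch of the effect-set selection, using SharpDX.Direct3D9.
using SharpDX.Direct3D9;

var d3d = new Direct3D();
Capabilities caps = d3d.GetDeviceCaps(0, DeviceType.Hardware);

// ps2.0+ -> normal set; anything lower -> legacy set,
// compiled and loaded with the old (d3dx9_31) Effect compiler.
bool useLegacy = caps.PixelShaderVersion.Major < 2;
string shaderPath = useLegacy ? "shaders/dx9_legacy" : "shaders/dx9";
ShaderFlags flags = useLegacy ? ShaderFlags.UseLegacyD3DX9_31Dll
                              : ShaderFlags.None;
```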
More specifically, I only had to check whether Pixel Shader 2.0 or higher is supported, for two reasons. First, the new shader compiler still supports Vertex Shader 1.1; only pixel shaders are promoted to ps2.0. Second, there is the option of software vertex processing, where DirectX 9 can emulate up to vs3.0 for you. In my case, the vertex shading is relatively simple, and meshes have a low polycount, so software processing is not an issue. Which is good, because that means I do not have to build a fixed function vertex processing pipeline next to the current shader-based one. All I have to do is rewrite the pixel shader code to fixed function routines, and the code should run correctly.
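The software vertex processing fallback is decided at device creation time. A minimal sketch, assuming `d3d`, `caps`, `hwnd` and `presentParams` are already set up as in the usual SharpDX device creation code:

```csharp
// Sketch: fall back to software vertex processing when the GPU
// has no usable hardware vertex shaders. D3D9 then runs vertex
// shaders on the CPU, emulating up to vs_3_0.
using SharpDX.Direct3D9;

CreateFlags vp = caps.VertexShaderVersion.Major >= 1
    ? CreateFlags.HardwareVertexProcessing
    : CreateFlags.SoftwareVertexProcessing;

var device = new Device(d3d, 0, DeviceType.Hardware, hwnd,
                        vp, presentParams);
```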
Or actually, there was a slight snafu. Apparently someone once built a check into the code, to see if the device supports SM2.0 at a minimum. It generates an exception if it does not, which terminates the application. So I decided to modify the check and merely log a warning if this happens. It’s purely theoretical at this point anyway. Hardware that does not support SM2.0+ has been EOL for years now, so it is unlikely that anyone will even try to run the code on such hardware, let alone that they expect it to work. But with our legacy compiler we now actually CAN make it work on that hardware.
A willing test subject
I have just the machine to test this code on: a Packard Bell iGo laptop from 2003 (which appears to be a rebadged NEC Versa M300).
It is powered by a single-core Celeron Northwood (Pentium 4 derivative) at 1.6 GHz. The video chip is an ATi Radeon 340M IGP. The display panel has a resolution of 1024×768. It originally came with 256MB of memory and a 20 GB HDD. These have been upgraded to 512MB (the maximum supported by the chipset) and a 60 GB HDD. It came with Windows XP Home preinstalled, and that installation is still on there.
The Radeon 340M IGP is an interesting contraption. It reports that it supports Vertex Shader 1.1 in hardware, but it is likely that this is emulated on the CPU in the driver. The pixel pipeline is pure DirectX 7-level: it is taken from the original Radeon, codename R100. It supports three textures per pass, and supports a large variety of texture operations. This is exactly what we like to test: Effects with a simple vertex shader, and fixed function pixel processing.
So I started by converting a single Effect to vs1.1 and fixed function. I chose the Effect that is most commonly used, for rendering text and images, among other things. This will be my proof-of-concept. I first developed it on a modern Windows 11 machine, where it appeared to render more or less correctly. That is, the alpha channel wasn’t working as it was supposed to, but text and images basically appeared in the correct place on screen, and in the correct colours, aside from where they should have been alpha blended.
Well, good enough for a proof-of-concept, so I decided to move over to the old laptop. Initially, it opened the application window, which is good. But then it didn’t display anything at all, which is bad. So I looked in the log files, and found that there were some null pointer exceptions regarding font access.
Interesting, as the application had been made robust against missing fonts in general. But as I looked closer, this was in the initialization of some debug overlays, where there was some special-case code. We use the Consolas font for certain debug overlays, as it is a common monospace font. However, apparently it is not THAT common. I hadn’t run into this problem on my other Windows XP machines. But as it turns out, the Consolas font was not shipped with Windows XP. It could be installed by various other software though, such as recent Office applications. That might explain why the font was available on my other Windows XP machines, but not on this one. So as the application initialized the overlays on startup, it tripped over the missing font, and could not recover.
I added the font to the system, and tried again, and indeed: the application now worked, and rendered exactly the same as on the modern system. So the proof-of-concept works. For completeness I also added some checks to the code, so it will not crash on missing fonts in the future.
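Such a check can be as simple as a defensive font lookup with a fallback chain. A sketch (the method name and fallback order are hypothetical, not our actual code):

```csharp
// Sketch: prefer Consolas, fall back to Courier New, and finally
// to whatever generic monospace font the system provides, so a
// missing font can never leave us with a null font object.
using System.Drawing;
using System.Linq;

static FontFamily GetMonospaceFamily()
{
    FontFamily[] families = FontFamily.Families;
    return families.FirstOrDefault(f => f.Name == "Consolas")
        ?? families.FirstOrDefault(f => f.Name == "Courier New")
        ?? FontFamily.GenericMonospace;
}
```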
This proof-of-concept shows that everything is in place, at least from a technical point-of-view, for the support of non-shader hardware. We can compile Effect files for non-SM2.0 hardware, and load them from our application. We can create a DirectX 9 device on the old hardware, and detect when to use the fallback for the legacy compiler and alternative legacy Effect files.
The only thing that remains is to actually write these Effect files. I will probably not convert all of them, and certain ones will not convert to fixed function anyway, as they are too advanced. But I will at least see if I can fix the alphablending and add support for video playback, and perhaps some other low-hanging fruit, so that basic stuff will display correctly.
It’s interesting how far you can stretch a single codebase, in terms of development tools, APIs, hardware and OS support. On this old laptop, the code can work fine, in theory. In practice, you run into other problems. For example, yes, it supports video playback, with a wide range of formats. But the system has very limited capabilities for hardware acceleration, especially for more modern codecs. Also, we are now back to a screen with a 4:3 aspect ratio, where our content has been aimed at 16:9 for many years, and more specifically 1080p, which is a far higher resolution than this system can handle. And we normally have 2GB of memory as the absolute minimum. This system only has 512MB, and that is shared with the IGP as well. You can configure how much of it to reserve for the IGP. By default it is set to 32MB, so you only have 480MB left for Windows and your application. That puts some limits on the fonts, images and videos you want to use, as you may run out of memory simply because your source material is too big.
But, at least in theory, the code can not only run on Windows XP, but it can actually be made to run on the hardware of that era. With the right combination of content and Effect files, you can use this laptop from 2003. So where I normally use a 1080p30 test video with H.264 encoding, in this case I had to transcode it down to 720×480 to get it playing properly. H.264 itself does not seem to bother the system that much, once you install something like the LAV Filters to get support in DirectShow (you need an older version that still works on XP). But decoding frames of 1920×1080 seems to push the system beyond its limits. It does not appear to have enough memory bandwidth and capacity for such resolutions. Back in 2003, HD wasn’t really a thing yet. You were happy to play your SD DVDs in fullscreen.
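For reference, such a transcode is a one-liner with ffmpeg. The exact settings below are a guess at something sensible, not what I actually used; the point is simply a lower resolution and a modest H.264 profile that an old decoder handles comfortably:

```shell
# Sketch: scale the 1080p30 test video down to SD and use the
# H.264 baseline profile, which is cheap to decode.
ffmpeg -i test_1080p30.mp4 -vf scale=720:480 \
       -c:v libx264 -profile:v baseline -level 3.0 \
       -c:a copy test_480p30.mp4
```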
As I converted a few Effects down to fixed function, I concluded that it is highly unlikely that this code had ever been used on non-shader hardware before. Certain implementation details were not compatible with fixed function. For example, certain constants, such as solid colour or opacity (alphachannel) values were multiplied in the pixel shader, while the vertex shader merely copied the vertex colour. With fixed function, there are a few constant registers that you could use, but that would require setting these registers specifically in your Effect, instead of setting shader constants. But since these are constants, it makes much more sense to calculate them in the vertex shader, and just output a single diffuse colour, which can be used directly in the fixed function pipeline. It is simpler and more efficient.
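The resulting pattern looks roughly like this hypothetical Effect (names and details are illustrative): the vertex shader folds the constants into a single diffuse output, and the pass sets fixed function texture stage states instead of a pixel shader:

```hlsl
// Sketch: constants folded into the vertex shader output, so the
// fixed function pixel pipeline only has to do texture * diffuse.
float4x4 worldViewProj;
float4   solidColour;   // includes opacity in .a

texture tex0;

struct VS_OUT
{
    float4 pos : POSITION;
    float4 col : COLOR0;
    float2 uv  : TEXCOORD0;
};

VS_OUT vs_main(float4 pos : POSITION, float2 uv : TEXCOORD0)
{
    VS_OUT o;
    o.pos = mul(pos, worldViewProj);
    o.col = solidColour;   // constant per draw: no per-pixel work
    o.uv  = uv;
    return o;
}

technique FixedFunction
{
    pass p0
    {
        VertexShader = compile vs_1_1 vs_main();
        PixelShader  = NULL;   // fixed function pixel pipeline

        Texture[0]   = <tex0>;
        ColorOp[0]   = Modulate;
        ColorArg1[0] = Texture;
        ColorArg2[0] = Diffuse;
        AlphaOp[0]   = Modulate;
        AlphaArg1[0] = Texture;
        AlphaArg2[0] = Diffuse;
        ColorOp[1]   = Disable;
        AlphaOp[1]   = Disable;
    }
}
```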
In general there’s a simple rule to follow… In order from least to most instances per scene, we have:
- The world
- Objects
- Meshes
- Vertices
- Pixels
So, you want to process everything at the highest possible stage where it is invariant. For example, the lights and camera are generally static for everything in the world for an entire frame (they are ‘global’). So you only need to set them up once. You don’t recalculate them for every object, mesh or vertex, let alone every pixel. So in general you only want to calculate things in shaders that you cannot precalc on the CPU and pass to the shaders as constants efficiently. And you don’t want to calculate things in a pixel shader that you can calculate in a vertex shader. After all, there are normally far fewer vertices in your scene than there are pixels, so the vertex shader will be executed a lot less often than the pixel shader.
Another interesting detail is that the fonts were stored in an 8-bit format. This was effectively an alphablend value. The text colour was set to an Effect constant, so that a single font could be rendered in any colour. However, the format chosen for the texture was L8 (8-bit luminance). In the pixel shader, this value was read, and then copied to the A component, while the RGB components were set to the constant colour. I couldn’t find a way to make this work with fixed function. The fixed function pipeline treats colour and alpha operations as two separate/parallel pipelines. For each stage you can calculate RGB separately from A. However, most operations will not allow you to move data from RGB to A or vice versa. And when you read an L8 texture, the value is copied to RGB, where A will always be 1, as the texture does not contain alpha information.
So the font should be using an A8 format (8-bit alpha) instead. Then A will contain the value, and RGB will read as 1, because the texture does not contain colour information. That is also how the shader should have been designed, semantically: it should have read the A-component of the texture into the A-value of the output pixel, rather than reading the RGB components into the A-value.
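With the font as an A8 texture, the fixed function stage setup falls out naturally. A hypothetical pass (the vertex shader, which outputs the text colour as diffuse, is assumed but not shown; names are illustrative):

```hlsl
// Sketch: A8 font texture in the fixed function pipeline.
// RGB reads as 1 from an A8 texture, so colour comes from the
// (constant) diffuse output of the vertex shader, while alpha
// comes from the font coverage in the texture.
technique Text
{
    pass p0
    {
        VertexShader = compile vs_1_1 vs_main(); // diffuse = text colour
        PixelShader  = NULL;

        Texture[0]   = <fontTexture>;
        ColorOp[0]   = SelectArg2;   // text colour from diffuse
        ColorArg2[0] = Diffuse;
        AlphaOp[0]   = Modulate;     // font coverage * diffuse alpha
        AlphaArg1[0] = Texture;
        AlphaArg2[0] = Diffuse;
        ColorOp[1]   = Disable;
        AlphaOp[1]   = Disable;

        AlphaBlendEnable = true;
        SrcBlend         = SrcAlpha;
        DestBlend        = InvSrcAlpha;
    }
}
```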
So this once again shows that older systems/environments have limitations that can give valuable insights into the weaknesses of your codebase, and can make your codebase more efficient and more robust in ways that you might not normally explore.
I have decided to once again port back the small fixes and modifications to the current codebase. This way I can develop the legacy Effects on the current build of our software, and do not need to rely specifically on the .NET 4 version. I have decided not to actually put the legacy Effect source code and compiler into the main codebase though. Otherwise the legacy shader set would end up in production code. However, the code knows of this set, so if you manually copy it to the correct place on disk, it will automatically make use of it.