When old and new meet: lessons from the past

As you may know, I like to dabble in retro-computing. I have held on to most computers I've used over the years, and I play with them from time to time. An interesting change has happened over time, though: in the early days, a computer was more or less designed as a disposable item. As a result, the software was also disposable. By that I mean that a Commodore 64 is just that: a Commodore 64. It is not backward-compatible with the VIC-20 or other Commodore machines that went before it, nor is it forward-compatible with later machines, such as the C16, the Plus/4 or the Amiga. As a result, the software you write on a Commodore 64 is useful for that machine only (the C128 being the exception to the rule, but only because the C128 was designed to have a C64 mode). So once you upgrade to a new machine, you also have to rewrite your software.

However, companies such as Apple and IBM were mainly selling to businesses, rather than to consumers. And for businesses it was much more important that they could continue to use the software and hardware they had invested in when they bought new machines. So in this market, we would see better backward and forward compatibility of hardware and software. Especially on PCs, things seem to have stabilized quite a bit since the introduction of the 386 with its 32-bit mode, and the Win32 API, first introduced with Windows NT 3.1, but mostly made popular by Windows 95. We still use the Win32 API today, and most code is still 32-bit. As a result, the code you write is no longer 'disposable'. I have code that I wrote in the late 90s that still works today, and in fact, some of it is still in use in my current projects. Of course, the UNIX world has had this mindset for a long time already, at least as far as software goes. Its programming language, C, was specifically designed to make it easy to share code across different platforms. So UNIX itself, and many of its applications, have been around for many years, on various different types of machines. Portability is part of the UNIX culture (or was, rather… Linux did not really adopt that culture).

With Windows NT, Microsoft set out to create a similar culture of portability: applications written in C, against a stable API, which would easily compile on a variety of platforms. Since I came from a culture of 'disposable' software and hardware, I was used to writing code that would only get a few years of use at most, and did not really think about it much. Focusing on computer graphics did not really help either, since it meant that new graphics APIs would come out every few years, and you wanted to make use of the latest features, so you'd throw out your old code and start fresh with the latest and greatest. Likewise, you'd optimize code by hand in assembly, which also meant that every few years new CPUs would come out, with new instructions and new optimization rules, so you would rewrite that code as well. And then there were new programming languages such as Java and C#, where you had to start from scratch again, because your existing C/C++ code would not translate trivially to these languages.

However, over the years, I started to see that there are some constant factors in what I do. Although the escapades into Java and C# were nice at times, C/C++ was a constant factor. Especially C could be used on virtually every platform I have ever used. Some of the C/C++ code I wrote more than 15 years ago is still part of my current projects. Which means that the machines I originally wrote it on can now be considered 'oldskool/retro' as well. I have kept most of these machines around. The first machine I used for Win32 programming was my 486DX2-80. I also used a Pentium Pro 200 at one time, which was also featured in my write-up about the PowerVR PCX2. And I have a Pentium II 350, with a Radeon 8500. And then there is my work with 16-bit MS-DOS machines, and the Amiga. Both have C/C++ compilers available, so in theory they can use 'modern' code. So let's look at this from both sides: using old code on new platforms, and using new code on old platforms, and see what we can learn from this.

Roll your own SDK

SDKs are nice, when they are available… But they will not be available forever. For example, Microsoft removed everything but Direct3D 9 and higher from its SDK a few years ago. And the DirectX SDK itself is deprecated as well, its last update dating from June 2010. The main DirectX headers have now been moved into the Windows SDK, and extra libraries such as D3DX have been removed. Microsoft removed the DirectShow code even earlier; the base classes used by the samples are especially worth keeping around, since a lot of DirectShow code you find online is based on them. So it is wise to store your SDKs in a safe place, if you still want to use them in the future.

Another point is: SDKs are often quite large, and cumbersome to install. Especially when you are targeting older computers, with smaller harddisks and slower hardware. So, for DirectX I have collected all the documentation and the latest header and library files for each major API version, and created my own "DirectX Legacy SDK" for retro-programming. The August 2007 DirectX SDK is the last SDK to contain headers and libraries for all versions of DirectX, as far as I could see, so that is a good starting point. However, some of the libraries may not be compatible with Visual C++ 6.0 anymore (the debug information in dxguid.lib is not compatible with VC++ 6.0, for example), so you may have to dig into older SDKs for those. I then added the documentation from earlier SDKs (the newer SDKs do not document the older things anymore, so you'll need the 8.1 SDK for documentation on DX8.1, a DX7 SDK for documentation on DX7, etc.). And I have now also added some of the newer headers and libraries from the DirectX June 2010 SDK, so it is a complete companion package to the current Windows SDK. When you want to do this, you have to be a bit careful, since some headers in the June 2010 SDK require a recent version of the Windows headers, which cannot be used with older versions of Visual C++. I tried to make sure that all the header files and libraries are compatible with Visual C++ 6.0, so that it is 'legacy' both in the sense that it supports all old versions of DirectX, and in the sense that it supports at least Visual C++ 6.0, which can be used on all versions of Windows with DirectX support (back to Windows 95).

For several other APIs and libraries that I use, such as QuickTime and FMod, I have done the same: I have filtered out the things I needed (and in the case of QuickTime I have also made a few minor modifications to make it compatible with the latest Windows SDK), and I keep this safe in a Mercurial repository. I can quickly and easily install these files on any PC, and get my code to compile on there, without having to hunt down and install all sorts of SDKs and other third-party things. I have even extracted the relevant files from the Windows 8.1 SDK, and use that on Windows XP (you cannot install the SDK on an XP or Vista system, but the SDK is compatible with Visual Studio 2010, so you can use the headers and libraries on an XP system and build XP-compatible binaries, if you configure the include and library paths manually).

So basically: try to archive any SDKs that you use, and take note of which OSes they run on, and which versions of Visual Studio and the Windows SDK they are compatible with. Any time a newer SDK drops support for certain features or APIs, or for older versions of Visual Studio or Windows, you may want to take note of that, and keep the earlier SDK around as well, possibly merging the two. And it makes a lot of sense to put the relevant files of the SDK into a code repository (at least the headers and libraries, optionally docs and even sample code). Having version control will help you if you are merging from two or more SDKs, and/or modifying some headers manually (such as the QuickTime example), and allows you to go back to earlier files if something accidentally goes wrong and you run into some kind of compatibility problem. It's also convenient if you are developing on multiple machines or with multiple developers, because setting up the proper build environment is as simple as cloning the repository on the machine, and adding the proper include and library paths to the configuration. It keeps things nice and transparent, with everything in one place, set up the same way, rather than having your SDKs scattered through various subdirectories in Program Files or whatnot.

Working code is not bug-free code

Both my OpenGL and Direct3D code have quite a long history. The first versions started somewhere in the late 1990s. Although there have been quite significant updates/rewrites, some parts of the code have been in there since the beginning. My current Direct3D codebase supports Direct3D 9, 10 and 11. The Direct3D 9 code can be traced back to Direct3D 8.1. I started from scratch there, since my Direct3D 7 code was not all that mature anyway, and the API had changed quite a bit. Direct3D 9 was very similar to 8.1, so I could upgrade my code to the new API with mostly just search & replace. In those days, I mainly targeted fixedfunction hardware, and only started to experiment with shaders later. Initially the shaders were written in assembly language; later, HLSL was introduced.

As a result, my Direct3D 9 code has been used on a variety of hardware, from fixedfunction to SM3.0. When I originally started with D3D10 support, I thought it was best to start from scratch, since the APIs are so different. But once I had a reasonably complete D3D10 engine, I figured that the differences could be overcome with some #ifdef statements and some helper functions, as long as I stuck to a shader-only engine. So I ended up merging the D3D9 code back in (D3D9 was still very important at that time, because of Windows XP).
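
To give a rough idea of what I mean, a sketch of the approach could look something like this (illustrative only; the USE_D3D10 define and the helper function are made up for this example, not taken from my actual engine):

    #ifdef USE_D3D10
    #include <d3d10.h>
    typedef ID3D10Device            RenderDevice;
    typedef ID3D10Buffer            VertexBufferType;
    #else
    #include <d3d9.h>
    typedef IDirect3DDevice9        RenderDevice;
    typedef IDirect3DVertexBuffer9  VertexBufferType;
    #endif

    // A helper function hides the API difference from the rest of the engine.
    void SetVertexBuffer(RenderDevice* device, VertexBufferType* vb, UINT stride)
    {
    #ifdef USE_D3D10
        UINT offset = 0;
        device->IASetVertexBuffers(0, 1, &vb, &stride, &offset);
    #else
        device->SetStreamSource(0, vb, 0, stride);
    #endif
    }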

And it turned out to be a good thing. My engine runs in D3D9Ex mode on Vista or higher, which means it will use the new-and-improved resource management of the D3D10 drivers. No more separate managed pool, and no need to reset lost devices. I periodically run it on an old XP machine, to verify that the resource management still works on regular D3D9. And indeed, that is where I caught two minor bugs.

I then decided to look at pre-DX9 graphics cards. As I said, my D3D9 code was originally used on fixedfunction hardware, and I later merged that code into my D3D10/11 engine. This posed the following dilemma: D3D10+ is fully shader-based. Do I limit my D3D9 code to be shader-only as well, or do I somehow maintain the fixedfunction code as a D3D9-only feature?

I decided that it would not be too invasive to leave the fixedfunction code in there, so I did. It would just be in there, dormant, since the actual code I was developing had to be shader-based anyway, in order to be compatible with D3D10/11. Since that code was still in there, I decided to grab an old laptop with a Radeon IGP340M, a DX7-class GPU (no pixel shaders). There were a handful of places where the code assumed that a shader would be loaded, so I had to fix up the code there to handle null pointers gracefully. And indeed, now I could run my code on that machine. I could either use fixedfunction vertex processing (based on the old FVF flags), if the vertex shaders were set to null, or use CPU-emulated vertex shaders (the engine has always had an automatic fallback to software vertex processing when you try to use shaders; I actually blogged about a bug related to that. The message of that blog was similar: testing on diverse hardware allows you to detect and fix some peculiar bugs). And if the pixel shaders are set to null, the fixedfunction pixel pipeline is enabled as well. This meant that I could at least see the contours of 3D objects, because the vertex processing was done correctly, and the triangles were rasterized with the default pixel processing settings. I then added some texture stage states to my materials, to configure the pixel processing, as simplified replacements for the pixel shaders I was using. Et voila, the application worked more or less acceptably on this old hardware.
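
In outline, the fallback logic boils down to something like this (a simplified sketch, not my actual engine code; the material parameters are just for illustration):

    #include <d3d9.h>

    // If no shaders are set, fall back to the fixedfunction pipeline:
    // FVF-based vertex processing and texture stage states for the pixels.
    void ApplyMaterial(IDirect3DDevice9* device,
                       IDirect3DVertexShader9* vs,
                       IDirect3DPixelShader9* ps,
                       DWORD fvf,
                       IDirect3DTexture9* texture)
    {
        if (vs != NULL)
        {
            device->SetVertexShader(vs);
        }
        else
        {
            device->SetVertexShader(NULL);
            device->SetFVF(fvf); // fixedfunction vertex processing
        }

        if (ps != NULL)
        {
            device->SetPixelShader(ps);
        }
        else
        {
            // Simplified replacement for the pixel shader:
            // just modulate the texture with the diffuse color.
            device->SetPixelShader(NULL);
            device->SetTexture(0, texture);
            device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
            device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
            device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
        }
    }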

Then it was on to another interesting configuration: my Pentium II 350 with a Radeon 8500. The first thing the Pentium II did was to remind me about SSE2 support… I had moved my codebase from D3DXMath to DirectXMath a while ago, as I mentioned earlier. I also mentioned that I was thinking of disabling SSE2 for the x86 build of my engine. Well, apparently that had slipped my mind, but the PII reminded me again: since it does not support SSE2, the code crashed. So the first thing I had to do was add the _XM_NO_INTRINSICS_ flag in the right place.
Take note that you also have to recompile DirectXTK with that flag, since it uses DirectXMath as well, and you will not be able to link SSE2-based DirectXMath code to vanilla x86 DirectXMath code, since the compiler will not see the types as equivalent.
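
For reference, this is roughly what it comes down to (a minimal sketch; the function below is made up for illustration). The define has to be set before DirectXMath.h is included, or preferably project-wide in the preprocessor settings, so that DirectXTK is built with it as well:

    // Force DirectXMath into its portable, scalar code path,
    // so no SSE/SSE2 intrinsics end up in the generated code.
    #define _XM_NO_INTRINSICS_
    #include <DirectXMath.h>

    using namespace DirectX;

    XMVECTOR TransformPosition(const XMFLOAT3& pos, const XMMATRIX& world)
    {
        // With _XM_NO_INTRINSICS_ these compile to plain FPU code,
        // which a Pentium II can execute just fine.
        XMVECTOR v = XMLoadFloat3(&pos);
        return XMVector3Transform(v, world);
    }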

Another SSE2-related issue was that the June 2010 DirectX runtime cannot be installed on a non-SSE2 machine. This is because the XAudio component requires SSE2, and once the installer reaches that component, it will run into an error and roll back. However, my engine does not use that component. In fact, I only need D3DX9_43.dll and D3DCompiler_43.dll. Simply copying these files to the \Windows\system32 directory, or the application directory, is enough to make my code run on a non-SSE2 machine.

Now that I had made the code compatible with the Pentium II CPU, it was time to see if it actually worked. And well, it did, but it didn’t, if you know what I mean… That is, the code did what it was supposed to do, but that is not what I expected it to do. Namely, the application ran, and compiled its shaders, but then it appeared to use the same fixedfunction fallback-path as on the Radeon IGP340M. Now, the Radeon 8500 is from a rather peculiar era of programmable GPUs. It is a DirectX 8.1 card, with pixelshader 1.4 functionality. DirectX 9 supports PS1.4 (unlike OpenGL, where there is no standardized API or extension for any shader hardware before SM2.0), so what exactly is happening here?

Well, DirectX 10, that's what's happening! Microsoft updated the HLSL syntax for DirectX 10, in order to support the new constant buffers, and also made some other tweaks, such as new names for the input and output semantics. Since the first DirectX 10-capable SDK (February 2007), DirectX 9 also defaults to this new compiler. The advantage is that you can write shaders that compile for both APIs. The fine print, however, tells us that this compiler will only compile for PS2.0 or higher (VS1.1 is still supported, though). So what happened in my code? My engine detected that the hardware supports PS1.4, and asked the compiler to compile a PS1.4 shader. The compiler does so, but in the process it silently promotes the shader to PS2.0. So it returns a properly compiled shader, which however is not compatible with my hardware. When I later try to use this shader, the code ignores it, because it is not compatible with the hardware, and so I get the fallback to fixedfunction. Now isn't that interesting? Everything works as it should, the shader gets compiled, and the code runs without error. You just don't get to use the shader you compiled.

So, back to the fine print then. I noticed the following flag: D3DXSHADER_USE_LEGACY_D3DX9_31_DLL. Interesting! This makes the compile function fall back to the compiler of the October 2006 SDK, which is the last SDK before DirectX 10. In fact, that SDK already featured a preview version of D3D10, and also included a beta version of the new compiler. So with this flag you get access to a compiler that supports PS1.x as well. I added a few lines of code to make the engine fall back to this compiler automatically for hardware below PS2.0. Then I ran into the problem that my HLSL shaders had been converted to the new D3D10 syntax some years ago, so I did some quick modifications to get them back to the old syntax.
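
The resulting logic looks roughly like this (a simplified sketch, not my actual engine code; the helper function and its parameters are made up for illustration):

    #include <d3dx9.h>

    // Compile a pixel shader, automatically falling back to the legacy
    // D3DX9_31 compiler on hardware that does not support PS2.0.
    HRESULT CompilePixelShader(IDirect3DDevice9* device,
                               const char* source, UINT sourceLen,
                               const char* entryPoint,
                               ID3DXBuffer** shaderCode)
    {
        D3DCAPS9 caps;
        device->GetDeviceCaps(&caps);

        DWORD flags = 0;
        const char* profile = "ps_2_0";

        if (caps.PixelShaderVersion < D3DPS_VERSION(2, 0))
        {
            // Pre-SM2.0 hardware: the October 2006 compiler still knows
            // how to generate ps_1_x bytecode.
            flags |= D3DXSHADER_USE_LEGACY_D3DX9_31_DLL;
            profile = "ps_1_4"; // or ps_1_1, depending on the caps
        }

        ID3DXBuffer* errors = NULL;
        HRESULT hr = D3DXCompileShader(source, sourceLen, NULL, NULL,
                                       entryPoint, profile, flags,
                                       shaderCode, &errors, NULL);
        if (errors != NULL)
            errors->Release();
        return hr;
    }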

And there we go! At this point my D3D9 code once again runs on everything it can run on: DX7-class non-shader hardware, SM1.x hardware, and of course the SM2.0+ hardware it normally runs on. Likewise, it once again runs on older CPUs as well, now that it no longer needs extensions such as SSE2. And in the end, it really was only a few lines of code that I had to change. That is how I tend to look at things anyway: I don't like to think in terms of "this machine/OS is too old to bother supporting", but rather: "Is it theoretically possible to run this code on such a machine/OS, and if so, how much effort is required?" In this case, it really was not a lot of effort at all, and the experience taught me some of the more intricate details of how the DirectX SDK and D3DX evolved over the years.

So basically: never assume that your code is bug-free, just because it works. Slightly different machines and/or different use-cases may trigger bugs that you don't test for in day-to-day use and development. Tinkering with your code on different machines, especially machines that are technically too old for your software, can be fun, and is a great way to find hidden bugs and glitches.

Less is more

I have already covered this issue somewhat in the section on SDKs: newer code is not always backward-compatible with older tools. C++ in particular can be a problem sometimes. Older platforms may not have a C++ compiler at all, or the C++ support may be limited, which means you cannot use things like the STL. This made me rethink some of my code: does it really need to be C++ anyway? I decided that most of my GLUX project might as well be regular C, since it is procedural in nature. So I converted the code, which only required a few minor changes. Now it should be compatible with more compilers across more platforms. Which is nice, since I want to use some of the math routines on the Amiga as well.

I noticed that in my BHM project, I moved to using the STL a while ago, by introducing the BHMContainer class. However, the BHMFile class was preserved as well, so I could still use that on platforms with no STL support, such as the Amiga, or DOS. This gave me something to think about… In C you do not have standard container classes, so back in the day I had written my own linked list, vector, hashtable and such. In some of my C++ code I still use these container classes, partly because I am very familiar with their performance characteristics. But another advantage is that these container classes can be used with very limited C++ compilers as well, and I could even strip them back down to their original C-only form, to get even better compatibility. So there is something to be said for using your own code instead of the STL.
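
To illustrate what I mean, such a container can be as simple as this (a stripped-down sketch in plain C; the names are made up for this example, not the actual classes from my codebase):

    #include <stdlib.h>

    /* A C-only linked list that even very limited compilers can handle. */
    typedef struct ListNode
    {
        void* data;
        struct ListNode* next;
    } ListNode;

    typedef struct LinkedList
    {
        ListNode* head;
    } LinkedList;

    int List_AddHead(LinkedList* list, void* data)
    {
        ListNode* node = (ListNode*)malloc(sizeof(ListNode));
        if (node == NULL)
            return 0;

        node->data = data;
        node->next = list->head;
        list->head = node;
        return 1;
    }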

Another thing is the use of the standardized datatypes found in stdint.h/stdbool.h/stddef.h and related files. The most useful here are the fixed-width datatypes. C/C++ have always had the problem that the size of each datatype is defined by the architecture/platform, rather than being an absolute size. Although integers are 32-bit on most contemporary platforms, you may run into 16-bit integers on older platforms. Especially for something like a file format, the right way to define your datastructures is to use typedef'ed types, and make sure that they are the same size on all platforms. This has always been good practice anyway, but since the introduction of the fixed-width datatypes (int8_t, int16_t, int32_t, etc.), you no longer have to do the typedefs yourself.

Old compilers may not have these headers, or may only have a limited implementation of them. But it is not a lot of work to just do the typedefs manually, and then you can use all the 'modern' code that makes use of them. Since my BHM code did not yet use proper typedefs, I have updated it. That is actually a very important update: the datastructures are finally guaranteed to be exactly the same, no matter what compiler or architecture you use. This means that I can now use BHM for my MS-DOS retro-projects as well, for example.
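
A minimal sketch of what that looks like (the HAVE_STDINT_H macro and the structure below are just for illustration, not the actual BHM header):

    #ifdef HAVE_STDINT_H
    #include <stdint.h>
    #else
    /* Old compilers: provide the fixed-width types ourselves.
       These typedefs assume a 16-bit DOS compiler or a 32-bit compiler,
       where short is 16-bit and long is 32-bit. */
    typedef signed char    int8_t;
    typedef unsigned char  uint8_t;
    typedef short          int16_t;
    typedef unsigned short uint16_t;
    typedef long           int32_t;
    typedef unsigned long  uint32_t;
    #endif

    /* A file-format structure now has the same size everywhere
       (assuming the compiler does not insert padding here). */
    typedef struct
    {
        uint32_t magic;
        uint16_t version;
        uint16_t chunkCount;
    } FileHeader;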

Lastly, neither my MS-DOS nor my Amiga C++ compiler supports namespaces. So that was another thing to think about. In some cases, the old C-style 'namespace' technique may be better: you just prefix your symbol names with some kind of abbreviation. For example, in my CPUInfo project I prefix some things with 'CI_'. This should avoid name clashes in most cases, and is compatible with virtually all compilers.
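
For example (the function names here are made up for illustration):

    /* C-style 'namespace': the CI_ prefix keeps these symbols
       out of the way of other libraries. */
    typedef struct
    {
        char vendorString[13];
        int  family;
        int  model;
    } CI_CPUInfo;

    int  CI_DetectCPU(CI_CPUInfo* info);
    void CI_PrintCPUInfo(const CI_CPUInfo* info);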

So basically: if you want to go for maximum compatibility and portability, less is more, especially with C++. If you are writing code that is mostly procedural, you might as well use regular C instead of C++. And if you do want to use C++, you can re-use your code on more platforms and compilers if you stick to a relatively basic subset of C++ functionality (e.g. not using namespaces, not using the STL, and only using relatively basic classes and templates).

Conclusion

Apparently there are some things we can learn from using old code on new systems, and new code on old systems. It is often not too hard to get the two to meet. I hope the experiences I have written down here have given you some new insights into how to write and maintain code that is not 'disposable', but is as flexible, portable and compatible as can be. In general, this idea of 'digital longevity' still seems to be quite new. With the disappearance of Windows XP, a lot of people are only now starting to face the problem that the programs they are using may not work on new systems. And they probably wish they had archived their code and applications at an earlier stage, and mapped out the compatibilities and incompatibilities with machines, OSes and tools better.
