Now that we are nearing the release of Windows 8, I think it is a good time to talk about the changes in the world of technology in general, and how you deal with them. As a software developer, it is important to look ahead. And quite often it is also a good idea to look back.
I learned that lesson long ago, but I know from experience that not everyone learns it quite so easily, or at all. At one time, I was part of a larger team of developers working on a software suite whose customers mainly ran Windows 2000 or XP. When the 64-bit version of XP arrived, nobody seemed to care. I did, however, so I installed a copy at home and started porting my code to 64-bit. (At the time I was playing around with some ideas for a demo, which was never finished, but which I released as a one-part demo anyway. As you can see, it dates from April 2006 and already includes a fully working 64-bit version, complete with optimizations that make it run faster than the 32-bit version.)
The same thing happened when Vista arrived: people were all in the naysayer camp. Granted, Vista wasn’t the most spectacular success Microsoft has ever launched, but that doesn’t mean you should ignore it completely as a developer. Computers would undoubtedly be sold with Vista preinstalled, and some of your users would buy such a computer and want to use your software on it. So you should at least install Vista on one or two test machines, see whether your software works, and perhaps fix some issues while you’re at it. Again, I installed Vista at home (the x64 version, of course), and I developed a bit with it (not that I had much choice, since I wanted to play around with DirectX 10 as well).
But no, the genius at the helm decided that our policy would be not to support Vista. We would simply ignore any problems and tell our end-users to order systems with Windows XP only. I did not agree with this policy, obviously, since it should have been crystal clear that at least some of the ideas Microsoft introduced in Vista would be part of any future Windows OS as well. The most obvious example is user accounts with limited rights, as opposed to running as administrator all the time. Technically this was already possible in earlier versions of Windows, but it would have required fixes to our code, so the problem had been ignored, and people were told to always run as administrator when using our software.
Windows 7 came along, proving that Vista’s ideas were here to stay. Although Windows 7 was received far more positively than Vista, on a technical level it didn’t change much for us. Our software still suffered from all the same issues it had when trying to run on Vista. We had just wasted three more years ignoring them.
And then came the time when Windows XP was no longer an option for new PCs. Now what? Suddenly we could no longer hold on to our XP-only policy; our software HAD to work on Windows 7. That is one thing… but now Mr. Genius went overboard. Running on Windows 7 alone was not good enough. We could no longer tell people that they had to run as administrator: thanks to Vista/Windows 7 and UAC, people now understood that running as administrator was *WRONG*. So not only did we have to solve the regular glitches on Windows 7… we also had to do a complete audit of the code for registry issues, writing files to Program Files, and all the other vices that programmers who never bother to read the MSDN API references indulge in.
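The fix for most of those vices follows the same pattern: per-user data moves out of Program Files (and out of HKLM in the registry) into a per-user location such as %APPDATA%. A minimal sketch in C; the vendor and application names are placeholders, and the fallback is only there so the sketch also runs outside Windows:

```c
#include <stdio.h>
#include <stdlib.h>

/* Writing settings next to the EXE in Program Files fails for a
 * standard user under UAC, as do writes to HKLM in the registry.
 * Per-user data belongs under %APPDATA% instead.
 * "MyVendor" and "MyApp" are placeholder names for this sketch.
 * Returns the length of the composed path, or a negative value on
 * error (snprintf semantics). */
int per_user_settings_path(char *buf, size_t size)
{
    const char *base = getenv("APPDATA");   /* set by Windows for each user */
    if (base == NULL)
        base = ".";                         /* non-Windows fallback, sketch only */
    return snprintf(buf, size, "%s/MyVendor/MyApp/settings.ini", base);
}
```

The same idea applies to the registry: per-user settings go under HKEY_CURRENT_USER, which a limited account can write to, rather than HKEY_LOCAL_MACHINE.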
But wait, it gets worse! Our codebase was ‘frozen in time’ at Visual Studio 2005. And I mean that literally: the original release. We were not allowed to install the service pack. I am not sure what the exact reason was, but I vaguely recall something about not being able to do partial application updates, because the 2005 CRT is not fully binary compatible with the 2005 SP1 CRT. Granted, that is a half-decent reason to at least develop updates with vanilla 2005… but we did a full major release once a year, and 2005 SP1 certainly should have been pushed out with one of those releases.
The problem was that, in order to develop for Windows 7, our development machines had to run Windows 7 as well, or so it was decided. And Visual Studio 2005 doesn’t work properly under Windows 7. The irony of it all is that SP1 and a few minor updates on top of it make it work quite well there. But again Mr. Genius wanted to go overboard: we would move straight from VS2005 to VS2010, so that we were in sync with the development tools for the next-generation software suite. (Initially that was part of the reason why the classic suite was held back in terms of support for new OSes. However, it should have been obvious that the next-generation suite was going to be delayed by years, so the classic suite would have to be maintained well into the Windows 7 era.) Together with the overly strict policy of having warnings-as-errors on at all times, this meant that a LOT of code had to be fixed.
Since Visual Studio 2005, the compiler issues warnings for CRT-related security issues, and the CRT offers special *_s() functions as secure alternatives to the regular ANSI C functions. Obviously our code predated these secure functions, so we got tons of security warnings. And since warnings-as-errors was enforced, all of them HAD to be fixed. And then I’m not even getting into the fact that some parts of the code were written in Visual Basic 6, which wasn’t even part of Visual Studio 2005, but had been snuck into the build system through the back door… Heck, Mr. Genius even toyed with the idea of a native 64-bit version of the suite for a while… but luckily that idea was abandoned.
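To give an idea of the churn this caused: every classic strcpy()/strcat()-style call triggers warning C4996 under VS2005, and with warnings-as-errors every single call site has to be rewritten. An illustrative sketch; MSVC suggests the *_s() variants, but the rewrite below uses the portable snprintf() so it stays compilable anywhere:

```c
#include <stdio.h>
#include <string.h>

/* The kind of call VS2005 flags with warning C4996
 * ("This function or variable may be unsafe"): */
void legacy_greet(char *dst, const char *name)
{
    strcpy(dst, "Hello, ");   /* no bounds check: C4996 under MSVC */
    strcat(dst, name);        /* same warning */
}

/* Rewritten with an explicit buffer size.  MSVC's suggested fix is
 * strcpy_s()/strcat_s(); snprintf() is the portable equivalent used
 * here, as it also bounds the write to the destination buffer. */
void secure_greet(char *dst, size_t dst_size, const char *name)
{
    snprintf(dst, dst_size, "Hello, %s", name);
}
```

Each rewrite is trivial on its own; the pain is that the destination size has to be threaded through every function signature that previously just took a bare char pointer.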
So not only did we still have to fix ALL those issues we had been ignoring for years (along with creating some new issues for ourselves for no apparent reason)… they now had to be fixed ASAP, because the old software didn’t work on Windows 7, and the new software was not ready in time. The following wisdom went through my mind a lot in those days:
Working code is not bug-free code
As it turned out, a lot of the code just ‘happened’ to work, because some parts of the build process were accidentally done in an order in which they worked. For example, there were circular references between some COM objects. They only built because an old TLB of one of the objects was still lingering on the build server: one object was built against an outdated TLB of the other, which ‘solved’ the circular dependency, and once that object was built, the other could be built as well. The problem is that you could never build the whole thing from scratch that way. Yet that is exactly what VS2010 and the new build server required.
So let this serve as a warning to you all: don’t just ignore new OSes or other technology just because it is not directly relevant in your world just yet. You have to at least casually acquaint yourself with them, and get a rough idea of where the problem areas lie. Be ready when the time comes to support this new technology.
Well, I think that story covers the importance of looking forward. But be careful not to overdo it. In some cases, such as a new OS that enforces a stricter security policy, it is obvious that you have to adapt. But you should not blindly use the latest version of every library: newer versions may no longer support older OSes or hardware, and they may put an extra burden on the end-user, who has to install the new version of the library.
So you have to be careful in deciding which version of a library to use. Some questions to ask yourself:
- Does the new version offer functionality that is important to you?
- Does the new version fix critical bugs or security issues?
- Does the new version perform better or worse than the previous version? And is this performance difference important to your end-users?
- Does the new version add support for new OSes/hardware?
- Does the new version drop support for old OSes/hardware?
- Is the size of the redistributable going to be an issue for deployment by your end-users? (Think of, for example, software deployed at remote/offshore locations, where only slow satellite internet connections are available and every MB counts.)
It pays off to keep a few old test boxes around. Boxes with lower specs and older OSes, on which your software once ran, and on which you still want it to run. If you just run your software on it from time to time, you may discover small bugs that silently crept in, which didn’t show up on newer systems for some reason.
I can give a few examples from my own experience. Recently I hooked up my old Athlon 1400 system again, with a Radeon 8500 card. I wanted to try my code on it, just for kicks. I found out, though, that the latest DirectX runtime does not install on that system. The CPU does not support SSE, which is a requirement for the XACT audio framework included in newer runtimes (which I don’t use, but which crashes the installer anyway). It appears that the runtime from November 2007 was the last one without this SSE requirement. So, in this case, I had silently locked non-SSE CPUs out of my application simply by upgrading to a newer DirectX SDK. I didn’t have much of a choice, since my engine also supports DirectX 10 and 11, so a newer version of the SDK is required. I don’t recall reading about this SSE requirement, though. Sometimes these things sneak up on you.
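If you want to fail gracefully rather than be surprised, you can detect this condition up front: on x86, SSE support is reported in CPUID leaf 1, bit 25 of EDX. A hedged sketch using GCC/Clang’s <cpuid.h> (MSVC exposes the same leaf through its __cpuid() intrinsic instead):

```c
/* The post-November-2007 DirectX runtimes require SSE even if your own
 * code never uses it.  On x86, the same condition can be checked up
 * front via CPUID: leaf 1 reports SSE support in bit 25 of EDX. */
#if defined(__i386__) || defined(__x86_64__)
#include <cpuid.h>

int cpu_has_sse(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;                /* CPUID leaf 1 not available */
    return (edx >> 25) & 1;      /* EDX bit 25 = SSE */
}
#else
int cpu_has_sse(void)
{
    return 0;                    /* not an x86 CPU at all */
}
#endif
```

With a check like this, an application (or installer) can print a clear message on a pre-SSE machine instead of crashing partway through.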
Another thing that I *did* read about, but the ramifications of which did not fully dawn on me until I fired up the Radeon 8500, is that Microsoft moved to a new shader compiler with Direct3D 10. This compiler is also the default for Direct3D 9 for newer runtimes. As a result, you can compile both legacy and D3D10+ style shaders for use in D3D9. I make use of this by writing my shaders in D3D10 syntax, and re-using the same shaders for all 3 APIs.
The small print, however, warns you that legacy ps1.x shaders will be upgraded to ps2.0. So far this had not been a problem, since I had only tested on SM2.0 and newer hardware. The Radeon 8500, however, only supports ps1.4. As a result, my code compiled the shader without errors, but even though I specified the “ps_1_4” profile, the compiled shader reported version 2.0. This led to the interesting situation that my code did not report any compile errors, yet I didn’t see any pixelshaders in use. Instead it fell back to the fixed-function pipeline. The vertex shaders worked correctly, though.
Now, since my code was close to working, I decided to see if I could create a workaround. My code is not strictly required to work on DX8-class hardware, but it’s nice to do so just because you can, right? Luckily, Microsoft included a flag to invoke an old compiler DLL from 2006, which still has full support for ps1.x shaders. Sadly, you can have your cake, but you cannot eat it: the old compiler does not understand the new D3D10-style shader code, so you will need alternative shader code (but only for the pixelshaders, as the new compiler compiles to vs1.1 just fine).
Anyway, with just a few hours of playing, I got my code to work on Direct3D 8 hardware again, just as that same codebase started out on Direct3D 8 more than 10 years ago. Ironically enough, my code already worked on Direct3D 7 hardware, since the fixed-function code does not rely on any compiler, and vertex shaders are simply emulated in software, where you get full vs3.0 functionality.
As far as looking forward is concerned, Microsoft will give developers plenty of opportunity. They offer preview versions of their upcoming OSes, which can freely be downloaded by everyone. So naturally you have already downloaded and installed a preview version of Windows 8, and tested your software, haven’t you?