A little knowledge can be a dangerous thing

That is something I found out in a ‘discussion’ (I don’t think it really deserves that name) earlier this week. Some guy was talking about the fact that it took a long time for 32-bit OSes to catch on after the introduction of the 80386. This part is true: simple fact-checking will reveal that the 80386 was introduced in 1985, while the first 32-bit version of Windows, the most popular x86 OS, only arrived in 1993 with NT 3.1. And if you want to get more detailed, another reasonably popular 32-bit OS was OS/2 2.0, which was introduced not much earlier, namely in 1992. There were some more obscure early 32-bit OSes, such as Microsoft’s Xenix, which was already available in a 32-bit version in 1987, but that’s about it.

Now, where this guy went wrong was in his claim that Microsoft deliberately frustrated the 32-bit market by keeping MS-DOS 16-bit only. He then started a whole debate on how OS developers deliberately limit the capabilities of hardware so they can enable it as a paid ‘feature’ later, claiming that even iOS and Android still do this today, and that the hardware would be much more powerful with an unlimited OS. This is the kind of thing you either want to believe, or that tastes too much like a conspiracy theory to take seriously. It all depends on how you roll.

In my case, I have actually experienced the 16-bit DOS era and the rather painful uptake of 32-bit software. So I know for a fact that, at least in this instance, his claims are completely false. A major factor the guy has been ignoring in his argument is price. Computers were a LOT more expensive back in the day, and progress was not as fast. Today, Intel launches a new line of CPUs every year (tick-tock), and each new line is immediately rolled out as a full vertical product range, replacing the older products.

But in the 80s and 90s, things were different. A new line of CPUs was only introduced when a fully new microarchitecture was developed, which generally took 3-4 years, for example:

  • 8086/8088: 1978
  • 80186/80286: 1982
  • 80386: 1985
  • 80486: 1989
  • Pentium: 1993
  • Pentium Pro: 1995

In between, ‘new products’ were launched from time to time, which were generally the same architecture at a higher clockspeed, thanks to an improved manufacturing process and/or some tweaks to the design. Since the latest generation of CPUs was very expensive, and there were generally only 2 or 3 clockspeed variations of it, the mainstream and low-end markets made do with the earlier generations. So, although the 8088 was introduced back in the late 70s, PCs with 9.54 MHz 8088 CPUs were still common mainstream computers around 1987/1988, and were actually still sold at prices exceeding those of the Commodore Amiga 500 or Atari ST, which technically had more powerful 68000 processors and more advanced graphics and audio. So even these PCs were already quite expensive for consumers.

For me personally, my first 32-bit PC was a 386SX-16 (which is pretty much as low-end as you can get as far as 32-bit PCs go). I got it around 1990 or 1991. It was still quite an expensive machine at the time, and among my friends, I was one of the first to have a 32-bit machine (not to mention the Sound Blaster Pro I had in it, which cost me as much as my entire Amiga did earlier. PCs really were horribly expensive). Many people still used an 8088 machine, or a 286. My machine came with 1 MB of memory, because memory was still incredibly expensive around that time. This meant that even though the CPU was capable of running 32-bit software, it was still very limited.

I later expanded the memory to 5 MB, and then I tried to install Windows NT on it. It worked, but not much more than that. It ran like a dog, because the computer was quite underpowered. Besides, most software was still DOS-based, so you would run it inside the NTVDM anyway (which was neither fast nor very compatible). I also tried OS/2 Warp on it, which left roughly the same impression.

Around 1993 or 1994 I got my first 486, a DX2-66. I started out with 4 MB of memory, which again was barely enough to run some early 32-bit games such as Doom, and Windows NT was still not much of a success. Memory was still quite expensive. Around the time that Windows 95 came out though, I also upgraded to 8 MB, and that was the first time I actually had a 32-bit system that was powerful enough to run a full 32-bit OS. Windows 95 was also the first 32-bit PC OS (or well… 32-bit-ish) that became popular in the mainstream.

In those days, we also had some early linux machines at university. They were 486SX-25 machines with 8 MB of memory, if I recall correctly. They were also quite underpowered running linux with XFree86 and a lightweight window manager. So it’s not like Microsoft/IBM did such a horrible job with OS/2 and Windows NT either. A full 32-bit OS with proper memory protection, virtual address space, virtual memory and multitasking just takes quite a bit of overhead.

So anyway, blaming it all on Microsoft is a bit strange. For the most part, the problem was that 32-bit PCs were ridiculously expensive in the early years, not to mention underpowered. It is perfectly natural that there were no regular 32-bit office/home OSes before the early 90s, because the machines were simply too expensive. 386 and 486 machines were initially sold mainly as servers, where mostly UNIX-like OSes would be used, and Microsoft did offer Xenix at an early stage.

You can still find old PC Magazines from those days on Google Books. Take this one from December 1992 for example, and see the high price of a 486DX2-66. And in 1990, you’d pay similar prices for a 386. Note also that these systems start with 1 MB of memory, and how quickly prices go up when you want a system with 4 MB. Memory made up a significant chunk of the total price tag of a PC back then. A PC with 8 MB or more, which is what you needed to run a 32-bit OS comfortably, was just not feasible for regular consumers in those days. Let’s throw this one in for fun as well: one of the first 386 systems in 1986, reviewed in PC Magazine. At a whopping 16 MHz, with only 1 MB of memory, you’d pay $6,499 with a 40 MB HDD, or $8,799 with a 130 MB HDD. You want 4 MB more RAM? That will cost you another $2,999! So a reasonably configured system came to nearly $12,000, which in 1986 money is more expensive than a small family car! Just to illustrate how ridiculous the notion of 32-bit consumer computing was back in the late 80s. And those are 1986 prices; correct for inflation and it would be more than twice that in today’s money. Computers of more than $20,000? Yes, that is what we are talking about here.

As another funny bit of trivia: because of a bug, some early 80386 CPUs would not work reliably in 32-bit mode, so their package reads “16-bit S/W only”. That wasn’t that big a deal, as most 386s were mainly running MS-DOS and Windows anyway.

The guy then said that he had tried Slackware, which was ‘already 32-bit and multitasking’. So I said: “What do you mean, ‘already’?” By the time the first linux distributions arrived, Windows NT was also on the market, a fully 32-bit, production-ready, multitasking OS. He then argued that Windows NT was not aimed at consumers, and that it could have been if there had been more drivers for consumer hardware. Say what? He even admitted himself that Slackware did not have a lot of drivers for consumer hardware either, and Slackware clearly was not aimed at consumers in the first place.

Besides, drivers for consumer hardware? I would say that the biggest problem was that consumers simply could not AFFORD the hardware, so marketing it at consumers would not have made sense. Drivers were never that much of a problem, in my experience: a lot of the hardware for my 386SX and 486 already came with Windows NT drivers at an early stage. The real problems were that my systems were not powerful enough to run the OS, and that there was no reason to run it anyway, because hardly any of the software I used had a 32-bit version at all, and when it did, it was usually based on a DOS extender, and as such it ran much better directly from DOS. Mostly a chicken-and-egg problem, which was solved when prices of 386 and 486 systems started coming down in the early 90s. An obvious problem for software developers in those days was that going 32-bit would mean locking out all those people who still used 286 and older machines. DOS extenders were not common before 1992 either, so they arrived more or less at the same time as the 32-bit versions of OS/2, Windows and linux/386BSD.

And DOS extenders were not a good permanent solution either, because underneath it was still DOS, so you still had no multitasking and no memory protection. A native 32-bit version of MS-DOS would have been more or less the same, and would not have been a very compelling OS anyway. When you go 32-bit, you should go with a more advanced OS as well, and that is exactly what Microsoft did, with Xenix, OS/2 and Windows NT (and by bringing much of NT to consumers in a more lightweight form, via the Win32s extensions and Windows 9x).

But well, it seems people who weren’t actually around in those days will choose to believe the conspiracy theory of evil Microsoft over common sense and easily verifiable facts.


6 Responses to A little knowledge can be a dangerous thing

  1. Analytic D says:

    Great history lesson. Many people have a hard time recognizing the reality that Microsoft succeed(s/ed) ultimately because it makes/made software that actually did actual things that actual people and organizations actually wanted and actually used because it solved actual problems. Market power and rent-seeking behavior only get you so far.

    • Scali says:

      Very true. Microsoft may not always have offered the most advanced technical solutions and such, but they were very pragmatic, especially about legacy support.
      In this case, going 32-bit, while also having nearly full backward compatibility with DOS and 16-bit Windows, took a lot of extra overhead, and just was not feasible given the hardware of the time.

      But on the other hand, Microsoft did offer a 32-bit Xenix at an early stage, years before linux or 386BSD appeared. And even those two took a long time to become reasonably standard in the world of UNIX. In those days, x86 systems were not taken all that seriously anyway (again because the hardware wasn’t particularly powerful).

      I suppose what you’re saying is something like: The argument that Microsoft is a huge and powerful company only goes back in history so far. At some point they started with nothing, and they had to work themselves up to get where they are today, and maintain that position for as long as they have.
      So they had to have done something right. If it was all so simple, one of the many other software companies would have snapped up the low-hanging 32-bit fruit, rather than going out of business.

      • k1net1cs says:

        Well, my own haphazard guess would be because it’s easier to blame everything on Microsoft.
        You know, being the big bad bully they are.
        Well, they did bully, and probably still do to some extent, but definitely not in the way that guy is trying to present Microsoft; it reeks too much of conspiracy theory and/or GNU/Linux leftist armpits.

        P.S.
        No, I don’t hate GNU/Linux; I still specifically use it for things that I don’t need Windows to run on.
        It’s just the ‘Torvalds Witnesses’ that bugs me.

      • Scali says:

        I suppose this guy was going for anti-corporate leftist in general, seeing as he also lumped in iOS and Android.
        With Android he has somewhat of a point. I’ve done some Android development, and especially Dalvik just has horrible performance compared to conventional JVMs. So there definitely is room to improve performance and as such, make the hardware do things it cannot currently do.
        However, Google does not block native code, so you can circumvent the Dalvik bottleneck by just using C/C++.
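        To give an idea of what that looks like (just a minimal sketch; the package, class, library and function names are made up, and it assumes the usual NDK/JNI setup): you declare a native method on the Java side and implement the hot code in C, so it runs as compiled machine code instead of going through Dalvik:

          /* Java side (hypothetical class com.example.fastmath.NativeMath):
                 static { System.loadLibrary("fastmath"); }
                 public static native int sumSquares(int n);
             C side below, built with the NDK into libfastmath.so: */
          #include <jni.h>

          JNIEXPORT jint JNICALL
          Java_com_example_fastmath_NativeMath_sumSquares(JNIEnv *env, jclass clazz, jint n)
          {
              jint sum = 0;
              for (jint i = 0; i < n; ++i)
                  sum += i * i;   /* plain compiled loop, no Dalvik interpreter involved */
              return sum;
          }

        Of course there is still some overhead in crossing the JNI boundary, so this only pays off for work that is heavy enough to amortize the call.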
        I also don’t think Google deliberately hampered performance on Android. I think they mainly chose Dalvik because it aims for compact code (Google didn’t develop much of Android themselves: the kernel is taken from linux, and the Dalvik VM was an existing project as well).
        Perhaps they did not fully realize the performance implications of Dalvik (a conventional JVM can be 3 times faster or more in many cases). Or perhaps the smartphone/tablet market evolved more quickly than they expected, to the point where compact code is no longer an issue whatsoever, while on the other hand people want to run more and more performance-critical software on their phones.

        But, never attribute to malice that which can be adequately explained by incompetence. Google just… isn’t very competent at developing OSes and development tools (as I already blogged earlier).
        As for iOS… I don’t share his ideas that iOS is particularly inefficient given the hardware it’s running on. It is pretty much a stripped-down OS X environment, and it works entirely with native code. Factoring in that ARM processors are not quite comparable to x86 clock-for-clock, it seems to me that iOS applications run quite efficiently given the target hardware (as do native Android apps for that matter).

      • k1net1cs says:

        Yeah, about that Dalvik: wasn’t it partly the fact that it was an existing project before Android that made it Oracle’s point of contention with Google over the whole Java/Android thing?

  2. Optimus says:

    Computer Conspiracies, how much I loved them back then 🙂

    There was a time when readers of a Greek computer magazine were arguing that software companies slow down their code on purpose, paid off by the hardware companies. I was reading those theories like crazy, even though for me that just evolved into a fascination with optimized software that does more on a less powerful computer. But later I realized that many of these people were just playing games and never took the chance to actually try programming and see how hard it is to make things go fast. Some of them didn’t even realize the difference between realtime and prerendered (“Why does Myst run well on a 386 but Quake requires a Pentium to run well?”, one reader asked). Then I realized the reason is that PCs evolve and companies have deadlines, so they won’t push too hard to aim at the older systems, unless it is a game console. That’s why we sometimes see game consoles do really impressive stuff with hardware less powerful than our PCs (besides the custom chips and unified hardware architectures).

    And then I stopped reading that magazine (though I’ll still have a look at old issues for nostalgia and laughs 🙂) and discovered the demoscene. Not many of the readers cared about the demoscene, or about learning to code so they could actually write their own high-performance software.
