A great upgrade for the PCjr: the jr-IDE

Adding a hard drive to a PC was a reasonably standard upgrade even in the 80s. And in today’s world of retro-computing, we have the XT-IDE card, which adds proper IDE support to old PCs, and also allows you to use more modern storage, such as Compact Flash cards and Disk-on-Module (DOM) devices.

Before getting into today’s actual topic, the jr-IDE, I first want to clear up some confusion regarding IDE. There are three basic things you need to know about IDE:

  1. IDE stands for Integrated Drive Electronics. This means that, unlike early hard disk systems, the actual controller logic is integrated on the drive itself. The IDE ‘controller’ on the PC side is little more than an interface (very similar to Roland’s ISA card and MPU-401 MIDI solution).
  2. IDE was first introduced in 1986, for AT-class machines, and is more or less a direct connection of the ISA bus to the integrated controller on the drive. As such, alternative names are AT-BUS or ATA (AT Attachment). With the introduction of Serial ATA, classic IDE became known as Parallel ATA.
  3. IDE is a 16-bit protocol, which made it incompatible with PC/XT-class machines, as they only had an 8-bit ISA bus.

In the early days of IDE, PC/XT-class machines were still relevant, and as such, there was a short-lived XT-compatible variant of IDE, which goes by the name XTA or (confusingly) XT-IDE. It was flawed because it used the same approach as regular IDE: a direct bridge between the ISA bus and the drive controller. As the ISA bus was only 8-bit in this case, the integrated controller on the drive had to be specifically compatible with this 8-bit IDE variant. Only a handful of such drives were ever made.

The new XT-IDE

Back in the day, there were a handful of 8-bit IDE controllers available that DID work with 16-bit drives (such as the ADP50). After all, 2×8 bit is also 16-bit. So if you implement a small buffer on the controller, which buffers the 8-bit ISA bus reads and writes and converts them to a single 16-bit IDE bus read or write, you can make any 16-bit IDE drive work on an 8-bit XT bus. From there, all you need is a custom BIOS that implements the required int 13h disk routines that properly interface with this simple buffered interface.

These controllers were rare, however, so they are difficult to get hold of for retrocomputing. This is the problem that the XT-IDE aimed to solve: some retrocomputing enthusiasts developed their own modern IDE controller specifically for 8-bit PC clones. It is a cheap and simple design, which is open source, and you can buy one in kit form or pre-assembled from a variety of places on the web. The XT-IDE Universal BIOS is also freely available. It even supports DMA transfers.

Yeah yeah, get to the jr-IDE already

This rather elaborate introduction was required to get a good idea of what an XT-IDE is, and why you would want one in your vintage 8-bit DOS machine. And well, the PCjr is an 8-bit DOS machine. Since it has no ISA slots, you cannot use the XT-IDE as-is. However, it has a ‘sidecar’ expansion bus on the side of the machine, which is similar to an ISA slot. There are adapters available that allow you to use an ISA card in the sidecar slot:

There are some limitations (not all IRQs are available on PCjr, there is no DMA, and BIOS ROMs may or may not work depending on how compatible they are with the PCjr environment), but you can at least get certain ISA cards working on a PCjr with this contraption, including some hard disk controllers.

And that’s interesting: since the PCjr failed in the marketplace, hard disk interfaces are extremely rare for the PCjr, and to my knowledge, no IDE interface existed, only SCSI or MFM, which are difficult to interface with more modern drives. So, some PCjr fanatics tried to get the XT-IDE working on the PCjr. Did it work? Not initially, but with some massaging of the BIOS, they got it running.

The obvious next step would be to not use an ISA card with modified BIOS in an ISA-adapter for the PCjr, but to design a new PCB that would plug into the PCjr’s sidecar slot directly, which would no longer make it an XT-IDE, but rather a jr-IDE. Somewhere around this point, Alan Hightower of retrotronics.org joined the project. He would design the final PCB that became the jr-IDE as we know it today.

This project quickly got out of hand though (in a good way), as people were thinking: “If we’re going to make a new sidecar for PCjr anyway, why not add all sorts of other useful features as well?” This resulted in the following:

  1. The IDE interface (including phantom power on pin 20, to power Disk-On-Module devices directly)
  2. 512k of flash ROM BIOS that is fully PCjr-compatible, which can contain the required firmware for the IDE controller, as well as other things you would like to add
  3. A fully PCjr-compatible BIOS image that can boot directly from hard drives
  4. 1 MB of static RAM
  5. Battery-powered real-time clock
  6. Power On Self-Test (POST) display

See the official retrotronics jr-IDE page for more background information on the design and features.

I think items 4, 5 and 6 may require some explanation…

128k ought to be enough for anyone

In the category of “Things Bill Gates never said”, we have the memory limit of the PCjr. This is at 128k, rather than the common 640k of the regular PC. Or well, technically it is at 112k, give or take. I already made a short mention of it in an earlier article:

This limitation is a result of IBM sharing the system memory between the CPU and the graphics adapter. IBM designed it so that the memory is divided into ‘pages’ of 16k each. By default, one 16k page of system memory is mirrored to segment B800h, to be compatible with the CGA standard, which has its 16k of video memory there. That leaves only 112k of the 128k for DOS, which is not a whole lot. With a maximum of 128k of internal memory, you can select from a total of 8 pages. Although it is possible to add extra memory to the machine via a sidecar, it is not possible to address that extra memory as video pages, so the video memory always has to be in the first 128k of memory. (This is a limitation that was overcome in the Tandy 1000 models, which put memory expansions before the onboard memory, effectively moving the onboard memory, including the video RAM, up. So the video RAM was always in the last 128k of total system memory, instead of the first 128k as on the PCjr. Some models even allowed you to add extra memory beyond 640k, which was used only for video.)

So, what does it matter where your video memory is exactly? Well, in theory it doesn’t, if you are running PCjr-specific software, for example from a cartridge, or self-booting software, which is aware of the video memory being somewhere in the first 128k. But if you use DOS, there is a problem: DOS does not understand the PCjr memory layout.

DOS was designed with the assumption that system memory is a single contiguous area of memory, starting from address 0, and going up to a limit that is reported by the system BIOS. This assumption does not hold for the PCjr. So what can you do? Well, the workaround is as beautiful as it is kludgy: you write a dummy device driver, which allocates all the memory up to 128k. This driver is loaded at boot time, so DOS will consider all memory below 128k in use, and once DOS is booted, it will load any application into the expansion memory beyond 128k. This means that applications won’t interfere with the video memory.

The downside is of course that you are wasting the memory below 128k that is used neither by DOS nor by video. However, since the device driver ‘owns’ that memory, it can repurpose it for other uses. A common use is to make a small RAM disk out of it, so you can store small files in-memory.

This article also explains it nicely with some diagrams. The most popular driver that allows you to make use of PCjr memory expansions under DOS is JRCONFIG.SYS, originally written by Larry Newcomb.

Fun fact: because static RAM is used, no memory refresh is required. As you may know, the original IBM PC uses a combination of the PIT and the DMA controller to periodically (once every 72 IO cycles) read a single byte, which effectively refreshes the system memory. The PCjr has no DMA controller. It does not need to refresh its memory explicitly, as it is shared with the video controller, which periodically reads from memory anyway (once every 4 IO cycles), to display the image.

So where a stock 64k or 128k PCjr is slower than a PC because the video controller inserts more wait states on the bus than the DMA refresh does… with a static RAM expansion it is actually the opposite! Since the static RAM is not used as video memory, and does not require a refresh either, there are NO wait states. This means that the PCjr actually runs slightly FASTER than a PC now. Also, where 112k (or even 96k if you want to use the fancy PCjr modes that require 32k of VRAM) is very cramped for any kind of DOS application, you can now have over 600k available for DOS, similar to a regular 640k PC configuration.

But wait a second, there’s 1 MB of static RAM on the card? Yes; well, similar cards also exist for the IBM PC, and you cannot necessarily use all of it. However, on the PC you CAN use upper memory blocks (UMBs) above the 640k range. The memory directly above 640k is reserved for EGA or VGA adapters (segment A000h), so DOS itself will not normally go beyond 640k, and UMBs are a hack to load small device drivers into parts of the range above 640k that are not already mapped to a hardware device.

On a PCjr however, you have a fixed onboard video controller. And it is basically an extended CGA controller, which maps its memory at B800h. So there is no reason why you can’t just push memory up to that limit, which is 736k. This would make up for the ‘lost’ memory below 128k where the video buffers are mapped. JRCONFIG.SYS will do this for you, giving you well over 600k of free conventional memory in DOS. So, with the jr-IDE you not only get FASTER memory than on a regular IBM PC, you get MORE memory as well.

What time is it?

As you may know, the IBM PC/AT introduced a standardized battery-powered clock as part of the MC146818 CMOS chip. But what happened before that? In the early days, computers had no absolute time source at all. DOS used timestamps for its files, so a proper real-time date and time were required. DOS obtained these by prompting the user to enter the correct date and time at every boot. It would then use the PIT at 18.2 Hz to periodically update this internal time (this also meant that if you reprogrammed the PIT for your own software, and didn’t call the original handler periodically, the time would not be updated, so running certain software would mess up the timekeeping of DOS).

Technically this never changed, even once the AT was introduced. The only difference is: when DOS detected an AT, it knew there would be an MC146818 or compatible chip with a real-time clock (RTC) available, and it could just read the correct date and time at startup, rather than having the user enter them manually. The actual timekeeping during the DOS session was still done via the PIT at 18.2 Hz.

But for PC/XT-class machines, there were also third-party addons for a battery-powered RTC. And many PC/XT-class clones would integrate a battery-powered RTC on the motherboard. They basically worked the same, except you had to load a non-standard driver from CONFIG.SYS or run a command line utility from AUTOEXEC.BAT to get the date and time from the RTC into DOS at every boot.

And this is what the jr-IDE does as well. It adds a Dallas DS12887 chip, which is a clone of the MC146818, so it is actually register-compatible with an AT. The only difference is the timer interrupt: the AT connects it to the second PIC, but the PCjr has only one PIC. The sidecar interface supplies only three interrupt lines (IRQ1, IRQ2 and IRQ7), so you can configure the jr-IDE to use one of these, or disable the interrupt if you want to keep these scarce resources available for other devices. The jr-IDE also supplies a simple utility to load the date/time at bootup. There we are: you never have to enter the date and time manually at every boot again.

POST cards from the edge

Power-On Self Test. You are probably familiar with that. Every PC-like machine will first do a self-test when it is powered on, before it will try booting from a disk. The most visible part of that is usually the memory self-test, which shows a counter of how much memory has been detected/tested. This POST is part of the BIOS.

But what you might not know, unless you have been hacking around with BIOS code before, is that as early as the IBM 5160 PC/XT, many BIOSes had a simple debugging mechanism built in. The BIOS would write a status byte to a designated port at different stages of its self-test, as a primitive progress indicator.

If you have problems with booting such a machine, you can install a so-called POST-card into the machine, which is a simple ISA card with a two-digit display, which allows you to see the codes as the BIOS is performing its tests.

The jr-IDE also integrates such a POST display on its board, and makes it available through port 10h, which is what the PCjr BIOS uses. This can be a very useful debugging tool when you want to play around with your own low-level/BIOS code.

But does it perform?

There is more than one way to skin a cat. And the same goes for implementing a hard disk interface. On an x86 CPU, you have two ways of interfacing with external devices. You can either map the registers (or onboard storage) of an external device into the memory address space of the CPU, as most systems do. Or, a somewhat quirky characteristic of the x86: you can map the registers into the IO address space of the CPU (‘port-mapped IO‘).

The IO address space of the x86 seems to be a leftover from its earlier 808x predecessors: when you have limited address space, having a separate IO space saves you from having to sacrifice precious memory address space for device IO. This made sense for the 8080 and 8085, as they only had a 16-bit address space. For the 8086 however, it made considerably less sense, as it had a 20-bit address space. It’s also somewhat archaic and cumbersome to use, as you only have the special in and out instructions, which take either an 8-bit immediate port number or the DX register as the IO address, and have the AL or AX register hardwired as the operand. This makes them far less flexible than memory addressing, which has various addressing modes, and memory operands can be used by many instructions directly. The in/out instructions are also slightly slower than a regular memory access. Nevertheless, the CPU supported port-mapped IO, and most of the PC was designed around port-mapped devices, rather than memory-mapped ones.

The original XT-IDE was also designed with port-mapped registers. The IDE standard uses 8 registers, which were mapped onto an IO range 1:1. The problem here is that the IDE interface uses a 16-bit register to transfer data, while the IO registers are 8-bit. So in order to implement the ‘missing’ 8 bits, a latch was added on the card, and placed at the end of the IO registers. This meant that the two 8-bit halves of the data register were not in consecutive IO ports, which made it impossible to do a more efficient 16-bit in or out operation (which is more efficient even on an 8-bit bus, as there is less overhead for fetching and decoding instructions).

User Chuck(G) then suggested a small modification to the PCB, which swaps some address lines around, so you do get the two halves of the data register side-by-side, and in the correct order. Combine this with an updated BIOS that supports the new addressing, and you get a 2x to 3x speedup.

From this point onward, the paths of the XT-IDE and jr-IDE split. On a regular ISA-based machine, there is a DMA controller available, and DMA transfer is the fastest option, at least, on low-end devices (on faster machines, the DMA controller will be a bottleneck, as it always runs at ~1 MHz. This is enough to saturate the memory of the original PC, at about 1 MB/s in theory, but no more than that). This is because DMA transfers do not require any CPU intervention, so all cycles are spent transferring data, rather than fetching and decoding instructions. James Pearce of lo-tech presented the XT-CFv3, a modified PCB design and matching BIOS, which had DMA capability. It can reach speeds of over 500 KB/s, which is pushing the limits of the machine.

Since the PCjr has no DMA controller, and there is no way to even add one via the sidecar interface, as the required lines are missing, a different path was followed: memory-mapping. Instead of making the data register appear only once in the IO space, a region of 512 bytes is mapped into the memory address space. This is the size of a single sector. This means that you can read or write consecutive bytes on the device by just using REP MOVSW, making things very elegant (as memory-mapped IO does). This is almost as good as DMA, as the instruction needs to be fetched and decoded only once; the looping is done inside the CPU’s microcode. The result is that even on the relatively slow PCjr, you can get over 330 KB/s in transfers. That is faster than any of the port-IO-based XT-IDE implementations. So very impressive indeed. It is a modification that could be backported to the XT-IDE design, but I have not heard of anyone doing this.


So I’m very happy with the jr-IDE I got. The extra memory and the HDD support make it much easier to develop on the PCjr. And I mean that literally: I will use it for development only. I still want to develop software that runs on a stock 128k (or perhaps sometimes even 64k) PCjr. But during development, the extra memory and storage make the process a lot nicer. All in all, it’s a very impressive and useful product, and great work from all the people involved.

By the way, Alan Hightower no longer builds the jr-IDE itself. The design is available from his website, so you can either build one yourself, or get one from a third party, such as TexElec.

Some other resources that may be helpful are Mike Brutman’s PCjr pages. He has a special page on the jr-IDE. On his downloads page, you can find tools such as the aforementioned JRCONFIG.SYS to make use of memory expansions under DOS.

And on the XT-IDE itself, you can find more info on the minuszerodegrees.net XT-IDE page, which contains a large collection of information on the development of the XT-IDE cards, its various revisions, the BIOSes, quirks, bugs and more.


When is a PC not a PC? The PC-98

I’ve covered PC compatibility in the past, and tried to explain how just having an x86 CPU and running DOS does not necessarily make your machine compatible with an IBM PC. At the time, this was mainly about the IBM PCjr (and its clone, the Tandy 1000), which is still relatively close to a regular PC. As such you could create software that would run on both IBM PCs and compatibles, and on the PCjr/Tandy, with custom code paths for specific functionality, such as sound and graphics.

But pretty much all other DOS/x86-based machines failed, as their hardware was too different, and their market share was too small for developers to bother adding support. In fact, the main reason that the Tandy 1000 existed at all is that the earlier Tandy 2000 had fallen into the trap of not being compatible enough. The Tandy 1000 may actually not be a very good example of this, as Tandy tried to make it nearly 100% compatible, fixing the main reason why the IBM PCjr also failed. So later Tandy 1000 models were more or less a ‘best of both worlds’: nearly 100% compatible with the IBM PC, but also offering the enhanced graphics and sound capabilities of the PCjr.

Meanwhile, in Japan…

In Japan however, things took a different turn. The Japanese do not just use the Latin alphabet that is used on all Western machines, including the IBM PC. The Japanese language uses more complex glyphs, in multiple writing systems: kanji, katakana and hiragana. To display these in a comfortably readable form, you need a high-resolution display. Also, where Latin letters encode sounds, a kanji glyph encodes a word or part of a word. This means that your glyph alphabet contains over 50,000 characters, far more than the maximum of 256 characters in your usual 8-bit Western character set.

So the Japanese market had very specific requirements that PCs could not fulfill in the early DOS days. You couldn’t just replace the character ROM on your PC and make it display Japanese text. (IBM did later develop the 5550 and the JX, a derivative of the PCjr, specifically for the Japanese market. Later still, they developed the DOS/V variant, which added support for Japanese text to their PS/2 line, using standard VGA hardware, which by then had caught up in terms of resolution.)

Instead, Japanese companies jumped into the niche of developing business machines for the home market. Most notably NEC. In 1981 they introduced the PC-8800 series, an 8-bit home computer based on a Z80 CPU and BASIC. In 1982, the PC-9800 series followed, a more high-end 16-bit business-oriented personal computer based on an 8086 CPU and MS-DOS. These families of machines became known as PC-88 and PC-98 respectively (Note that the ‘PC’ name here is not a reference to IBM, as NEC had already released the PC-8000 series in 1979).

In this article, I will be looking at the PC-98 specifically. So let’s start by doing that literally: here are some pictures to give an impression of what these machines looked like. They look very much like IBM PC clones, don’t they?

For more machines, specs and background info, I suggest this NEC Retro site.

How compatible is it?

It has an 8086 CPU, an NEC 765 floppy controller, an 8237 DMA controller, an 8253 programmable interval timer, and two 8259A programmable interrupt controllers. Sounds just like a PC, doesn’t it (okay, two PICs sounds more like an AT actually, so NEC was ahead of its time here)?

Well, it would be, if it used the same IO addresses for these devices. But it doesn’t. What makes it especially weird is that since it has always been a system with a 16-bit bus (using an 8086 as opposed to the 8088 in early PCs), NEC chose to map the IO registers of 8-bit devices either on even addresses only, or on odd addresses only (so the 16-bit bus is seen as two 8-bit buses). For example, where the first 8259A on a PC is mapped to ports 0x20 and 0x21, the PC-98 places it at 0x00 and 0x02, leaving 0x01 as a ‘gap’ in between. The 8237 DMA controller is actually mapped to addresses 0x01, 0x03 and so on.

Another major difference is that the base frequency of the PIT is not 1.19 MHz like on the PC, but depending on the model, it can be either 1.99 MHz or 2.46 MHz.

And like the PCjr and the Tandy 1000EX/HX models, it has an expansion bus, but it is not the ISA bus. The PC-98 uses the C-bus. So you cannot use standard expansion cards for IBM PCs in this machine.

Clearly the video system isn’t compatible with any PC standard either. It does not even use int 10h as the video BIOS. Speaking of BIOS, the PC-98 BIOS is not compatible with the IBM PC BIOS either. But as said, the video system was far superior to the IBM PC at the time. The first version in 1982 already supported 640×400 with 8 colours, based on NEC’s own uPD7220 video controller. In 1985 they extended this with a palette of 4096 colours to choose from, and an optional 16 colour mode if an extra RAM board was installed. In 1986 the extra RAM became standard on new models, and they also added a hardware blitter for block transfers, raster operations and bit shifting.

What’s also interesting is that they chose to actually use TWO uPD7220 chips in a single machine. One of them is used for text mode, the other for bitmapped graphics mode. They each have their own video memory, and are used in parallel. So you can actually overlay text and graphics on a single screen.

But, on the other hand…

There are two things that we can use to our advantage:

  1. It runs (an NEC PC-98 OEM version of) MS-DOS
  2. A lot of the hardware is the same as on the PC

So this means that for basic functionality such as file and text I/O, memory management and such, we don’t need the BIOS. We can use MS-DOS for that, which abstracts the machine-specific BIOS stuff away. Also, if we write code that uses the 8237, 8253, 8259A or other similar hardware, in most cases we only need to change the I/O-addresses they use, and adjust for the different PIT frequency (and other minor details, such as the different cascaded configuration of the two PICs and different IRQs for devices), and we can make it work on the PC-98.

So just like with Tandy and PCjr, we can write DOS programs and make them work on PC-98. We can even write a single program that can run on both types of systems, even though it is a bit more complicated than on Tandy/PCjr (on those you mainly had to avoid using DMA, and you should be aware that the keyboard is different, so you should only access it via BIOS, or have separate routines for the different machines).

Challenge accepted

I decided to give this a try. I have made my own little ‘SDK’ of headers and library functions for ASM and C over the years, which includes quite a few constants for addressing I/O ports or memory areas of all sorts of PC and Tandy/PCjr hardware (I modeled it after the Amiga NDK). I figured I would try a PC-98 emulator and port my VGM player over to the PC-98, and update the SDK with PC-98 support in the process.

A convenient emulator is DOSBox-X. It is a fork of DOSBox, which adds a PC-98 machine, among other features, and like DOSBox, the BIOS and DOS emulation is built-in, and you can just mount host directories as drives, so you don’t have to juggle all sorts of ROMs and disk images to get the system running. If you want a more serious emulator though, Neko Project 21/W is one of the more compatible/accurate ones.

And indeed, you can just use OpenWatcom C to write a DOS application, and it will work, as the basic runtime only requires DOS interrupts, no BIOS or direct hardware access. All the BIOS and direct hardware access is done via my ‘SDK’ anyway, so as long as I write the correct code for PC-98 BIOS and addresses, I can use any hardware from C (or assembly of course).

What’s more, it turns out to be relatively simple to detect whether you are running on an IBM PC-compatible or a PC-98 compatible machine. A commonly used trick is to call int 10h with AH=0Fh. On an IBM PC-compatible, this returns information about the current video mode, with AH containing the number of columns. On the PC-98, this function is not implemented, so after the call, the value of AH will be unaffected. Since there is no video mode with 15 columns, you can assume that if AH is still 0Fh after the call, you are running on a PC-98 machine.

Anyway, before long I had a basic version of my VGM player working on both the IBM PC and the PC-98. It’s a pretty fun and quirky platform so far. So I might be looking into the graphics chip in the near future.

If you also like to play around with DOS and x86, and want to give the PC-98 a try, here are some good resources (you might need to translate, as most documentation can only be found in Japanese):




Why I left the 8088 MPH team

In short, it wasn’t my choice. Then who decided? Jim Leonard, aka Trixter. So is Jim able to singlehandedly decide who is and isn’t part of the ‘team’? Yes, apparently. So, it isn’t really a ‘team’ then, is it? No, it is not.

What happened is a long and complicated story, some of which you may have been able to read between the lines already in these two earlier blogposts. I think it is only fair that I make the full conversation between Jim and myself available, so it’s not just a “he said, she said”. So you can read back what happened exactly. And I will give my view on this:

In short, given the aftermath of BLM protests-turned-riots after George Floyd’s death, and the fact that there were even BLM/Floyd-inspired protests in my country, I was curious about Jim’s view on this, given that these riots had taken place in his country, and some even close to where he lives.

As far as I am concerned, this was a conversation between two people who have known each other for many years, and could be considered friends. And it was a conversation where I showed interest in current affairs and cultural phenomena.

Also, given that we are more or less the same age, I would have assumed that Jim would have a similar liberal, humanist outlook on these things, given that this was the dominant view in Western culture for as long as we can remember, and this woke ‘successor ideology’ has only come into vogue in recent years. Not to mention that just common sense, logic and rationality would lead you in that direction anyway.

But apparently I was wrong. For some reason, Jim did not apply any kind of rational thinking or common sense, but seemed to have been fully immersed in the dogmas of the woke cult. Which resulted in him just giving knee-jerk reactions. To the point where he escalated by asking me if I were a “White Supremacist”. Because apparently that’s what you call people who don’t want to racialize everything, and who don’t want to judge people based on the colour of their skin, but on the content of their character?

Shortly after that, I let the conversation rest for about a month, and I wrote the first article on wokeness. I don’t know if Jim read that, but when I picked up the conversation again, a month later, he only insisted even more. Instead of just asking, he started literally calling me a “White Nationalist”, and basically became completely unhinged. For no apparent reason as far as I am concerned, as I have said nothing even remotely racist, unlike him.

Not wanting to get too deep into this whole woke/CRT/intersectionality thing again, but there are a few things I’d like to mention.

First, there is this article that talks about Learned Helplessness, which also explains why this woke view on ‘White Privilege’, ‘White Supremacy’ and such, is actually a form of racism in itself.

Secondly, the article also explains how this creates a mindset where black people think they have less agency than they actually do. The included diagrams make it very obvious that the *perception* of racism has changed a lot, while the actual racism has not. Because of the framing of CRT/BLM, people blame more things on racism/discrimination, rather than on their own actions.

Or, in the words of the ever insightful and eloquent Helen Pluckrose:

Speaking of Helen Pluckrose, for more background regarding woke/postmodernist ideology, as well as its antithesis, premodernist ideology, I can suggest this excellent article from 2017, which is still relevant today:

This also covers another pet peeve of mine: how the enemies of Modernity make everything political, and put every topic and opinion either under the left-wing or the right-wing label. If it’s not the one, it must be the other.

Which boils down to this: CRT/BLM is supported by left-wing. So if I am critical of CRT/BLM, then I must be right-wing, and the fact that I even want to discuss this topic means that the discussion is ‘political’.

None of which is true. But that is what I ran into. First, Jim clearly gave me an ‘ultimatum’: as a proper wokie, he made ‘political’ topics ‘off-limits’, where ‘political’ can be any subject we don’t agree on. Free speech, free opinions… we can’t have any of that! And apparently, it was fine that he called me an asshole, White Supremacist and whatnot… but ohnoes, I responded to his many insults and deliberate misrepresenting of my statements with an f-bomb. Totally inappropriate! I mean, why would Jim hold himself to the same rules as me? He can say and do anything, but I must bow to His Greatness, and only speak when spoken to.

These were terms that I was not willing to accept. So I mentioned on the team mailing list that Jim called me a White Supremacist, and that I was not willing to work under those circumstances. To which Jim responded by removing me from the mailing list, Github, Trello and whatever other team resources I had access to. He even blocked me on Twitter. He tried to cover up this cowardly ego-act by claiming he was afraid that I would delete the team resources if I had access. Yea right… Firstly, I didn’t have the rights on most resources to do that, and secondly, I would never do anything remotely that childish. I guess it says a lot about him that he thinks like that.

Okay, so now the ball was in the court of the rest of the team. Did I say team? Whoops, nope. There is no team. Only one person reached out to me, and we could discuss what happened in more detail. I haven’t heard from the others at all. Apparently they were fine with Jim calling me a White Supremacist, and removing me from the team single-handedly (does that mean they too think this? Based simply on Jim saying that?). They didn’t care that I wasn’t part of the team any longer, or that Jim had made it impossible for me to return to the team, as it stands. The disconnect between him and me would have to be resolved, and it is pretty obvious, at least to me, who is in the wrong here.

I mean, I can’t even begin to imagine how you can think that someone whom you’ve known for many years, who’s never made political references, let alone racist references, is somehow a White Supremacist, and is somehow trying to be a political activist, or whatever it is he thinks. What’s worse, I can’t even begin to understand how you can think that it’s just fine to call someone a White Supremacist, and then think you can still make a demo together. You know that when you do that, you cross a line, one that is not easy to get back from. You chose to burn that bridge.

And I don’t understand either how the rest of the people from 8088 MPH could just be silent when someone boots one of the members out of the team, under some vague accusation of “White Supremacist”. I mean, unless you’ve been living under a rock, it should be pretty obvious that accusations of “White Supremacy” are being thrown around gratuitously by radicalized activists, so when you hear that accusation being thrown around, the first thing you do is smell a rat, not think the accusation is an accurate representation of someone’s stance.

But well, with the exception of one person, apparently nobody thought that. After the release of Area 5150, I asked what exactly the other team members thought. But only two people responded, and both of them gave me the same bullshit ‘political’ nonsense. One of them even went into an unhinged rant, making all sorts of assumptions about me that were obviously wrong. When I pointed out that in all my years of demoscene and writing music, I have NEVER included any kind of political statement whatsoever, and even on this blog, I have only written two blogs regarding ‘wokeness’, which obviously were related to what happened with Jim (but were written AFTER our initial clash, so could not have *caused* the disagreement with Jim), there was no response.

I guess I am just deeply disappointed. Both in people in general, and in these people I considered to be friends in particular. You’d think there’d be some kind of bond, some mutual trust and respect after having made 8088 MPH together, visiting Revision, and then staying in touch for years after that, working on a successor. But nope, apparently none of that means anything to them.

But to me it means I won’t join another team lightly. It takes months, if not years, of intense work and collaboration to make a demo of the caliber of 8088 MPH or Area 5150. I thought it should be obvious that this requires a good team, with mutual trust and respect. But apparently others think you can just call people a White Supremacist and think they’ll just continue making a demo with you. Well, not me anyway.

In closing, I would like to mention the book The Parasitic Mind, by Gad Saad. I read it recently, and I recognized various aspects. Like Gad Saad, I value freedom and truth a lot. And I like to approach things with a healthy combination of rationality and humour. Jim’s behaviour in the conversation can be described as ‘enemy of reason’. Some of Gad Saad’s descriptions are spot-on for this case.

This is also about the two modes of thought, as formulated by Daniel Kahneman. “System 1” is where you act primarily on emotions and preconceptions, while “System 2” is the more elaborate, logical reasoning.

Gad Saad argues in his book that you can use nomological networks of cumulative evidence to show that something is likely to be true (or false). And if you read back the conversation, you can see that I try to bring in various sources of information, and try to approach topics from various sides. Sadly, Jim does not bother to even look at them. He outright rejects sources, simply based on the messenger (or more accurately: the radicalized activist propaganda against the messenger that he has been exposed to). So there’s a problem with that approach: you can bring a horse to water, but you can’t make it drink. Well, at least I tried…

It just surprised me that even demosceners, who should be more System 2-oriented, can be so socially awkward that they go for knee-jerk System 1 thinking as soon as the topic is more social/cultural than technical. They don’t bother to dive any deeper into the subject matter, even when it’s being spoonfed to them, and aren’t able to keep an open mind and take a more logical, rational approach (not even entertaining the rather obvious possibility that organizations such as BLM and Antifa might not be exactly what their name implies). Instead, they actually choose to behave like a total asshole toward you, without even having the self-reflection to see what they’re doing. To me, Jim shows all the signs of having been radicalized.

For everyone who is as daft and insensitive as Jim: https://thepushback.us/2022/12/09/the-woke-mind-virus/

Anti-woke isn’t anti-black, anti-immigrant, anti-LGBT, or anti-equality. It’s neither MAGA nor fascist, and it’s absolutely not discriminatory or extremist. Anti-woke rejects the artificially constructed forced morality binary. Elevating arbitrarily chosen “grievances” and protecting select groups above others can only end in disaster. Not all grievance rises to a level that requires or demands redress. Don’t allow yourself to be manipulated.

This sounds like Jim:

Ironically, in practice, wokeness manifests in bullying, threats, identitarian essentialism, and devaluation of others. Many times it’s just narcissism masquerading as empathy.

Update: It has come to my attention that Jim has posted a response: https://trixter.oldskool.org/2022/12/30/the-semantics-of-discourse/

Sadly, it does not address any of the core issues, and he also lies about the fact that he singlehandedly removed me from the mailing list and other resources. Also, it doesn’t make sense that he claims “we could continue making demos together as long as we never discuss politics again” (which as I already said, is an unacceptable ultimatum), while at the same time he has blocked me from Twitter and other resources. The one who would want to continue is the one who doesn’t block the other party, and that would be me. As I said, just not under circumstances that are unacceptable, so those had to be worked out first. With the whole team obviously (as it was clear at this point that Jim had completely shut down the conversation with me, and there was no way for me to get any kind of point across, and trigger any kind of introspection. He had dug himself in too deep for that). There was no reason to block me if I had just chosen to no longer contribute. But blocking me does guarantee that I CANNOT contribute, and cannot even get in contact. And that is what Jim did. For those who still need subtitles: look up “passive-aggressive behaviour”.

Bonus: there is a rather long comment that provides quite a bit of scientific/historic information, and you can find Jim’s response once again denying science and history, as he’s still fully sucked into the cult. Jim’s response also seems to indicate that he sees no difference between my stance and the commenter’s, while there clearly is a fundamental difference (as anyone who has read my earlier articles on wokeness should be able to pick out). Spoiler:

To me, the only race that exists is the ‘human race’.

Update: Jim continues lying. I was removed from the mailing list almost immediately after my last message. That is BEFORE anyone could even respond. Which means Jim had not had feedback from anyone. Jim was also the sole administrator of the mailing list, so he was the only person who could remove me from it. Ergo, he singlehandedly removed me from it, without consulting the rest of the team (if you read between the lines, he basically admits this, since he doesn’t mention any feedback from the team, between points 4 and 5. He just pulls up a smokescreen to hide that, by claiming I’m wrong, which I’m not. He’s just being manipulative, as we’ve already seen in the private exchange as well).

Aside from the obvious logic that I had no reason to leave the team other than Jim calling me a white supremacist repeatedly, combined with all the other underhanded bully tactics you can find in our mail exchange. It should be obvious that I would have wanted an apology and some kind of reconciliation, but other than that, there was no reason not to continue on the demo. But Jim clearly didn’t want me on the team anymore, so that never happened (Jim isn’t man enough to just outright say “I didn’t want you on the team because I think you’re a racist and white supremacist, and that’s why I took action”. He wants to disguise his actions and motives, but he’s the one who decided to burn that bridge. Now he can’t own up to it).

Even now, if you read his message, he doubles down on me actually being a racist/white supremacist. He just regrets saying it to me. That’s not an apology, is it? The problem is that you think I’m a racist. Not whether or not you say it out loud. Who wants to be friends with a racist? And who wants to be friends with someone who thinks they’re a racist? So what we would need, is some reconciliation where you understand that my viewpoints are not racist (nor political for that matter). Jim is just a terrible person. And he tries to cover up for how terrible he is. He even removed the comment from user ‘Catweazle666’, to hide his denial of science and history. Just keep digging that hole deeper, Jim!

He removed it under the guise of ‘political’. Which it wasn’t. It was a combination of some historical facts, and Catweazle666’s personal view on these. There were no politics involved. History isn’t political (well, it is when you’re woke, because you want to rewrite history to suit your political agenda). So there we are, Jim has given us the perfect proof that he indeed will censor anything he doesn’t agree with, under the guise of it being ‘political’, exactly as I said.

The part where he claims he was “trying to help me” because I would have been “deeply troubled” is hilarious (not to mention completely arrogant and misplaced). Anyone who reads the conversation, can see I was not asking for help, nor was he giving it (but perhaps he means that he thinks that everyone who doesn’t share his opinions, is deeply troubled and needs help). Also, it is clear that I wasn’t deeply troubled. I was more surprised that groups of people in various parts of the country were tearing down entire city blocks in riots, based on ideology that at least as far as I had looked into it, was basically a conspiracy theory (similar to the classic antisemitic conspiracy theory where a small group of powerful Jews would control the world, except the Jews were replaced with ‘White Supremacists’ in this version. A conspiracy theory that Jim has clearly bought into, given that the only ‘proof’ he goes on for me being a ‘White Supremacist’, is that I do not condemn ‘The System’ enough to his liking. As such, I must be ‘complicit’ in this ‘system’, which makes me a ‘White Supremacist’, despite the fact that I have never uttered anything remotely racist or white supremacist myself. That is how radicalized he is). Given Jim’s responses, and his inability to have a reasonable, factual conversation, but instead shut down and start insulting his otherwise calm and rational discussion partner, he is the one who is deeply troubled. Are his social skills that bad that he can’t even read a conversation properly? Or is he again lying and manipulating to make himself look good in the shitshow that he himself created, at my expense?

His claims about deleting resources are also quite sad. The fact that *someone* may have done this in the past still doesn’t mean that *I* would ever do that (aside from the fact that as I already said, I wouldn’t have the rights to do that in the first place). I’m a different person. Apparently Jim is a very poor judge of character. And apparently, Jim thinks very lowly of me. Again, where’s the mutual trust and respect, if I were to do a demo with him? The fundamental difference here is that Jim judges me on things that he *thinks* I would say or do (but never actually said nor did, nor even planned to say or do, as he is very wrong in his judgement), whereas I merely judge Jim on things he *actually* said and did.

His tribal groupthink is also obvious in how he thinks he is a spokesperson for all Americans, when he describes what he thinks. Whereas in the conversation it is clear I only asked him what *he* thought, and would in no way generalize that to ALL Americans. We’re all individuals, all capable of making our own decisions, forming our own thoughts etc.


A new MPU-401 solution: The Music Quest clone

This is a draft that I have kept around since March 16, 2019, so it is about time I finally finish and publish it.

This story started somewhere in 2015, when a user named Keropi posted a thread on the Vogons forum, about cloning a Music Quest card. What is a Music Quest card, you may ask? It is a clone of the Roland MPU-401 MIDI interface. Keropi had collected a lot of information on it in a previous thread. But I suppose that begs the question: what exactly is a Roland MPU-401?

I would gather that most people know MPU-401 as a standard for MIDI on the PC. Usually they are familiar with its UART mode, aka ‘dumb’ mode, since that is the simple interface that most clones, including many Sound Blasters, support, and which was also used for the later Wave Blaster standard.

An actual MPU-401 is somewhat different though. For starters, strictly speaking, the MPU-401 is not a PC device. It is an external module, and it looks like this:

As you can see, it has a connector labeled “To Computer”. So what do you connect this to? Well, this explains why Roland designed it like this: the MPU-401 is the “MIDI Processing Unit”. This unit contains a Z80 processor and some support chips, and does the actual MIDI handling:

So technically this is like a computer on its own. Roland decided to make this part generic, so they only had to design a simple interface for the various computer systems around in the 80s, and use the same MPU-401 unit for all of them. For the PC, the interface was a simple ISA card, known as MIF-IPC:

So for the PC, a Roland MPU-401 interface was actually a setup of both the ISA card and the MPU-401 unit.

Intelligent Design

Okay, so we mentioned the ‘dumb’ mode. But what is ‘intelligent mode’? This is why the MPU-401 is such a complex design: it is actually a basic hardware MIDI sequencer (Roland calls it a ‘Conductor’ in its manual). That is: you can send MIDI data to it in advance, and queue it up with timestamps, and the device will send out the MIDI data at the correct time by itself. Then it will signal the host with an interrupt that it is ready to receive new data.

Why did they make a MIDI interface so complex back in the day? These were early days. MIDI was introduced somewhere around 1983, and the Roland MPU-401 is to my knowledge the first MIDI interface available for home/personal computers. It was introduced in 1984, and supported computers such as the Commodore 64, Apple II and MSX, and of course the IBM PC.

So we are talking early 80s 8-bit machines, with limited memory and processing power. Sure, these machines could play music, but this was generally synchronized to the display refresh rate, so you had an update rate of 50 or 60 Hz. This severely limited you in terms of tempo. MIDI allowed for much higher resolutions, as it was aimed at professional musicians in a studio setting.

Now, while I developed a MIDI player myself a few years ago, and explained how you could get the high resolution of MIDI even on an original IBM PC or PCjr, there are two obvious limitations here:

  1. My player requires preprocessing of the data, so you cannot play MIDI files as-is. The music can only be output as a single data stream; there are no separate MIDI tracks, as you would have in a sequencer.
  2. Playing a MIDI file is possible, but recording and editing of MIDI will not work in realtime with this approach, as you would need to convert between MIDI and preprocessed data.

So for various reasons it made a whole lot of sense to use an MPU-401, which has hardware that is custom-designed to record and play back MIDI data in realtime, with its own clock (we have seen that the IBM Music Feature Card also has its own clock that is designed to handle MIDI timings, although it is not as advanced as the MPU-401 is). You could easily guarantee that your MIDI data was played at the correct time, without having to worry about what the CPU was doing. And editing MIDI data was also simpler, as you wouldn’t need to convert to some internal system clock that was not even remotely similar to the timestamps that MIDI uses (which are based on a 1 MHz clock).

So in short, ‘intelligent mode’ is where you offload playback to the MPU-401, rather than just outputting the MIDI data in realtime, where you need to make sure that your CPU output routine is accurately timing each byte. The MPU-401 has a total of 8 ‘tracks’ internally, where you can queue up timestamped data for each of these tracks in a fire-and-forget fashion.

Who cares?

One of the problems with the MPU-401 was that it was very expensive. Aside from requiring the ISA board and the MPU-401 unit, you’d also need an actual synthesizer. In 1987, Roland introduced the MT-32, a somewhat affordable semi-professional MIDI module (a synthesizer without a keyboard attached, basically), which became somewhat of a de facto standard, as Sierra and various other game developers started to support MPU-401+MT-32 as an audio device.

But by the time MIDI started to gain traction, we were approaching the 90s, and PCs had become much more powerful. On the one hand, playing MIDI in realtime wasn’t such a big problem anymore. On the other hand, games generally had MIDI music that was already quantized to such a relatively low resolution (somewhere in the 140-270 Hz range, for example) that ‘dumb’ UART-mode output was good enough. And since the PC has always been the platform of the lowest common denominator, and most cheap sound cards only supported the UART mode of the MPU-401, most games don’t require ‘intelligent mode’ compatibility.

For the small selection of software that does require a full MPU-401, though, your options are limited. There is a software solution known as SoftMPU, which emulates the intelligent stuff on the CPU side, so you can upgrade your UART MPU-401 to a full intelligent MPU-401 with this TSR. The downside is that it requires a 386 CPU or better. So for your 286 or older machine, you still need a hardware solution for true MPU-401 compatibility.

But we can anyway

As technology progressed, it became simpler and cheaper to integrate both the host interface and the MPU-401 on a single ISA card. Roland introduced its own MPU-IPC, where the external module was now a simple breakout box:

And later still, they introduced the MPU-401AT, which had mini DIN connectors on the card itself, and a Wave Blaster interface to plug a MIDI module directly on the card:

And of course other parties made their own clones of the concept. The original MPU-401 setup was rather complex and expensive to clone. But when you can integrate it all on a single card with off-the-shelf parts, it becomes feasible to try and make a cheaper alternative to the Roland offerings. This is where the Music Quest comes in. It was one of these clones, and a good one: with the later v010 firmware, it is considered to be 100% compatible with Roland:

So, after this lengthy introduction, we can now finally get to the actual card this blog is about:

Because, well, if a card is made mostly from off-the-shelf parts, like a Z80 and an EPROM, then in this day and age it is not that difficult for a single skilled individual to design and build their own PCB, dump the ROMs, and clone the clone.

And this is what was done here. What’s great about this is that it is a good clone of a good clone: people can now get their hands on reliable MPU-401-compatible hardware at relatively low cost. I received mine a few years ago, and wanted to write about it, as I was quite happy with the product. Better late than never, I suppose.

I received a somewhat later revision, which also includes a Wave Blaster interface:

And another revision has been developed since, which uses a higher level of integration on the PCB:

So, if you want a good MPU-401 interface for your retro DOS machine, either for running MIDI software, or developing your own MPU-401 routines, I can recommend this card. For some more information, software, and a link to the order form, you can go to the Serdashop store page for this card.


A new game for DOS/EGA/AdLib: Super Space Fuel Inc.

It seems there has been a boom in Match-3 games. They seem to be a popular target for casual gaming on mobile devices/in browsers. But what’s better than playing Match-3 on your mobile phone? Playing Match-3 under DOS!

I say ‘new game’, but it was released in August last year. So yes, it’s a ‘new’ game for DOS, but I’m rather late to the party. Anyway, the game is Super Space Fuel Inc., and it was made by some friends I know from my demogroup DESiRE.

Code is done by sBeam, graphics by VisionVortex, and music by No-XS.

The game requires EGA, and has AdLib music. In theory it can run on any machine with an 8088 CPU or better. In practice you probably want at least a Turbo XT or 286 for the best gaming experience.

The game was written in Turbo C. You can download the game for free, and if you make a small donation, you can also download the source code, if you are interested. Of course I hope you will donate, as any retro DOS stuff, especially with EGA and AdLib, deserves a reward.

Dosgamert made a gameplay video that gives a decent impression of the game:

You may recognize the music. And I must say, the graphics and animation look very slick. Hope you like it!


Windows 7: the end is nigh

Windows 7 was originally released on October 22, 2009. It’s had a good run of over 13 years so far. At its introduction, I wrote a somewhat cynical piece, given that people were so negative about Windows Vista, and so positive about Windows 7, while technically Windows 7 was much closer to Vista than it was to the XP that people wanted to stick with.

And during its lifetime, Windows 7 remained a favourite, particularly because Windows 8 was a mixed bag, as Microsoft tried to create a single OS for both desktops and tablets/smartphones, leading to a somewhat confused UI with large tiles to cater to touchscreens. And technology-wise, there was little reason to move from Windows 7 to Windows 8 or 8.1 either.

This changed when Windows 10 arrived in 2015, where the UI was more friendly to desktop users again, and Windows 10 also introduced new technology such as DirectX 12. Even so, Windows 7 remained popular, and Microsoft continued to support it.

Until recently, that is. The first cracks in the armour are starting to show. For starters, Microsoft officially ended extended support for Windows 7 on January 14, 2020 (mainstream support had already ended in 2015). But despite this, various software and hardware vendors would still release products that supported Windows 7 to a certain extent, including Microsoft themselves.

But I’ve run into a few devices already that no longer have Windows 7 drivers. I got an Intel AX210 WiFi adapter, which does not have drivers for Windows 7 at all, requiring me to install an older WiFi adapter to get internet access when I boot my machine into Windows 7.

My GeForce video card also hasn’t received mainstream driver updates for a while now. It only gets ‘security updates’ from time to time.

And when you install Visual Studio 2022, it also gives a warning that not everything will work correctly. Most notably, .NET 4.8.1 and .NET 7.0 are not supported on Windows 7. On the other hand, it’s somewhat surprising that Visual Studio 2022 installs and works on Windows 7 at all, even if some features are not available. Vista was nowhere near as lucky: it was already cut off after Visual Studio 2010. Somewhat ironically, this meant that even though Vista supported .NET 4.5 out of the box, and could also support .NET 4.6, there was no Visual Studio environment you could run on Vista itself to develop for these versions of .NET.

Another sign is that Microsoft has released the last update of Edge Chromium for Windows 7. There is now a notification that you should upgrade to Windows 10 or newer. Google will follow suit with Chrome.

Anyway, it seems we’re in the final stages of Windows 7 support. Windows 8 and 8.1 have already bitten the dust, as have early versions of Windows 10. We have reached the point where you need a fairly up-to-date version of Windows 10 to get decent driver and application support.


The myth of the vertical retrace interrupt

This is a draft from March 18th, 2019. We’re slowly cleaning out the backlog. The few remaining drafts are going to take more time to work into an article though, as actual code or configuration is required to work out the idea I wanted to write about. This article was also like that, except that someone else ‘adopted’ it along the way.

The topic is vertical retrace interrupts, specifically on the early PC platform. The first few display standards on the PC did not have any kind of display interrupt. For MDA, CGA and clones based on these standards, such as Hercules and Plantronics ColorPlus, the only way to get any indication of the video signal was to poll the status registers. This changed in 1984, with two introductions: the IBM PCjr and the EGA standard.

Both the IBM PCjr and EGA introduced a vertical retrace interrupt (or vsync interrupt), but in good IBM tradition, they were incompatible with each other. About the IBM PCjr I can be brief: there was only one IBM PCjr, and only one successful clone of its video standard, the Tandy 1000. On the real PCjr the vertical retrace interrupt works flawlessly, and on the Tandy 1000 it is the same, with one small caveat: the PCjr uses IRQ5 for its vertical retrace, which was also used by certain other hardware, such as printer ports and sound cards. This is why certain Tandy 1000 models allow you to disable the vsync interrupt on the motherboard, so IRQ5 can be used by other hardware without conflicts. Clearly, disabling it breaks compatibility with the PCjr’s vsync interrupt, but as long as IRQ5 is enabled for vsync, it will work fine on Tandy machines as well.

So what is this titular myth then? In the documentation of EGA and VGA, the vsync IRQ is clearly documented. But in software it seems to be elusive. No software seems to actually use it. So what is going on? Why wouldn’t you use such a feature?

The can of worms is with EGA, for multiple reasons. Firstly, the EGA standard was cloned by numerous vendors. Secondly, VGA was a superset of EGA, and as such backward-compatible with its vsync IRQ (or at least it should be), and VGA was also cloned and extended by numerous vendors, and lives on in PC display controllers to this day. Lastly, even IBM made a mess of things with VGA.

A few years ago, I got a PCjr myself, and started using the vsync IRQ on it. It can be quite a useful way of synchronizing the screen with audio and other routines. Since I vaguely knew about a vsync IRQ on EGA, I wanted to find out if I could use it on EGA/VGA systems as well. But I also knew that nobody seemed to be doing that, and was vaguely familiar with issues regarding its use. I tried to do a survey, to figure out what’s what. I wrote some small test utilities and created some threads on retro-computing forums, asking people to test on all kinds of EGA and VGA cards, and report their findings.

I managed to get quite a bit of useful information. I had already concluded for myself that using the vsync IRQ for EGA/VGA was indeed not feasible, as there would be too many compatibility issues in practice. So my interest in the vsync IRQ shifted to just documenting what I had found. Which would have been this blog post, but it never got written at the time, and was more or less forgotten.

However, since my threads and utilities were out in the open, other people also got interested in the issue. And a few months ago, PCRetroTech posted this video on their YouTube channel:

It pretty much wraps up my experiments and presents all the information you need to know about the vsync IRQ, which had been scattered around some threads on some forums, or only known by a few people who bothered to dig into it.

In short I suppose we can mention these problems:

  1. Real IBM EGA seems to work reliably, but clone chipsets may or may not. At any rate, the interface is really clumsy because it’s not a fire-and-forget mode like on the PCjr. You have to acknowledge every interrupt and then set up the CRTC to generate the next interrupt. It will always be somewhat error-prone.
  2. Some clones appear to have some states inverted, so they may work reliably, but you’d need to use alternative code.
  3. IBM VGA was designed for PS/2, which has level-triggered interrupts, instead of the edge-triggered interrupts on classic ISA systems. IBM produced a VGA adapter for ISA systems (pictures here), but left the IRQ line disconnected from the ISA bus. This may have been because the chip was not designed to work reliably in an edge-triggered configuration. At any rate, this means that even the original IBM VGA standard does not support IRQ in all configurations.
  4. Various clone chips do implement the IRQ, but whether or not it is connected to the ISA bus depends on who integrated the VGA chipset on the ISA board. There are boards where there is a jumper to enable/disable the IRQ. There are also boards where the line is not connected at all. People have experimented with running a wire from the ISA slot to the correct pin on the VGA clone chip, and in some cases this actually worked reliably.
  5. EGA chose IRQ2 for the vsync IRQ. This is the same IRQ that the AT uses to cascade its second interrupt controller. In effect, the real IRQ2 line on the ISA bus is rerouted to IRQ9 on an AT (you will often see it listed as IRQ2/9), and the interrupt output of the second controller is connected to IRQ2 on the first controller. This means there are even more potential problems for using the vsync IRQ on ATs and newer systems, as the IRQ signal actually travels through both controllers, and both have to be signaled and acknowledged correctly.

All these potential problems related to the vsync IRQ explain why nobody ever seems to have used it on EGA/VGA. Developers probably all came to the same conclusion sooner or later: it’s just never going to work reliably on a wide range of configurations.

I guess all the low-hanging fruit among the drafts is now gone, so it will take a bit longer to address the few remaining ones (and I have already pulled this one ahead, so we are no longer working in chronological order). Hopefully that won’t mean they’ll just lie in waiting for some more years, as the earlier ones have.

Posted in Oldskool/retro programming

Managers in software

This draft was originally made on September 18th, 2018, and contains nothing but the title. So what was it that I wanted to say about managers in software? I suppose some of it may have been covered in the article I have since written on Politicians vs entrepreneurs.

But I think this may be more related to a point I made in an earlier article:

The way I see it, there are two types of architects:

  1. The architect who designs the system upfront, documents it, and then passes on the designs to the team of engineers that build it.
  2. The architect who works as an active member of the team of engineers building the system.

I think a similar rough binary division can be made with (project) managers in software:

  1. Managers with a background in management
  2. Managers with a background in software development, or at least a closely related field of engineering

I have found that in practice, you will run into type 1 more often. I suppose that is because there are more managers of this type, and they are cheaper to hire.

Related to this, I suppose you can also make a rough binary division in how to manage a team/project. There are a number of stakeholders, and the manager acts as a linking pin between them. Roughly, you’d have upper management (to whom the manager reports), the client, and the team.

  1. The manager considers himself a representative of upper management and the client, and negotiates on their behalf with the team
  2. The manager considers himself part of the team, and negotiates with the upper management and client as a representative of the team

Now these management approaches tend to overlap strongly with the manager types. The reason for that is probably that without a technical background, it is difficult to actually be part of the team, as you will only superficially understand the technical details. It’s just ‘safer’ to try and please management and the client: firstly because you understand the project better at their level, and secondly because it’s straightforward to just try and deliver the goals that they set.

However, that means you are working inside the limitations set by upper management and the client. In order to get the full potential out of your team, and thereby the developed product, you really need to get the team’s input. Allow them to signal potential roadblocks early, and allow them to suggest alternative solutions. In the end, it will just come down to upper management wanting to get the best possible return on investment, and the client to get the best possible solution to their problem.

So that is what a manager should try to do: look at the bigger picture. Upper management and clients often think they know what they want, but their ideas are rarely the best possible solutions to their actual problem. With a talented and inspired team, you can usually think outside the box and suggest better ideas. The key is to show upper management and the client that your solution is better than the one they were expecting. That should convince them, as it gives them what they want, only more and better than they were expecting.

In turn, the team is also happier, because they can give valuable input and develop solutions that they believe in, rather than just going through the motions and doing what others order them to do, knowing it’s not going to result in the best product they could have made.

Posted in Software development

Why AMD should never have made x86 processors in the first place

Another draft, this time created on June 22nd, 2016. The title may have been written down in a hurry, and seems a bit click-baity. I suppose it needs a bit of nuance: the point I wanted to make here is not about the early x86 CPUs that were made as a second source under license from Intel. It has to do with the CPUs from the 386 onwards, which AMD marketed under their own brand, and at least initially, without a license from Intel.

I’m not sure how many people would actually agree with the notion that AMD shouldn’t have made x86-based processors under their own brand. But it is interesting to see why and how AMD actually got to that point.

My draft contained the following links:


http://jolt.law.harvard.edu/digest/patent/intel-and-the-x86-architecture-a-legal-perspective (working archived link: http://web.archive.org/web/20170120173519/http://jolt.law.harvard.edu/digest/patent/intel-and-the-x86-architecture-a-legal-perspective)



In short, it is about the legal history between AMD and Intel. AMD and various other companies acted as second sources for Intel chips, which meant that a license was in place between Intel and the second-source companies. Intel supplied the licensed designs to these companies, so they could build the Intel designs as-is, which meant that these second-source chips were 100% compatible and interchangeable with the Intel ones. This wasn’t necessarily Intel’s choice: IBM insisted that if they were going to use Intel as a supplier for their PC product line, they wanted secondary sources, to rule out supply problems if anything happened at Intel (imagine that… we have come to know Intel as ‘Chipzilla’, but back then they were apparently still a small and potentially unreliable source to the giant of that time, IBM).

But in the era of the 286, things started to change. On the one hand, IBM was losing its grip on the market, now that clones were flooding it. This meant that IBM could no longer put such demands on Intel: the market was big enough that Intel could in theory do without IBM as a client. So Intel wanted to get rid of the second-source deal. They did not provide designs for the new 386 CPU to second-source companies, and they did not renew the deal, so it would just run its course and that was that.

On the other hand, some second-source companies would not just stick to the designs provided by Intel. For example, NEC created the V20 and V30 CPUs, which were pin-compatible with the 8088 and 8086 respectively, but implemented the instruction set of the 80186, and had some performance tweaks. Other companies would not go quite as far, but would still modify the designs to fine-tune the performance for their manufacturing process, and allow higher clock speeds. So eventually we got 808x-like CPUs at 10-12 MHz, and the 286 was eventually pushed to 20 and even 25 MHz (by Harris, a company that has since left the mainstream CPU market).

So, if Intel didn’t provide AMD with the 386 design, how did AMD build a 386 anyway? By reverse-engineering. And this is where AMD started to do dubious things, in a legal sense. AMD figured that the license deal SHOULD have given them access to the 386 design, and that by that logic their reverse-engineering was not illegal.

Clearly Intel did not agree with that, and sued AMD for their 386, which was based on Intel’s design, so technically it was an unlicensed copy. The AMD 386 was held up in court for many years. Intel released the 386 in October 1985, and AMD had their copy ready in 1990. By March 1991, the case was finally settled, and AMD could release their Am386 onto the market.

Now, people commonly think AMD won the case, meaning that Intel was in the wrong, and AMD actually *did* have the rights, according to the license. But if you actually read the outcome of the lawsuit, Intel was technically correct, and AMD technically infringed on the license. However, the arbitrator ruled that x86 was too important to the market for Intel to have a monopoly on the instruction set. So Intel was forced to offer AMD a new cross-license agreement. As the Harvard article writes:

While the arbitrator found that Intel was not obligated to transfer every new product to AMD or accept any of AMD’s products, he also decided that Intel had breached an implied covenant of good faith to make the relationship work. Id. at 1013. But this legal strategy paid off for Intel. By refusing to give AMD a license to the 386 and leaving the issue tied up in a lengthy arbitration, Intel excluded AMD during a critical period of growth in the personal computer market, during which the PC platform’s market share grew from 55% in 1986 to 84% in 1990; leaving Apple’s Macintosh at a distant second place with just 6%. Without access to Intel’s technical specifications, AMD took over five years to reverse-engineer the 80386, which was substantially more complicated than and contained almost ten times as many transistors as the 8086.

So in essence, this was a more or less arbitrary decision, based on antitrust law, rather than an enforcement of license terms. Technically speaking, the Am386 wasn’t legal, and Intel was in the right, legally. But it was *made* legal by the ruling. And that is unusual. If Intel hadn’t been as big and successful as it was, it would simply have been able to claim ownership, copyright and trademark on its 386, given that Intel was the sole designer and developer of the product, and had retained all rights. But as you can read, the fact that x86 became so successful, and was so difficult for AMD to reverse-engineer, was used against Intel, and in AMD’s favour.

Which is why I would say: AMD should never have made the Am386, from a legal standpoint. But they just bluffed their way through, and managed to crawl through the eye of the needle. I suppose sometimes crime does pay.

Posted in Hardware news

More donut madness

This draft, created on May 27, 2015, only had a single link: http://www.lugreat.com/ihe/pdfs/v1article3.pdf

And the problem with a link that was stored over 7 years ago? It’s no longer available, and it also wasn’t cached by the Wayback Machine. But, as luck would have it, I saved a copy locally, so I can provide it to you:

It was a paper written by Ali Barimani, titled “A review of methods of creation of torus and torus-like objects”, apparently published in 2007. It seems to be a somewhat obscure paper, as I haven’t been able to dig it up via Google Scholar or other academic search sites. All I could find was this link, which includes the paper. It appears to be a random archive site, which archived the publication that this paper appeared in, presumably from the original ‘lugreat.com’ domain, which is now gone. Anyway, let’s look at what we have here.

The paper gives some background information on certain properties of the torus, or donut, and describes a few ways to generate the geometry for 3D rendering a textured, shaded torus.

I originally saved the link to this paper, because it was a good starting point for talking about why I like to use a torus for my retro rendering projects. And that has to do with these properties of the torus.

For starters, a torus is not a convex object. A convex object has a number of properties that make it easy to render. Most notably, convexity means that no matter from which orientation you look at the object, along any line of sight there will only be one surface facing you and one surface facing away from you, and the surface facing you will be closer to you than the surface facing away. In other words, all backfaces are behind all frontfaces by definition. This means that when you render a solid convex object, all backfaces are occluded by frontfaces, and as such, they never have to be drawn. So you can draw a convex object correctly by just culling all backfaces; no additional depth sorting is required.

Since a torus is not convex, backface culling alone is not enough: from certain orientations there can be frontfaces behind other frontfaces, which means the frontfaces have to be sorted on depth in some way. That is one reason why I chose the donut for rendering: it shows off that my renderer can perform efficient and correct depth-sorting for non-convex objects.
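Both steps can be sketched in a few lines of Python (names and conventions here are my own: screen-space triangles, counter-clockwise winding means front-facing, and larger z means farther away):

```python
# Minimal painter's-algorithm sketch: cull backfaces, then sort the
# remaining triangles back-to-front. For a convex object the sort
# could be skipped entirely; for a torus it cannot.

def cross(a, b):
    """3D cross product of two vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def is_frontface(tri):
    """Screen-space winding test: sign of the face normal's z-component."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    e1 = (x1 - x0, y1 - y0, z1 - z0)
    e2 = (x2 - x0, y2 - y0, z2 - z0)
    return cross(e1, e2)[2] > 0     # counter-clockwise on screen

def visible_sorted(tris):
    """Backface-cull, then painter's sort: farthest triangle first."""
    front = [t for t in tris if is_frontface(t)]
    front.sort(key=lambda t: -(t[0][2] + t[1][2] + t[2][2]) / 3.0)
    return front
```

A real renderer would sort polygon indices rather than copying vertex data around, but the principle is the same.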

Another interesting property is that a perfect torus has a perfectly smooth, rounded surface. This makes it an interesting target for more advanced types of shading, such as Gouraud or Blinn-Phong smooth-shading, or environment-mapping: trying to make a low-poly torus look smoother than its geometry actually is. It is an especially nice case for specular highlights. The “chrome donut” is like a Hello World for a decent software renderer targeting an early 90s machine such as an (accelerated) Amiga or 486/Pentium PC. It’s also an interesting case for early 3D accelerators.
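As an aside, the standard parametric form of the torus makes it easy to generate both the geometry and the exact per-vertex normals that smooth-shading needs. A minimal Python sketch (the function name and parameters are my own, not taken from the paper):

```python
import math

def torus(R, r, n_u, n_v):
    """Generate vertices and exact unit normals for a torus.

    R: distance from the torus centre to the centre of the tube
    r: radius of the tube itself
    n_u, n_v: tessellation steps around the ring and around the tube
    Returns two parallel lists of (x, y, z) tuples.
    """
    verts, norms = [], []
    for i in range(n_u):
        u = 2.0 * math.pi * i / n_u
        cu, su = math.cos(u), math.sin(u)
        for j in range(n_v):
            v = 2.0 * math.pi * j / n_v
            cv, sv = math.cos(v), math.sin(v)
            # standard parametric torus
            verts.append(((R + r * cv) * cu,
                          (R + r * cv) * su,
                          r * sv))
            # the exact normal points from the tube centre to the vertex
            norms.append((cv * cu, cv * su, sv))
    return verts, norms
```

Having the analytically exact normal at every vertex is what makes the torus such a convenient test object: any smoothness artifacts you see come from the shading, not from approximated normals.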

Posted in Oldskool/retro programming, Software development