Area 5150, a reflection

As I said in my previous coverage of Area 5150, I was not sure whether I should comment on the demo. But I have given it some thought, and have decided to give some background info on the development process.

There was a 7-year period between 8088 MPH and Area 5150. After 8088 MPH, the original team kept in contact. We evaluated 8088 MPH, and tried to assess its strong and weak points, both from our own perspectives, and also taking into account what other people said about it. We used this as input for our next demo.

For the first 5 years of that 7-year period, I remained part of the team (I left around August 2020). This even included an attempt to release the demo at Revision 2020, which we ultimately aborted, partly because the demo wasn’t quite finished, and partly because we did not want to release it at an online-only party.

Most of the demo that you now know as Area 5150 was already done at that point, however. The main missing piece was the music. Most of the basic effects and artwork were already done, and in fact, some of them ended up in Area 5150 with little or no changes. Others were slightly modified to match the final theme and music of the demo.

Criticism of 8088 MPH

Perhaps the biggest ‘weakness’ of 8088 MPH, I thought, was that it has a very ‘oldskool’ feel to it: it was an oldskool ‘effect show’. You see an effect for a certain period of time, then some in-between text from the loader, and then the next effect is shown. There is music playing in the background, but there is little or no relation to what is happening on the screen.

Related to that was also that some effects were shown for quite a while, which broke the overall ‘flow’ of the demo. And while there were some minor transitions into or out of certain effects, they were developed independently, and had no connection to any kind of overarching ‘theme’ or anything.

I mean, sure, you could argue that it fit the aesthetic that we were going for anyway. Which is true. But it is also an aesthetic we chose because it was technically the least demanding. 8088 MPH was very much driven by all sorts of technical-tricks-turned-into-effects, but they were isolated effects. Making a cool effect is one thing. Making a number of cool effects and smoothly transitioning from one to the next is even more challenging, especially if you also want to sync it to the soundtrack, and have some kind of ‘theme’ to tie the effects together in terms of style and graphics. It was difficult enough to get a full demo working with all the effects we were aiming for, from a technical point-of-view. Trying to also take on cutting-edge design at the same time would have been too big an effort all in one go. So we had to pick our battles.

Getting a demo that smoothly flows from one effect to the next requires more than designing effects that transition from one scene to the next. You also need to design your code and data so that you don’t have to load huge chunks of data from disk in one go, or spend many seconds precalcing certain things. You need to cut everything up into small pieces so you can keep the show going on screen.

Another thing was that the music, with the exception of the endpart, was not exactly groundbreaking from a technical point-of-view. It was a very ‘safe’ 60 Hz PC speaker routine, with 3-channel or 4-channel multiplexing. It didn’t sound bad, but it had been done many times in various games from the 80s and 90s, so it was par-for-the-course at best, technically.

So those are some of the shortcomings we wanted to address in the next demo. I dug up various examples of C64 and Amiga demos that showed good ‘flow’ and had interesting transitions between effects that might translate to the PC in some way. Looking at the responses to the demo, various people actually say it feels like a C64 or Amiga demo. Someone even said that they are the Booze Design (famous C64 group, who made some of the demos I showed for inspiration) of the PC platform. So I would say: mission accomplished.

And for those who have been following this blog, it is no secret that I mainly focused on audio after 8088 MPH. One of the first things I did after 8088 MPH was to make a few improvements to MONOTONE, so it would be a more effective tool for composers. MONOTONE already supported other speeds than just 60 Hz, and using 120 Hz or 240 Hz was considered as a possibility for future demos.

I also looked into streaming digital audio from disk, and using various old sound devices from the 80s, which may have been an acceptable target. Being able to stream from disk while also playing music and showing graphics would also be an important factor for getting a smooth flow. Eventually some of these experiments ended up in the release I did recently at Revision 2022: my 8088-compatible ST-NICCC 2000 port, which streams the data from a floppy disk, like the original (a final version and write-up should arrive eventually).

For the music in Area 5150, Shiru was ultimately recruited, based on the System Beeps music disk he made, with his own VST-powered 120 Hz PC speaker engine. He has published an article on it. His engine takes the music up a notch from the 60 Hz MONOTONE music in 8088 MPH. Although, to be fair, there were some 80s/90s games that also used more than 60 Hz for PC speaker (I believe Zak McKracken is one, and Commander Keen may not have music on PC speaker, but its sound effects run at 140 Hz), so it’s a bit more state-of-the-art, but still not technically groundbreaking.

Mind you, those games generally didn’t do anything time-critical on CGA, so their music routines could run at any rate. For CRTC effects on CGA, you need to carefully design your effects so that the two PC speaker updates per frame that 120 Hz requires can be distributed evenly in your per-frame code somehow.

Please adjust antenna for best reception

8088 MPH was received as a huge success. Not all of that was down to the demo itself; there was also the novelty factor. It was the first true megademo for the original IBM PC, and as far as I know it was also the first megademo to target composite output (the only other CGA composite production I know of is 8088 Domination). At the very least, it was a platform that was ‘new’ to the demoscene, as the PC demoscene didn’t start until around 1989, and really took off in the early 90s when 486s, VGA and sound cards had become commonplace. Only people who had experienced early PC gaming in the 1980s would have been familiar with what 8088, (composite) CGA and a PC speaker could do.

But we knew that whatever we did after 8088 MPH, the ‘novelty factor’ would be gone. Any kind of sequel was unlikely to have the same impact that 8088 MPH would have, simply because it would not be the first of its kind. People have seen and heard it before.

And from a technical point-of-view it was a similar story: with 8088 MPH we could do many things for the first time, break a lot of limitations of the hardware. It was unlikely that we would be able to combine as many technical breakthroughs in a sequel as we did for 8088 MPH.

And while there certainly are a few technical breakthroughs again in Area 5150, there are also various effects that are based on basically the same ideas that were already in 8088 MPH, but just executed better, with more polish, resulting in new effects.

So I think we can say that a major goal for a successor was: better flow and better polish.

Central to Area 5150 is the 80-column (80×25) textmode, which effectively gives 640×200 resolution at 16 colours. This mode was also used in various parts of 8088 MPH, and the main issue with the mode is that accessing the video memory while the display is active results in so-called snow. The technical reason is that the memory is single-ported: it can be accessed either by the data bus, or by the video output circuit. CGA is designed to give the CPU priority, which means that the CPU can always access memory, and read and write operations are consistent. When the video output circuit tries to read memory while a CPU transfer is in progress, its read is simply ignored, and it just receives whichever data happens to be on the bus at that moment. This data is ‘random’, and as a result you get a kind of ‘snow’ pattern where some attributes and/or characters are not read correctly, but replaced with this random data on screen.

Why do other modes not suffer from snow? That is because they require only half the bandwidth. The CGA circuit inserts wait states onto the bus whenever the video output circuit needs to read video memory. This means that if the CPU wants to access video memory, it will wait until the CGA card has finished its transfer.

So why did they not just add extra wait states for 80c mode? Likely because that would have locked the CPU out of accessing video memory almost entirely, making video access very slow. So they kept the single wait state and didn’t add more: a trade-off between speed and visual quality. You can actually see this on screen, as there are vertical columns with and without snow. The wait state happens to prevent snow in some positions, but not in others.

The plasma effect in 8088 MPH is one effect that runs in 80c mode, and tries to write only during the inactive periods of the display, to avoid snow. You can actually buffer some writes in main memory or registers and wait for the DISPLAY_ENABLE flag to go low before writing them to video memory without snow.
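To make that idea concrete, here is a minimal sketch of buffering writes and flushing them during blanking. This is not the actual 8088 MPH plasma code, just an illustration, and it assumes a DOS compiler that provides inp() and MK_FP() (Borland/Watcom/Microsoft style). Bit 0 of the CGA status register at port 3DAh reads 1 when the display is in a blanking interval, i.e. when video memory can be touched without snow:

```cpp
#include <conio.h>   // inp() on DOS compilers (an assumption about the toolchain)
#include <dos.h>     // MK_FP for far pointers

#define CGA_STATUS   0x3DA
#define DISPLAY_OFF  0x01   // bit 0: 1 = horizontal/vertical blanking in progress

struct PendingWrite {
    unsigned int offset;    // word offset into the B800h text buffer
    unsigned int value;     // character + attribute word
};

// Queue of updates prepared during active display (in main memory/registers).
static PendingWrite queue[64];
static int queued = 0;

void flush_during_blanking(void)
{
    unsigned int far* vram = (unsigned int far*)MK_FP(0xB800, 0);
    for (int i = 0; i < queued; ++i) {
        // Wait until the display is blanked, then write one word.
        // A real routine would batch several writes per blanking window
        // and be cycle-counted, rather than polling per write like this.
        while (!(inp(CGA_STATUS) & DISPLAY_OFF)) { /* spin */ }
        vram[queue[i].offset] = queue[i].value;
    }
    queued = 0;
}
```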

So basically, if you thought 8088+CGA was slow… it just got even slower, as you only have a very limited window of opportunity to update parts of the video memory without snow. The key is to stick to effects that only require very small updates to video memory. In the case of the plasma, the resolution was actually 80×25, so it was easy to update a large part of the screen at a high frame rate, at the cost of resolution, which is why the plasma looked somewhat blocky. For tweakmodes with higher vertical resolution, such as 80×50, 80×100 or 80×200, that won’t be possible.

Another trick, which wasn’t really used as an effect in its own right aside from the Kefrens bars, is reprogramming the screen offset mid-screen. This effectively allows you to choose any scanline from video memory at any part of the screen. Or well, technical limitations of CGA mean that you work in sets of 2 scanlines, so you can only start at even scanlines. At least, that is what we did for 8088 MPH.

As said, it wasn’t really used as an effect in itself, but it was required to make the 1024-colour mode possible. In order to use only the first row of the characterset, the CRTC had to be reset every other scanline. Effectively we were ‘stacking’ 100 frames of 2 scanlines on top of each other. These frames had no vertical blank area, so they fit together seamlessly on screen, leading to a continuous picture.

A new CRTC frame normally means that the screen offset (the pointer to video memory) is also reset to the starting value stored in the CRTC registers. So in order to display the pictures, we had to ‘manually’ point each frame to the correct position in video memory. We could have pointed it to any position if we wanted (and there were some simple transition effects that did this), but it wasn’t really exploited.

The upside of this trick is that it doesn’t require any access to video memory, only to the CRTC registers, so it does not generate snow.
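For reference, here is a hedged sketch of what such a start-address change looks like in code, assuming the usual Motorola 6845 register layout and the CGA CRTC index/data ports at 3D4h/3D5h, and again assuming a DOS compiler that provides outp():

```cpp
#include <conio.h>   // outp() on DOS compilers (an assumption about the toolchain)

#define CRTC_INDEX 0x3D4
#define CRTC_DATA  0x3D5

// Write one 6845 CRTC register (register select, then data).
static void crtc_write(unsigned char reg, unsigned char value)
{
    outp(CRTC_INDEX, reg);
    outp(CRTC_DATA, value);
}

// Point the display start address at an arbitrary word offset in video RAM.
// R12/R13 hold the start address in character (word) units; the new value is
// normally latched at the start of the next CRTC frame, which is exactly why
// the 'stacked' 2-scanline frames of the 1024-colour trick can each be pointed
// at their own scanline.
static void set_start_address(unsigned int word_offset)
{
    crtc_write(0x0C, (unsigned char)(word_offset >> 8));   // start address high
    crtc_write(0x0D, (unsigned char)(word_offset & 0xFF)); // start address low
}
```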

These are the two main ingredients for many effects in Area 5150: a few small updates to the video memory every frame, and manipulating which scanline is shown where on the screen. That allows you to do many tricks in 640×200 resolution at 16 colours, with no snow. It was already in the DNA of 8088 MPH, but at the time, we only made limited use of the possibilities. Now it was time to push it further with some polished effects.

Most of these effects were done by VileR. Many of them were actually done shortly after 8088 MPH.

Let’s run down some of the effects and discuss them.

The demo starts off with a very C64-like trick: the first effect manipulates the current screen itself, so it has a ‘seamless’ start:

Since you can assume what mode the screen is in, you can easily manipulate the existing contents. In this case by changing the font height to shrink the screen, and then playing with the position of the visible window to get an interesting first bouncing transition, right off the bat, before the demo even seems to have started officially. The border is also set to blue, so the effect stands out better.
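As a purely hypothetical illustration of the kind of registers involved (I do not know the actual intro code, so this is only the general idea), such a squash-and-bounce can be driven entirely from the CRTC, using the crtc_write() helper from the earlier sketch:

```cpp
// Hypothetical sketch only, not Area 5150's actual intro routine.
// Uses the crtc_write() helper from the earlier sketch.

// R9 (maximum scan line address) sets the character cell height minus one.
// Standard CGA text uses 8-line cells (R9 = 7); lowering it makes every
// character row shorter, visually squashing the existing screen contents
// without touching video memory.
void squash_screen(unsigned char cell_height_minus_one)
{
    crtc_write(0x09, cell_height_minus_one);
}

// R7 (vertical sync position, in character rows) effectively moves the whole
// visible window up or down on the monitor, which can be animated per frame
// to get a 'bouncing' picture.
void move_window(unsigned char vsync_row)
{
    crtc_write(0x07, vsync_row);
}
```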

Then a similar trick to 8088 MPH is pulled: a screen that appears to be 40c text mode, is suddenly displaying graphics. Spoiler: it is not actually 40c textmode. It is faked with ANSI-From-Hell graphics in 80c textmode (just like 8088 MPH faked textmode in graphics).

While it is possible to switch between certain modes per-scanline (we will get to that later), it is virtually impossible to switch between modes reliably within a scanline. Also, it is not possible to switch reliably between 80c textmode and any of the other modes, because 80c is the ‘odd one out’, as also discussed above with the snow.

Here we have the first effect that manipulates the screen offset mid-frame. It uses 80c mode, but in overscan, so the entire screen is covered, including the border area. By cleverly re-using the same scanlines multiple times, the various gradient patterns can be drawn and animated. By updating the data on-the-fly, the colours are changed over time. Because it runs so smoothly, and the dithering at 640×200 resolution is so fine, it gives an Amiga copper-effect kind of feel with thousands of colours.

I suppose that’s also because RGBI has really bright colours. It’s only 16 colours, but it gives this VGA/Amiga-like RGB richness to it. The C64 doesn’t really have that, its colours are more subdued (as is CGA composite).

In the capture on YouTube you can actually see some snow at the bottom of the screen. This is where some updates to video memory are done. On a real monitor this is usually outside the visible area.

This is the first effect that was not in the works yet for Revision 2020. It appears to be a relatively simple 40c mode effect. 40c is used for two reasons: no snow, and the low resolution makes it easy to move large parts of the screen around.

It seems pretty straightforward, just some sort of masked blit routine. The main remarkable thing is again that it is in full overscan.

Now we are getting to the transitions. These image-transitions give a very C64-like feel, and give the demo a coherent feel and flow to it.

On C64, most graphics are character-based, and by manipulating the colorram, you can create interesting effects like fading colours in/out, creating glow-effects etc. This is often done in interesting patterns. It cannot be done per-pixel, but only per-character, in 40×25 resolution. But this coarse resolution is good enough for various effects.

This could be translated to CGA textmode, as it also has a sort of ‘colorram’ with its attributes per-character. Again, we see here that there are only slight changes per-frame, so these can be done in 80c mode without snow.
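As a rough illustration of the ‘colorram’ analogy (a sketch, not VileR’s actual routine): in 80-column textmode the attribute byte of each cell sits at the odd addresses of the B800h buffer, so a fade or unveil only has to rewrite those bytes, a handful per frame, following some precalculated pattern, combined with the blanking-window trick shown earlier:

```cpp
// Sketch only: rewrite the attribute bytes of a small batch of cells per
// frame, following a precalculated unveil pattern. 'pattern' holds cell
// indices in the order they should light up; 'budget' is however many cells
// can safely be updated per frame without causing snow (a tuning parameter).
void unveil_step(unsigned char far* vram,        // B800:0000
                 const unsigned int* pattern,    // precalculated cell order
                 unsigned int* cursor,           // progress through the pattern
                 unsigned int total_cells,
                 unsigned int budget,
                 unsigned char new_attribute)
{
    while (budget-- && *cursor < total_cells) {
        unsigned int cell = pattern[(*cursor)++];
        vram[cell * 2 + 1] = new_attribute;      // odd byte = attribute
    }
}
```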

These particular transitions have a sort of ‘pre-glow’ effect as the image is unveiled. Especially the searchlight-pattern is nice.

Another variation here where one image can be slowly dissolved into another, following a certain pattern.

And later we again see a similar effect, with a circular pattern this time.

Then we see another interesting transition. This one runs in overscan, and again relies on re-using scanlines. Only a few unique scanlines are required to create the wavy pattern. You can store about 100 unique scanlines in memory, which should be enough to get a smooth motion from left-to-right, or right-to-left. And the actual ‘transition’ in colours is only a few characters wide, so you could update it on-the-fly without snow if required.

This effect is probably my favourite. Again running in overscan. And very clever use of repeating scanline patterns to get the rotating animation going, and a nice smooth scroller as icing on the cake. The colour changes are also really nice. Again, this feels almost like an Amiga-effect, with that richness of the RGBI colours and dithering.

Then it transitions seamlessly into a classic twister-effect. From a technical point-of-view again very similar: repeating scanline patterns. But again, nicely done, with lots of colours, good dithering, and even a drop shadow.

The ending is also nice, again, making use of repeating patterns of scanlines to do a wavy animation, and then some small updates to dissolve them.

This is actually two effects really. The first is a vertical scroll. The interesting thing is that it actually scrolls per-scanline. With regular CRTC manipulation of the start offset, you can’t do that in this mode.

So instead this effect manipulates the timing of the frame as well, in order to get more fine-grained positioning. Tricky to do. On Amstrad CPC464, which also uses the 6845 CRTC, people have even managed to do 1/64th subpixel vertical positioning:

So while the sprite-effect in 8088 MPH performed perfectly smooth 60 fps scrolling, it still was a tad ‘jumpy’ because it had to scroll at 2-scanline increments. This does 60 fps scrolling with 1-scanline increments. As good as it gets.

Then we get to the crystal ball effect. This might remind you of other C64 effects. They tend to make scrollers with various patterns, usually full-screen, where they can manipulate the colorram to paint the letters.

This is similar, except it can’t be done fullscreen. The attributes are turned on and off based on the scroll-text, which is a 1-bit pattern.

Then we see another transition. This one is very straightforward: just plotting gray blocks in a simple pattern. Again, not too much per frame, avoid snow.

This is another effect I don’t think we had on the table yet for Revision 2020. It seems like basically a chunky table-based effect in 40c textmode.

Here is the classic checkerboard effect. It again makes clever use of repetitive scanline patterns and small per-frame updates. We had an earlier version of this effect, which ran at 30 fps fullscreen, or 60 fps if the width of the screen was a bit smaller. Apparently in this version it is 60 fps at full width, so some optimizations have been found to get the best of both worlds. Another classic demoscene effect done to perfection on the 8088+CGA, which can be crossed off the list.

The end of this effect introduces another transition. This one again runs in full overscan, and uses repeating scanlines.

Here is another classic effect, a sine scroller. Again this is done with a clever combination of updating small bits of videomemory on-the-fly and modifying the start offset register per-scanline. In this case the actual active display is very small, which allows for more CPU-time to update the scroller data without snow.

Then the actual dancing elephant. To my knowledge this is the first time that an actual animation has been tried in ANSI-from-Hell. The result is very nice. It also gives the PC a sort of ‘cartoony’ feeling, which it never quite had, because CGA was not colourful enough and animation was not fast and smooth enough to do this kind of thing. Until now.

Another nice touch is that this is one of the few parts in the demo where the visuals are closely synced to the soundtrack (another obvious one is the part where the Cacodemon is shown, and the soundtrack plays a reference to the DOOM soundtrack). So this shows the kind of flow and polish that we did not have in 8088 MPH yet, but which is common in good C64 and Amiga demos. Of course this idea was first popularized by Desert Dream, and then taken even further with Second Reality.

The next effect is a plasma. A nice design-touch is that it is applied to a part of an image, in this case the sunglasses. A design-idea that is common with C64 and Amiga demos.

The transition out of the plasma-effect… I am not entirely sure if this is another application of the earlier table-based transition pattern routine, or if this is its own thing. At any rate, it looks different, and appears only in this spot in the demo.

Then we have the isometric scroller. This is exactly the opposite of what it looks like. It looks like the bars at the top-right and bottom-left are stationary, while the text is moving diagonally. In actual fact, the text is not moving in video memory. The start offset is changed so the position of the text changes on screen. The bars are the ones that are updated (only the endpoints need to be redrawn), and only the wraparound has to be handled to keep the text scrolling once the end of the video memory is reached.

This same concept had already been used in the game Zaxxon. It scrolls the screen by updating the start offset, and redraws the HUD to make it look stationary. Zaxxon uses graphics mode instead of textmode though, so it is less colourful. It also does not scroll as smoothly.

This is obviously a reference to the game Marble Madness. A nice 8-way scroller in 80c textmode. Again, this uses scanline-accurate scrolling, at 60 fps, so perfect smoothness.

I had seen an earlier version of this effect, which had different artwork at the time, and was still targeting a composite mode. It wasn’t planned for inclusion at Revision 2020. This effect is apparently not quite finished yet, and should show a larger playfield in the final version. Currently it just stops when the ball reaches the two chips, and sits there for the remainder of the timeslot.

Another interesting touch, which is only apparent when watching at 60 Hz, is that the ball sprite uses some kind of temporal aliasing to reduce the blockiness that is inherent in ANSI-from-Hell graphics. It gives a better sense of transparency and roundness at the edges.

Ah, polygons! My area of expertise! While we were planning on doing translucent polygons, aka ‘glenz’, in the Revision 2020 version, that was a different routine from the one that ended up in Area 5150. This effect was done by Utter Chaos, who you may know as William Hart from his PCRetroTech YouTube channel. On his channel, he has published a video where he discusses this effect, among other things:

In his video, he also shows some rough footage from the early version that I had made. What is interesting here is that there is more than one way to skin a cat.

I originally decided to try a glenz effect after 8088 MPH, when I realized that the delta-rendering technique gives you transparency almost for free. That is, I store the colour per-span. That means I only have to blend a single ‘pixel’ to blend two overlapping spans together. The actual drawing is the same regardless of whether there is transparency or not: just draw all pixels in the span with the same colour.

I also figured out that in RGBI-mode, even though you only have 4 colours in graphics mode, you could set up a specific palette that conveys translucency between red and gray.

So I could extend the 8088 MPH polygon renderer with transparency, while maintaining all of its other advantages, including the ability to only draw the differences between two frames. The examples you see in the PCRetroTech video are done with this delta-rendering technique.
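To give a rough idea of what ‘storing the colour per span’ buys you (this is a simplified sketch, not the actual 8088 MPH or glenz renderer): each scanline of the object is a short list of spans, and per frame you only need to touch the places where the span list differs from the previous frame. Transparency just means that an overlapping span gets a different, blended colour value; the drawing itself is unchanged.

```cpp
#include <vector>
#include <cstdint>

// One solid run of pixels on a scanline.
struct Span {
    int     x;      // start column
    int     len;    // length in pixels
    uint8_t colour; // for 'glenz', overlap regions simply use a blended colour
};

// Sketch of the delta idea: redraw only where this frame's spans differ from
// last frame's. draw_run() stands in for the real (snow-aware) CGA blitter.
void delta_scanline(int y,
                    const std::vector<Span>& prev,
                    const std::vector<Span>& curr,
                    void (*draw_run)(int x, int y, int len, uint8_t colour))
{
    // Naive version: if nothing changed on this scanline, skip it entirely.
    // A real renderer would compute the minimal differing ranges instead of
    // erasing and redrawing whole spans.
    if (prev.size() == curr.size()) {
        bool same = true;
        for (size_t i = 0; i < prev.size(); ++i)
            if (prev[i].x != curr[i].x || prev[i].len != curr[i].len ||
                prev[i].colour != curr[i].colour) { same = false; break; }
        if (same) return;
    }
    for (const Span& s : prev) draw_run(s.x, y, s.len, 0);        // erase old
    for (const Span& s : curr) draw_run(s.x, y, s.len, s.colour); // draw new
}
```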

However, as you may recall from 8088 MPH, the delta-rendering technique was specifically chosen because it allowed you to draw large images quickly. Which is also why the cube and donut in 8088 MPH cover nearly the entire screen.

My early attempts showed that the translucent objects I chose were not scaling as well as I had hoped: there were more changes on screen per frame than expected, and they took longer to draw than I had hoped. It can probably be made to run acceptably with some more optimization, but trying to draw large translucent objects with delta-rendering may not be the best choice for 8088+CGA.

Utter Chaos however took a different approach: focus on rendering small objects instead, on only a small part of the screen. He ended up using a variation of the so-called eor-filler that is commonly used on the C64. The basic idea is similar to the area-fill of the Amiga blitter, except the blitter works horizontally, whereas an eor-filler works vertically.

The key is that you draw the outline of your polygons, and then eor (or xor in x86-parlance) each pixel with the next (on the next scanline). When you hit the first outline (going from outside the polygon to inside the polygon), the bits will be enabled, and eor-ing with 0 will keep them on, so you paint these pixels, and fill the polygon. Once you hit another outline, the bits will be disabled again with the eor, so you no longer fill.

The beauty of this technique is that you can draw all outlines first, and then do a single eor-pass over your entire buffer, and you’re done. On C64 anyway, where you can perform double-buffering easily. On PC you cannot. Also, on CGA, because of the waitstates, it is relatively slow to perform read-modify-write operations like xor directly on video memory.

So perhaps it’s better to have a buffer in main memory, and then copy that, to avoid flicker. But, once we do that, there is another trick we can pull. This time borrowed from the Amiga. I believe they call it ‘cookie-cut’. That is, you draw the outline in an offscreen buffer, and use that buffer as the input for the blitter fill. But you set the output directly to your framebuffer. So the outline is never physically filled in memory. The filled pixels are written directly into the output buffer. It’s like a fill-and-blit rolled into one.

This trick can also be made to work on the eor-filler for CGA: since you only read each pixel once, it doesn’t matter if you don’t write the actual filled pixel back to the buffer. So you don’t have to do read-modify-write in place. You can do the write directly to video memory. Even directly to the frontbuffer if you are fast enough.
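Here is a minimal sketch of that combined fill-and-write pass. It assumes a simple 1-bit-per-pixel outline buffer in main memory (the real effect obviously works on 2-bit CGA pixels and is written in cycle-counted assembly): running down each byte column, the accumulator XORs in the outline bits, and the filled result is written straight to the destination, so the outline buffer itself is never modified.

```cpp
#include <cstdint>
#include <cstddef>

// eor/xor-fill with 'cookie-cut' style output: read the outline buffer only,
// accumulate the fill state down each byte column with XOR, and write the
// filled rows directly to the destination (e.g. video memory), never back
// into the source buffer. XOR is bitwise, so each of the 8 pixel columns in
// a byte is filled independently.
void eor_fill(const uint8_t* outline,  // outline bitmap, one byte = 8 pixels
              uint8_t*       dest,     // destination buffer / framebuffer
              std::size_t    pitch,    // bytes per scanline
              std::size_t    height)
{
    for (std::size_t x = 0; x < pitch; ++x) {
        uint8_t acc = 0;                         // fill state for this column
        for (std::size_t y = 0; y < height; ++y) {
            acc ^= outline[y * pitch + x];       // toggle at polygon edges
            dest[y * pitch + x] = acc;           // write filled pixels out
        }
    }
}
```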

Another very neat trick done here is that the top and bottom of the screen are 40c textmode, while the middle is 320×200 4-colour graphics mode. So unlike the ‘fake’ combinations of text and graphics that we’ve seen before, this is the real thing.

Some other details worth mentioning are that it runs in overscan mode, and the vertical motion of the object is actually done by varying the point at which the switch from text to graphics mode is performed, rather than the object physically moving in video ram.

Another classic effect translated to 8088+CGA: the voxel landscape. This runs in 40c textmode, as it is another chunky effect that needs to update large areas on the screen without snow.

I know we had discussed the voxel effect, and gone into techniques used on Amiga, such as in Oxyron’s Planet Rocklobster, which had been adapted to Atari 8-bit in Arsantica 3. At the time we figured it should be possible to do a voxel like this on 8088+CGA, but I don’t think we had a prototype implementation ready yet for Revision 2020.

The parallax scroller. Another effect that cleverly makes use of repositioning scanlines. Technically clever, but the artwork really makes the effect shine as well. And again, this is 80c textmode, it’s ANSI-from-Hell.

Then one last effect before the end-part. Again, an 80c textmode image where the scanline positions are manipulated.

And the endpart, which you may think is the same effect as the previous one. But it’s not. The previous effect was a fullscreen 80c image. In 80c mode, each column takes up 2 bytes (character+attribute), so you have 160 bytes per scanline. Since CGA has 16384 bytes of memory, you can only fit 102 scanlines into memory. So a fullscreen image cannot use the full vertical resolution.

This however is not a fullscreen image: all of the video memory is spent on the top part of the screen, so that part can use the full resolution. The bottom just re-uses the same scanlines, but mirrored, with a sine-wave applied to give the illusion of a reflection on a water surface.

We actually used this 80×200 tweaked textmode in the ‘mugshot’ part of 8088 MPH as well, to get the best possible resolution for the scanned photographs. However, at the time, we just showed it as a ‘letterboxed’ screen, so you only got 100 filled scanlines, and black borders on top and bottom. A more basic way to work around the fact that there’s not enough memory for an entire screen.

Another interesting trick is that the routine plays one PWM sample on every scanline. This gives us a replay rate of exactly 15.7 kHz, which is simply the CGA horizontal scan rate. Slightly lower than the 16.5 kHz of the 4-channel MOD player used in 8088 MPH, but the trade-off is that there are now more interesting graphics.

What you may not know is that the Kefrens-effect in 8088 MPH was actually designed the same way: it is a cycle-counted effect, where originally there was a PWM-sample output at every scanline. But since our demo only had MONOTONE-based music, not sample-based (aside from the endpart, which was a specific mixing routine that could not be combined with the Kefrens effect), the Kefrens effect ended up just playing the MONOTONE music once per frame, and performing nops during the scanlines where the PWM output would otherwise be.

One last effect, which is the opposite of how the demo started: the effect drops us back to regular textmode first, then does a clever ANSI-animation, before dropping back to the DOS prompt as seamlessly as the demo started.

And that’s the end of Area 5150. I hope you liked some of the background information that I could provide.


Area 5150: 8088 MPH gets a successor

In case you missed it: over the weekend, a new demo for IBM PC with CGA was released at Evoke, under the name Area 5150. It placed first in the competition, which should not surprise anyone:

I was wondering if I should comment on this demo, as I’m reasonably familiar with the man behind the curtain. I suppose the creators will write blogposts about the various parts, and give some insight into how it works, so I suggest you just wait for those.

I am not one of the creators. I was also wondering if I should comment on that. Perhaps at a later time.

I suppose what I can say about this is that during development of 8088 MPH, we chose to make a demo that runs entirely on a composite monitor, as the ‘party trick’ was the 1024 colour mode, and it wouldn’t make sense to have to watch part of a demo on one type of monitor, and another part on another.

But this meant that the ‘other monitor’ was still a valid target for a future demo. There were various tricks and effects that we had developed before or during 8088 MPH, which work fine on RGBI monitors as well, and there were also some tricks, effects, or at least ideas we had, that would work on RGBI, but not on composite. Like the so-called ANSI-from-Hell graphics.

So the logical conclusion was that 8088 MPH was going to be the ‘composite’ demo, and a future demo (under an internal title different from Area 5150 at the time) would be the ‘RGBI’ demo. Funnily enough, shortly before 8088 MPH was finished, Genesis Project released GP-01, which was targeted at 8088 and CGA, using the ANSI-from-Hell mode to get 640×200 16-colour graphics. So they can claim a ‘first’ on that one, I suppose. At least, in a demo-environment.

The first use of this mode as far as we know is in a game by Macrocom called ICON: Quest for the Ring, from 1984. A demo program known as ICONDEMO, which promotes the game and showcases the special CGA mode, can still be found here. I suspect that Genesis Project got their inspiration from there. And I know for a fact that VileR got his inspiration there (the Macrocom trick is also part of what makes the 1024-colour magic happen in 8088 MPH, and it was already mentioned in his blogpost here). Trixter also covered ICON on his Oldskool “Life before demos” Shrine page (see under “Graphics Forged From Text Mode”).

A capture of ICONDEMO can be seen here:

Oh, and one more thing I want to mention is: overscan. This demo runs various effects in overscan mode. That’s not very obvious from a capture video (although the capture seen above does correctly capture the border environment, so you can see it if you know what to look for), and most emulators do not emulate a border area at all. But you should watch it on a real (CRT) monitor, and certain effects will fill the entire screen, much like ‘borderless’ effects on C64, Amiga and such.

Again, this is not entirely new. Trixter’s CGA Compatibility Tester already had some tests using border/overscan modes. But as far as I know, it’s the first time such a mode is used in a demo on CGA.

Oh, and for those who don’t get the “dancing elephant”-reference. Reportedly, an analyst commented that “IBM bringing out a personal computer would be like teaching an elephant to tap dance”.

This was a phrase also used by Lou Gerstner, former CEO of IBM. IBM was an ‘elephant’: a huge company, which was slow to respond to a changing market, and could not keep up with technological breakthroughs. Lou had to ‘teach the elephant to dance’, to turn around the outdated company, and become profitable again, and back at the forefront of technical innovation. And making a demo on an IBM PC with CGA is also like trying to teach an elephant to dance.

I suppose the obvious next step would be 8088 + Hercules?

A quick-and-dirty video of the demo running on a real IBM 5160 and CRT:

Edit: An official capture of the demo is now available on YouTube:

Edit: A video of the demo running on a real IBM 5155 and CRT by RetroErik:


GPU-accelerated video decoding

Picking up where I left off last time, I’d like to discuss a few more things when using video decoding libraries in your own graphics pipeline. Actually, the previous article was just meant as the introduction to the more technical implementation details, but I got carried away.

Video decoding… it’s a thing. A solved thing, if you just want your basic “I have a video and a window, and I want to display the video in the window”. There are various libraries that can do that for you (DirectShow, Media Foundation, VLC, FFmpeg etc), and generally they will use the GPU to accelerate the decoding. Which is pretty much a requirement for more high-end content, such as 4k 60 fps video.

But I want to talk about just how thin a line that GPU-accelerated decoding walks. Because as soon as you want to do anything other than just displaying the video content in a window managed by the library, you run into limitations. If you want to do anything with the video frames, you usually just want to get a pointer to the pixel data of the frame in some way.

And that is where things tend to fall apart. Such a pointer will have to be in system memory. Worst case (which used to be quite common), this triggers a chain reaction of the library doing everything in system memory, which means it will also use the CPU to decode, rather than the GPU. See, as long as the library can manage the entire decoding chain from start to finish, and has the freedom to decide which buffers to allocate where, and how to output the data, things are fine. But as soon as you want to have access to these buffers in some way, it may fall apart.

In the average case, it may use GPU acceleration for the actual decoding, but then copy the internal GPU buffer to a system buffer. And then you will have to copy it BACK to the GPU in your own texture, to do some actual rendering with it. The higher the resolution and framerate, the more annoying this GPU<->CPU traffic is, because it takes up a lot of precious CPU time and bandwidth.

But there’s a tiny bit more to it…

RGB32 vs NV12 format

In the modern world of truecolour graphics, we tend to use RGB pixelformats for textures, the most common being 8 bits per component, packed into a 32-bit word. The remaining 8 bits may be left undefined, or used as an extra alpha (A) component. The exact order may differ between different hardware/software, so we can have RGBA, BGRA, ARGB and whatnot, but let’s call this class ‘RGB32’, as in: “some variation of RGB, stored in a 32-bit word”. That is the variation you will generally want to use when rendering with textures.

For video however, this is not ideal. YUV colour models were used by JPEG and MPEG, among other formats, because they have interesting properties for compression. A YUV (again an umbrella term for various different, but related pixel formats) colour model takes human perception into consideration. It decomposes a colour into luminance (brightness) and chrominance (colour) values. The human eye is more sensitive to luminance than to chrominance, which means that you can store the chrominance values with a lower resolution than the luminance values, without having much of an effect on the perceived visual quality.

In fact, getting back to the old analog PAL and NTSC formats: These formats were originally black-and-white, so they contained only the luminance of the signal. When colour information (chrominance) was added later, it was added at a lower resolution than the luminance. PAL actually uses YUV, and NTSC uses the similar YIQ encoding. The lower resolution of the chroma signal leads to the phenomenon of artifacting, which was exploited on CGA in 8088 MPH.

In the digital world, the Y (luminance) component is stored at the full resolution, and the U and V (chrominance) components are stored at a reduced resolution. A common format is 4:2:0, which means that for every 4 Y samples, 1 U sample and 1 V sample is stored. In other words, for every 2×2 block of pixels, all Y-values are stored, and only the average U and V values of the block are stored.

When converting back to RGB, the U and V components can be scaled back up, usually with a bilinear filter or such. This can easily be implemented in hardware, so that the pixel data can be stored in the more compact YUV layout, reducing the required memory footprint and bandwidth when decoding video frames. With RGB32, you need 32 bits per pixel. With a YUV 4:2:0 format, for 4 pixels you need to store a total of 6 samples, so 6*8 = 48 bits. That is effectively 48/4 = 12 bits per pixel, so only 37.5% of RGB32. That matters.

When you want to get access to the frame data yourself, you generally have to tell the decoder which format to decode to. This is another pitfall where things may fall apart, performance-wise. That is, a lot of hardware-accelerated decoders will decode into a YUV-layout. If you specify that you want to decode the frame into an RGB32 format, this may cause the decoder to choose a decoding path that is partially or even entirely run on the CPU, and as such will perform considerably worse.

In practice, the most common format that accelerated decoders will decode to is NV12. For an overview of NV12 and various other pixel formats, see this MSDN page. In short, NV12 is a format that stores YUV data in a single buffer, with the Y component first, and then the U and V components packed together:

(Figure: NV12 memory layout, as shown on the MSDN page linked above.)
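As a quick sketch of that layout, assuming the simple case where the stride equals the width (real decoders usually give you a larger pitch that you must respect):

```cpp
#include <cstdint>
#include <cstddef>

// NV12: a full-resolution Y plane, immediately followed by a half-resolution
// plane of interleaved U/V (Cb/Cr) pairs. Width and height assumed even, and
// stride assumed equal to width for simplicity.
struct NV12View {
    const uint8_t* y;    // width x height luma samples
    const uint8_t* uv;   // (width/2) x (height/2) interleaved U,V pairs
};

inline NV12View nv12_view(const uint8_t* buffer, std::size_t width, std::size_t height)
{
    NV12View v;
    v.y  = buffer;
    v.uv = buffer + width * height;   // UV plane starts right after the Y plane
    return v;
}

// Fetch the YUV triplet for pixel (x, y): Y per pixel, U/V shared per 2x2 block.
inline void nv12_sample(const NV12View& v, std::size_t width,
                        std::size_t x, std::size_t y,
                        uint8_t& Y, uint8_t& U, uint8_t& V)
{
    Y = v.y[y * width + x];
    const uint8_t* uv = v.uv + (y / 2) * width + (x / 2) * 2;
    U = uv[0];
    V = uv[1];
}
```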

This format is supported in hardware on a wide range of devices, and is your best bet for efficient accelerated GPU decoding.

What’s more: this format is also supported as a texture format, so for example with Direct3D11, you can use NV12 textures directly inside a shader. The translation from YUV to RGB is not done automatically for you though, but can be done inside the shader.

The format is a bit quirky. As it is a single buffer that contains two sets of data, at different resolutions, Direct3D11 solves this by allowing you to create two shader views on the texture. For the Y component, you create an ID3D11ShaderResourceView with the DXGI_FORMAT_R8_UNORM format. For the U and V components, you create an ID3D11ShaderResourceView with the DXGI_FORMAT_R8G8_UNORM format. You can then bind these views as two separate textures to the pipeline, and read the Y component from the R component of the R8_UNORM view, and the U and V components from the R and G components of the R8G8_UNORM view respectively. From there you can do the usual conversion to RGB.
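In Direct3D11 terms, that looks roughly like this (error handling mostly omitted; nv12Texture is assumed to be an NV12 texture created with D3D11_BIND_SHADER_RESOURCE):

```cpp
#include <d3d11.h>

// Create the two shader views on one NV12 texture: R8_UNORM exposes the Y
// plane, R8G8_UNORM exposes the interleaved U/V plane at half resolution.
HRESULT CreateNV12Views(ID3D11Device* device,
                        ID3D11Texture2D* nv12Texture,
                        ID3D11ShaderResourceView** lumaView,
                        ID3D11ShaderResourceView** chromaView)
{
    D3D11_SHADER_RESOURCE_VIEW_DESC desc = {};
    desc.ViewDimension       = D3D11_SRV_DIMENSION_TEXTURE2D;
    desc.Texture2D.MipLevels = 1;

    desc.Format = DXGI_FORMAT_R8_UNORM;          // Y plane
    HRESULT hr = device->CreateShaderResourceView(nv12Texture, &desc, lumaView);
    if (FAILED(hr)) return hr;

    desc.Format = DXGI_FORMAT_R8G8_UNORM;        // interleaved U/V plane
    return device->CreateShaderResourceView(nv12Texture, &desc, chromaView);
}
```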

So the ideal way to decode video is to have the hardware routine decode it to NV12, and then let you have access to the NV12 buffer.

Using Media Foundation

With Media Foundation, it is possible to share your Direct3D11 device between your application and the Media Foundation accelerated decoders. This can be done via the IMFDXGIDeviceManager, which you can create with the MFCreateDXGIDeviceManager function. You can then use IMFDXGIDeviceManager::ResetDevice() to connect your D3D11 device to Media Foundation. It is important to set your device to multithread-protected via the ID3D10Multithread interface first.

This IMFDXGIDeviceManager can then be connected, for example, to your IMFSourceReader by setting its MF_SOURCE_READER_D3D_MANAGER attribute. As a result, any GPU acceleration done through D3D11 will now be done with your device, so the resources created will belong to your device and can be accessed directly.
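Put together, the setup looks roughly like this (error handling abbreviated, and assuming MFStartup() and D3D11 device creation have already been done; link against mfplat.lib, mfreadwrite.lib and mfuuid.lib):

```cpp
#include <d3d11.h>
#include <d3d10.h>        // for ID3D10Multithread
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>

// Share an existing D3D11 device with Media Foundation, then hand the device
// manager to a source reader so its decoder uses that device for acceleration.
HRESULT CreateAcceleratedReader(ID3D11Device* device,
                                const wchar_t* url,
                                IMFDXGIDeviceManager** deviceManager,
                                IMFSourceReader** reader)
{
    // The device must be multithread-protected before sharing it.
    ID3D10Multithread* multithread = nullptr;
    HRESULT hr = device->QueryInterface(IID_PPV_ARGS(&multithread));
    if (FAILED(hr)) return hr;
    multithread->SetMultithreadProtected(TRUE);
    multithread->Release();

    UINT resetToken = 0;
    hr = MFCreateDXGIDeviceManager(&resetToken, deviceManager);
    if (FAILED(hr)) return hr;
    hr = (*deviceManager)->ResetDevice(device, resetToken);
    if (FAILED(hr)) return hr;

    IMFAttributes* attributes = nullptr;
    hr = MFCreateAttributes(&attributes, 1);
    if (FAILED(hr)) return hr;
    attributes->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, *deviceManager);

    hr = MFCreateSourceReaderFromURL(url, attributes, reader);
    attributes->Release();
    return hr;
}
```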

A quick-and-dirty way to get to the underlying DXGI buffers is to query the IMFMediaBuffer of a video sample for its IMFDXGIBuffer interface. This interface allows you to get to the underlying ID3D11Texture2D via its GetResource method. And there you are. You have access to the actual D3D11 texture that was used by the GPU-accelerated decoder.

You probably still need to make a copy of this texture to your own texture with the same format, because you need to have a texture that has the D3D11_BIND_SHADER_RESOURCE flag set, if you want to use it in a shader, and the decoder usually does not set that flag. But since it is all done on the GPU, this is reasonably efficient.
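A sketch of that path, continuing from the reader set up above (again without proper error handling; shaderTexture is assumed to be your own NV12 texture created with D3D11_BIND_SHADER_RESOURCE):

```cpp
#include <d3d11.h>
#include <mfapi.h>
#include <mfreadwrite.h>

// Read one decoded video sample and copy the underlying decoder texture into
// our own shader-visible NV12 texture, entirely on the GPU.
HRESULT CopyNextFrame(IMFSourceReader* reader,
                      ID3D11DeviceContext* context,
                      ID3D11Texture2D* shaderTexture)
{
    DWORD streamIndex = 0, flags = 0;
    LONGLONG timestamp = 0;              // sample time, in 100-ns units
    IMFSample* sample = nullptr;
    HRESULT hr = reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                                    0, &streamIndex, &flags, &timestamp, &sample);
    if (FAILED(hr) || !sample) return hr;

    IMFMediaBuffer* buffer = nullptr;
    sample->GetBufferByIndex(0, &buffer);

    IMFDXGIBuffer* dxgiBuffer = nullptr;
    buffer->QueryInterface(IID_PPV_ARGS(&dxgiBuffer));

    ID3D11Texture2D* decodedTexture = nullptr;
    UINT subresource = 0;
    dxgiBuffer->GetResource(IID_PPV_ARGS(&decodedTexture));
    dxgiBuffer->GetSubresourceIndex(&subresource);   // decoders use texture arrays

    // GPU-side copy into a texture we created with D3D11_BIND_SHADER_RESOURCE.
    context->CopySubresourceRegion(shaderTexture, 0, 0, 0, 0,
                                   decodedTexture, subresource, nullptr);

    decodedTexture->Release();
    dxgiBuffer->Release();
    buffer->Release();
    sample->Release();
    return S_OK;
}
```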

Timing on external clock

Another non-standard use of video decoding frameworks is to take matters into your own hands, and output the audio and video frames synchronized to an external clock. By default, the decoder framework will just output the frames in realtime, based on whatever clock source it uses internally. But if you want to output to a device with an external clock, you need to sync the frames yourself.

With DirectShow and Media Foundation, this is not that difficult: every audio and video sample that is decoded is provided with a timestamp, expressed in 100-nanosecond units. So you can simply buffer a number of samples, and send them out based on their timestamp, relative to the reference clock of your choice.
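A minimal sketch of that idea is shown below; get_external_clock_100ns() is a placeholder, since where that clock comes from depends entirely on your output device or API:

```cpp
#include <deque>
#include <cstdint>

// Placeholder: the current time of the external output clock in 100-ns units.
// This is an assumption; implement it against whatever your device provides.
int64_t get_external_clock_100ns();

struct TimedFrame {
    int64_t presentationTime;   // sample timestamp in 100-ns units
    // ... decoded frame data / texture handle ...
};

// Keep a small buffer of decoded frames and release each one when the
// external clock reaches its timestamp (relative to a common start point).
void present_due_frames(std::deque<TimedFrame>& queue,
                        int64_t clockOffset,
                        void (*present)(const TimedFrame&))
{
    const int64_t now = get_external_clock_100ns() + clockOffset;
    while (!queue.empty() && queue.front().presentationTime <= now) {
        present(queue.front());
        queue.pop_front();
    }
}
```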

For some reason, LibVLC only provides timestamps with the audio samples, not with the video samples it decodes. So that makes it difficult to use LibVLC in this way. Initially it did not have an easy way to decode frames on-demand at all, but recently they added a libvlc_media_player_next_frame() function to skip to the next frame manually. Then it is up to you to figure out what the frame time should be exactly.

One issue here, though, is that if you let the library decode the video in realtime, it will automatically compensate for any performance problems, by applying frame skipping when required. If you are decoding manually, at your own speed, then you will need to handle the situation where you cannot keep your decode buffer full because the decoder cannot keep up. You may need to manually skip the playback position in the decoder ahead to keep in sync with the video output speed.

All in all, things aren’t always that straightforward when you don’t just let the video library decode the video by itself, and let it time and display the output on its own.


Video codecs and 4k

Recently I was optimizing some code to reliably play 4k video content at 60 fps through a 3D pipeline on low-end hardware. And it gave me a deja-vu of earlier situations with video. It seems that there is this endless cycle of new video formats turning up, followed by a period required for the hardware to catch up. It also reminded me that I had wanted to write a blog about some issues you run into when decoding video. So I think this is finally the time to dive into video codecs.

The endless cycle

At its basis, video playback on computer hardware is a very resource-intensive process. Worst-case you need to update all pixels in memory for every frame. So the performance depends on the number of pixels per frame (resolution), the colour-depth (bits per pixel), and the frame rate (number of frames per second).

If we want to get a bit retro here, convincing video playback on a consumer PC more or less started when hardware cards such as the Video Blaster arrived on the market. This was in 1992, before local bus was a widespread thing. The ISA bus was too slow for anything other than playing video in a really small window in the low 320×200 resolution at 256 colours.

The Creative Video Blaster circumvented this issue by having its own video output on board, and having video encoding/decoding hardware. It uses a Chips & Technologies F82C9001 chip, which supports YUV buffers in various compressed formats (2:1:1, 4:1:1 and 4:2:2), and it can also perform basic scaling. This meant that the CPU could send compressed video over the ISA bus, and it could be decoded on-the-fly on the Video Blaster board, at a relatively high resolution and colour depth. It’s difficult to find exact information on its capabilities, but it appears to be capable of PAL and NTSC resolution, and supports ‘over 2 million colours’, which would indicate 21-bit truecolour, so 7 bits per component. So I think we can say that it is more or less “broadcast quality” for the standards of the day: still in the era of Standard Definition (SD) PAL and NTSC.

The original webpage for the first Video Blaster (model CT6000) is archived here. It apparently requires an “IBM PC-AT and higher compatibles”, but the text also says it is for 386 PCs. So I suppose in theory it will work in a 286, but the software may require a 386 for best performance/compatibility.

Anyway, it should be clear that a 386 with an ISA VGA card could not play video anywhere near that well. You really needed that special hardware. To give an indication… a few years later, CD-ROM games became commonplace, and Full Motion Video (FMV) sequences became common. For example, see the game Need For Speed from 1994, which requires a fast 486 with localbus VGA:

The video quality is clearly not quite broadcast-level. The resolution is lower (320×240), and it also uses only 256 colours. The video runs at 15 fps. This was the best compromise at the time for the CPU and VGA capabilities, without any special hardware such as the Video Blaster.

From there on it was an endless cycle of the CPU and video cards slowly catching up to the current standard, and then new standards, with higher resolutions, more colours, better framerates and better compression would arrive, which again required special hardware to play back the video in realtime.

We moved from SD to HD, from interlaced video to progressive scan, from MPEG-1 to MPEG-2, MPEG-4 and beyond, and now we are at 4k resolution.

I would say that 4k at 60 fps is currently the ‘gold standard’: that is the highest commonly available content at the moment, and it currently requires either a reasonably high-end CPU and video card to play it back without any framedrops, or it requires custom hardware in the form of a Smart TV with built-in SoC and decoder, or a set-top media box with a similar SoC that is optimized for decoding 4k content.

Broadcast vs streaming

I have mentioned ‘broadcast quality’ briefly. I guess it is interesting to point out that in recent years, streaming has overtaken broadcasting. You see, broadcast TV quality, especially in the analog era, was always far superior to digital video, especially on regular consumer PCs. But when the switch was made to HD quality broadcasting, an analog solution would have required too much bandwidth, and very high-end and thus expensive receiver circuitry. So for HD quality, broadcasting switched to digital signals (somewhere in the late 90s to early 2000s, depending on the area). They started using MPEG-encoded data, very similar to what you’d find on a DVD, and would broadcast these compressed packets as digital data over the air, via satellite or via cable. The data was essentially packed digitally into existing analog video channels. The end user would require a decoder to decompress the signal into an actual stream of video frames.

At this point, there was little technical difference between playing video on your PC, and watching TV. The main difference was the delivery method: the broadcast solution could offer a lot of bandwidth to your doorstep, so the quality could be very high. Streams of 8 to 12 mbit of data for a single channel were no exception.

At the time, streaming video over the internet was possible, but most consumer internet connections were not remotely capable of these speeds, so video over the internet tended to be much lower quality than regular television. Also, the internet does not offer an actual ‘broadcasting’ method: video is delivered point-to-point. So if 1000 people are watching a 1 mbit video stream at the same time, the video provider will have to deliver 1000 mbit of data. This made high-quality video over the internet very costly.

But that problem was eventually solved, as on the one hand, internet bandwidth kept increasing and cost kept coming down, and on the other hand, newer video codecs would offer better compression, so less bandwidth was required for the same video quality.

This means that a few years ago, we reached the changeover-point where most broadcasters were still broadcasting at HD quality in 720p or 1080i quality, while streaming services such as YouTube or Netflix would offer 1080p or better quality. Today, various streaming services offer 4k UHD quality, while broadcasting is still mostly stuck at HD resolutions. So if you want that ‘gold standard’ of 4k 60 fps video, streaming services is where you’ll find it, rather than broadcasting services.

Interlacing

I really don’t want to spend too much time on the concept of interlacing, but I suppose I’ll have to at least mention it shortly.

As I already mentioned with digital HD broadcasting, bandwidth is a thing, also in the analog realm. The problem with early video is flicker. With film technology, the motion is recorded at 24 frames per second. But if it is displayed at 24 frames per second, the eye will see flickering when the frames are switched. So instead each frame is shown twice, effectively doubling the flicker frequency to 48 Hz, which is less obvious to the naked eye.

The CRT technology used for analog TV has a similar problem. You will want to refresh the screen at about 48 Hz to avoid flicker. So that would require sending an entire frame 48 times per second. If you want to have a reasonable resolution per frame, you will want about 400-500 scanlines in a frame. But the combination of 400-500 scanlines and 48 Hz would require a lot of bandwidth, and would require expensive receivers.

So instead, a trick was applied: each frame was split up in two ‘fields’. A field with the even scanlines, and a field with the odd scanlines. These could then be transmitted at the required refresh speed, which was 50 Hz for PAL and 60 Hz for NTSC. Every field would only require 200-250 scanlines, halving the required bandwidth.

Because the CRT has some afterglow after the ray has scanned a given area, the even field was still visible somewhat as the odd field was drawn. So the two fields would blend somewhat together, giving a visual quality nearly as good as a full 50/60Hz image at 400-500 lines.

Why is this relevant? Well, for a long time, broadcasting standards included interlacing. And as digital video solutions had to be compatible with analog equipment at the signal level, many early video codecs also supported interlaced modes. DVD for example is also an interlaced format, supporting either 480i for NTSC or 576i for PAL.

In fact, for HD video, the two common formats are 720p and 1080i. The interlacing works as a simple form of data compression, which means that a 1920×1080 interlaced video stream can be transmitted with about the same bandwidth as a 1280×720 progressive one. 1080i became the most common format for HD broadcasts.

This did cause somewhat of a problem with PCs however. Aside from the early days of super VGA, PCs rarely made use of interlaced modes. And once display technology moved from CRT to LCD displays, interlacing actually became problematic. An LCD screen does not have the same ‘afterglow’ that a CRT has, so there’s no natural ‘deinterlacing’ effect that blends the screens together. You specifically need to perform some kind of digital filtering to deinterlace an image with acceptable quality.

While TVs also adopted LCD technology around the time of HD quality, they would always have a deinterlacer built-in, as they would commonly need to display interlaced content. For PC monitors, this was rare, so PC monitors generally did not have a deinterlacer on board. If you wanted to play back interlaced video on a PC, such as DVD video, the deinterlacing would have to be done in software, and a deinterlaced, progressive frame sent to the monitor.

This also means that streaming video platforms do not support interlacing, and when YouTube adopted HD video some years ago, they would only offer 720p and 1080p formats. With 1080p they effectively surpassed the common broadcast quality of HD, which was only 1080i.

Luckily we can finally put all this behind us now. There are no standardized interlaced broadcast formats for 4k, only progressive ones. Interlacing will soon be a thing of the past, together with all the headaches of deinterlacing the video properly.

Home Video

So far, I have only mentioned broadcast and streaming digital video. For the sake of completeness I should also mention home video. Originally, in the late 70s, there was the videocassette recorder (VCR) that offered analog recording and playback for the consumer at home. This became a popular way of watching movies at home.

One of the earliest applications of digital video for consumers was an alternative for the VCR. Philips developed the CD-i, which could be fitted with a first-generation MPEG decoder module, allowing it to play CD-i digital video. This was a predecessor of the Video CD standard, which used the same MPEG standard, but was not finalized yet. CD-i machines could play both CD-i digital video and Video CD, but other Video CD players could not play the CD-i format.

This early MPEG format aimed to fit a full movie of about 80 minutes, at a quality roughly equivalent to the common VHS format at the time, on a standard CD with about 700 MB of storage. The analog VHS format did not deliver the full broadcast quality of PAL or NTSC: you had about 250 scanlines per frame, and the chrominance resolution was also rather limited, so effectively you had about 330 ‘pixels’ per scanline.

VideoCD aimed at a similar resolution, and the standard arrived at 352×240 for NTSC and 352×288 for PAL. It did not support any kind of interlacing, so it output progressive frames at 29.97 Hz for NTSC, and 25 Hz for PAL. So in terms of pixel resolution, it was roughly the equivalent of VHS. The framerate was only half that though, but still good enough for smooth motion (most movies were shot at 24 fps anyway).

VideoCD was an interesting pioneer technically, but it never reached the mainstream. Its successor, the DVD-Video, did however become the dominant home video format for many years. By using a disc with a much larger capacity, namely 4.7 GB, and an updated MPEG-2 video codec, the quality could now be bumped up to full broadcast quality PAL or NTSC. That is the full 720×576 resolution for PAL, at 50 Hz interlaced, or 720×480 resolution for NTSC at 60 Hz interlaced.

With the move from SD to HD, another new standard was required, as DVD was limited to SD. The Blu-ray standard won out eventually, which supports a wide range of resolutions and various codecs (which we will get into next), offering 720p and 1080i broadcast quality video playback at home. Later iterations of the standard would also support 4k. But Blu-ray was a bit late to the party. It never found the same popularity that VHS or DVD had, as people were moving towards streaming video services over the internet.

Untangling the confusion of video codec naming

In the early days of the MPEG standard (developed by the Moving Picture Experts Group), things were fairly straightforward. The MPEG-1 standard had a single video codec. The MPEG-2 standard had a single video codec. But with MPEG-4, things got more complicated. In more than one way. Firstly, the MPEG-4 standard introduced a container format that allowed you to use various codecs. This also meant that the MPEG-4 standard evolved over time, and new codecs were added. And secondly, there wasn’t a clear naming scheme for the codecs, so multiple names were used for the same codec, adding to the confusion.

A simple table of the various MPEG codecs should make things more clear:

MPEG standard            Internal codename   Descriptive name
MPEG-1 (1993)            H.261               –
MPEG-2 (1995)            H.262               –
MPEG-4 Part 2 (1999)     H.263               –
MPEG-4 Part 10 (2004)    H.264               Advanced Video Coding (AVC)
MPEG-H Part 2 (2013)     H.265               High Efficiency Video Coding (HEVC)
MPEG-I Part 3 (2020)     H.266               Versatile Video Coding (VVC)

MPEG codecs

What is missing? MPEG-3 was meant as a standard for HDTV, but it was never released, as in practice, the updates required were only minor, and could be rolled into an update of the MPEG-2 standard.

Listing H.263 for MPEG-4 Part 2 is also not entirely accurate. H.263 was released in 1996, and is somewhat of a predecessor to MPEG-4, aimed mainly at low-bandwidth streaming. MPEG-4 decoders are backwards compatible with the original H.263 standard, but MPEG-4 Part 2 itself is more advanced than the H.263 from 1996.

With MPEG-1 and MPEG-2, things were straightforward: there was one standard, one video codec, and one name. So nobody had to refer to the internal codename of the codec.

With MPEG-4, it started out like that as well. People could just refer to it as MPEG-4. But in 2004, another codec was added to the standard: the H.264/AVC codec. So now MPEG-4 could be either the legacy codec, or the new codec. The names of the standard were too confusing… “MPEG-4 Part 2” vs “MPEG-4 Part 10”. So instead people referred to the codec name. Some would call it by its codename of H.264, others would call it by the acronym of its descriptive name: AVC. So MPEG-4, H.264 and AVC were three terms that could all mean the same thing.

With H.265/HEVC, it was again not clear what the preferred name could be, so both H.265 and HEVC were used. What’s more, people would also still call it MPEG-4, even though strictly speaking it was part of the MPEG-H standard.

MPEG-I/H.266/VVC has not reached the mainstream yet, but I doubt that the naming will get any less complicated. The pattern will probably continue. And the MPEG-5 standard was also introduced in 2020 (with EVC and LCEVC codecs), which may make things even more confusing, once that hits the mainstream.

So if you don’t know that H.264 and AVC are equivalent, or H.265 and HEVC for that matter, it’s very confusing when one party uses one name to refer to the codec, and another party uses the other. Once you figure that out, it all clicks.

4k codecs

A special kind of confusion I have found is that it is often implied that you require special codecs for 4k video. But even MPEG-1 supports a maximum resolution of 4095×4095, and a maximum bandwidth of 100 Mbit/s. So it is technically possible to encode 4k (3840×2160) content even in MPEG-1, at decent quality. In theory anyway. In practice, MPEG-1 has been out of use for so long that you may run into practical problems. A tool like Handbrake does not include support for MPEG-1 at all. It will let you encode 4k content in MPEG-2 however, which ironically it can store in an MPEG-4 container file. VLC actually lets you encode to MPEG-1 at 3840×2160 and 60 fps. You may find that not all video players will actually be able to play back such files, but there it is.

The confusion is probably because newer codecs require less bandwidth for the same level of quality. And if you move from HD resolution to 4k, you have 4 times as many pixels per frame, so roughly 4 times as much data to encode, resulting in roughly 4 times the bandwidth requirement for the same quality. So in practice, streaming video in 4k will generally be done with one of the latest codecs, in order to get the best balance between bandwidth usage and quality, for an optimal experience. Likewise, Blu-ray discs only have limited storage (50 GB being the most common), and were originally developed for HD. In order to fit 4k content on there, better compression is required.

But if you encode your own 4k content, you can choose any of the MPEG codecs. Depending on the hardware you want to target, it may pay off to not choose the latest codec, but the one that is best accelerated by your hardware. On some hardware, AVC may run better than HEVC.

Speaking of codecs, I have only mentioned MPEG so far, because it is the most common family of codecs. But there are various alternatives which also support 4k with acceptable performance on the right hardware. While MPEG is a widely supported standard, and the technology is quite mature and refined, there is at least one non-technical reason why other codecs may sometimes be preferred: MPEG is not free. A license is required for using MPEG. The license fee is usually paid by the manufacturer of a device, but with desktop computers, for example, this is not always the case. The licensing model also makes MPEG incompatible with certain open source licenses.

One common alternative suitable for 4k video is Google’s VP9 codec, released in 2013. It is similar in capabilities to HEVC. It is open and royalty-free, and it is used by YouTube, among others. As such it is widely supported by browsers and devices.

Another alternative is the Alliance for Open Media’s AV1 (AOMedia Video 1), released in 2018. It is also royalty-free, and its license is compatible with open source. This Alliance includes many large industry players, such as Apple, ARM, Intel, Samsung, NVIDIA, Huawei, Microsoft, Google and Netflix, so widespread support is more or less guaranteed. AV1 is a fairly new codec, which is more advanced than HEVC, so it delivers more compression at the same quality. The downside is that because it is relatively new, and the compression is very advanced, it requires a fairly powerful, modern CPU and GPU to play back properly. So it is not that well-suited to older and more low-end devices.

In practice, you will have to experiment a bit with encoding for different codecs, at different resolutions, framerates and bitrates, to see which one is supported best, and under which conditions. I suppose the most important advice you should take away here is that you shouldn’t necessarily use the latest-and-greatest codecs for 4k content. There’s nothing wrong with using AVC, if that gives the best results on your hardware.

Hardware acceleration

One last thing I would like to discuss is decoding video inside a (3D) rendering context. That is, you want to use the decoded video as a texture in your own rendering pipeline. In my experience, most video decoding frameworks can decode video with acceleration effectively, if you pass them a window handle, so they can display inside your application directly, and remain in control. However, if you want to capture the video frames into a graphics texture, there often is no standardized way.

The bruteforce way is to just decode each video frame into system memory, and then copy it into the texture yourself. For 1080p video you can generally get away with this approach. However, for 4k video, each frame is 4 times as large, so copying the data takes 4 times as long. On most systems, the performance impact of this is simply too big, and the video cannot be played in realtime without dropping frames.

For Windows, there is the DirectX Video Acceleration framework (DXVA), which should allow you to use GPU-acceleration with both DirectShow and MediaFoundation. So far I have only been able to keep the frames in GPU memory with MediaFoundation. I can get access to the underlying DirectX 11 buffer, and then copy its contents to my texture (which supports my desired shader views) via the GPU. It’s not perfect, but it is close enough: 4k at 60 fps is doable in practice. It seems to be an unusual use-case, so I have not seen a whole lot in the way of documentation and example code for the exact things I would like to do.

With VLC, there should be an interface to access the underlying GPU buffers in the upcoming 4.0 release. I am eagerly awaiting that release, and I will surely give this a try. MediaFoundation gives excellent performance with my current code, but access to codecs is rather limited, and it also does not support network streams very well. If VLC offers a way to keep the frames on the GPU, and I can get 4k at 60 fps working that way, it will be the best of both worlds.

Posted in Software development | 10 Comments

Retro programming, what is it?

As you may have seen, in the comment section of my previous two articles, a somewhat confused individual has left a number of rather lengthy comments. I had already encountered this individual in the comments section of some YouTube videos (also with an Amiga/oldskool/retro theme), and had already more or less given up on having a serious conversation with this person. It is apparent that this person views things from an entirely different perspective, and is not capable of being open to other perspectives, making any kind of conversation impossible, because you simply hit the brick wall of their preconceptions at every turn.

Having said that, it did trigger me to reflect on my own perspective, and as such it may be interesting to formalize what retro/oldskool programming is.

The hardware

Perhaps it’s good to first discuss the wider concept of ‘retro computing’. A dictionary definition of the term ‘retro’ is:

imitative of a style or fashion from the recent past.

This can be interpreted in multiple ways. If we are talking about the computers themselves, the hardware, then there is a class of ‘retro computing’ that imitates machines from the 70s and 80s, that ‘8-bit’ feeling. Examples are the PICO-8 Fantasy Console or the Colour Maximite. These machines did not actually exist back then, but try to capture the style and fashion of machines from that era.

A related class is that of for example the THEC64 Mini and THEA500 Mini. While these are also not exact copies of hardware from the era, they are actually made to be fully compatible with the software from the actual machines. They are basically emulators, in hardware form. Speaking of emulators, of course most machines from the 70s and 80s have been emulated in software, and I already shared my thoughts on this earlier.

Also related to that are peripherals made for older machines, such as the DreamBlaster S2P. These are not necessarily built with components that were available in the 70s and 80s, but they can be used with computers from that era.

In terms of hardware, my interests are focused on actual machines from the 70s and 80s. So actual ‘classic’ hardware, not ‘retro’ hardware; the PICO-8 and Colour Maximite fall outside the scope. I mostly focus on IBM PCs and compatibles, Commodore 64 and Amiga, as I grew up with these machines, and have years of hands-on experience with them.

My interests in emulation are in function of this: I may sometimes use emulation for convenience when developing, reverse-engineering and such. And I may sometimes modify emulators to fix bugs or add new features. I may also sometimes use some ‘retro’ peripherals that make the job easier, or are more readily available than actual ‘classic’ peripherals. Such as the DreamBlaster S2P, or an LCD monitor for example.

The software

My blog is mainly about developing software, and the only software you can develop is new software, so in that sense it is always ‘retro programming’: new software, but targeting machines from a specific bygone era.

There are also people who discuss actual software from the past, more from a user perspective. That can be interesting in and of itself, but that is not for me. I do occasionally discuss software from the past, and sometimes reverse-engineer it a bit, to study its internals and explain what it is doing. But usually the goal of this is to obtain knowledge that can be used for writing new software for that class of hardware.

Anyway, I believe I already said it before, when I started my ‘keeping it real‘ series: I went back to programming old computers because they pose very different programming challenges to modern machines. It’s interesting to think about programming differently from your daily work. Also, it’s interesting that these machines are ‘fixed targets’. A Commodore 64 is always a Commodore 64. It will never be faster, have more capabilities, or anything. It is what it is, and everyone knows what it can and cannot do. So it is interesting to take these fixed limitations and work within them, trying to push the machine as far as it can go.

Why the comments are barking up the wrong tree

Getting back to the comments on the previous articles, this person kept arguing about the capabilities of certain hardware, or lack thereof, and made all sorts of comparisons with other hardware. Within the perspective explained above, it should be obvious why this is irrelevant.

Since I consider the machine I develop for a ‘fixed target’, it is not relevant how good or bad it is. It’s the playground I chose, so these are the rules of the game that I have to work with. And the game is to try and push the machine as far as possible within these rules.

The machines I pick also tend to be fairly common off-the-shelf configurations. Machines exactly as most people remember them. Machines as people bought and used them, and as software from the era targeted them.

Yes, there may have been esoteric hardware upgrades and such available, which may have made the machines better. But that is irrelevant, as I don’t have these, and do not intend to use them. I prefer the ‘stock’ machines as much as possible.

So I am not all that interested in endless arguments about what hardware was better. I am much more interested in what you can make certain hardware do, no matter how good or bad it may be.

Related to that, as I said, I like to use machines in configurations as most people remember them. This person kept referencing very high-end and expensive hardware, and then made comparisons to the Amiga, which was in an entirely different price class. I mean, sure, you could assume a limitless budget, and create some kind of theoretical machine on paper, which at a given point in history combined the most advanced and powerful hardware available on the market. But that wouldn’t be a realistic target for what I do: retro programming.

I like to write code that actually works on real machines that most people either still have from when they were young, or which they can buy second-hand easily, because there’s a large supply of these machines at reasonable prices. And in many cases, the code will also work in emulators. If not, then the emulators need to be patched. I will not write my code around shortcomings of emulators. Real hardware will always be the litmus test.

Posted in Oldskool/retro programming | Leave a comment

An Amiga can’t do Wolfenstein 3D!

Like many people who grew up in the 80s and early 90s, gaming was a 2D affair for me. Scrolling, sprites and smooth animation were key elements of most action games. The Commodore 64 was an excellent gaming machine in the early part of the 80s, because of its hardware capabilities coupled with a low price tag. In the latter part of the 80s, we moved to 16-bit machines, and the Amiga was the new gaming platform of choice, again offering silky smooth scrolling and animation, but because of advances in technology, we now got higher resolutions, more colours, better sound and all that.

But then the stars aligned, and Wolfenstein 3D was released on PC. The stars of CPUs becoming ever faster, the PC getting more powerful video and audio hardware, and 3D gaming maturing. A first glimpse of what was to come was Catacomb 3-D by id Software, released in November 1991:

This game made use of the power of the 16-bit 286 processor, which was starting to become mainstream with PC owners, and the EGA video standard. The PC was not very good at action games, because it had no hardware sprites, and scrolling was very limited. But id Software saw that EGA’s quirky bitplane layout and ALU meant that it was relatively good at certain things. We’ve already seen that it is fast at filling large areas with a single colour, for polygon rendering for example. But it is also good at rendering vertical columns of pixels.

And that is the key to having fast texture-mapped 3D walls. By taking a simple 2D-map with walls, and performing raycasting from the player’s position in the viewing direction, you can make a simple perspective projection of the walls. You use raycasting to determine the distance from the player to the nearest visible wall, and then render a texture-mapped version of that wall by rendering scaled vertical columns based on the distance, with perspective projection.
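
To make that a bit more concrete, here is a minimal sketch of the idea in C# (purely for illustration: the actual games were hand-optimized x86 assembly, and all names here are my own). One ray is cast per screen column, a DDA walk through the grid finds the nearest wall, and the height of the column to draw follows from the perpendicular distance.

using System;

// Minimal raycasting sketch (illustration only, not the original engine code, which was
// hand-optimized x86 assembly). Non-zero cells in 'map' are walls, and the outer border
// of the map is assumed to be solid, so the grid walk always terminates.
static class Raycaster
{
    public static void CastColumns(int[,] map, int screenWidth, int screenHeight,
                                   double posX, double posY,     // player position
                                   double dirX, double dirY,     // viewing direction
                                   double planeX, double planeY) // camera plane (field of view)
    {
        for (int x = 0; x < screenWidth; x++)
        {
            // Ray direction for this screen column, sweeping across the camera plane
            double cameraX = 2.0 * x / screenWidth - 1.0;
            double rayDirX = dirX + planeX * cameraX;
            double rayDirY = dirY + planeY * cameraX;

            int mapX = (int)posX, mapY = (int)posY;

            // Distance the ray travels to cross one grid cell, per axis
            double deltaDistX = rayDirX == 0 ? double.MaxValue : Math.Abs(1.0 / rayDirX);
            double deltaDistY = rayDirY == 0 ? double.MaxValue : Math.Abs(1.0 / rayDirY);

            int stepX = rayDirX < 0 ? -1 : 1;
            int stepY = rayDirY < 0 ? -1 : 1;
            double sideDistX = rayDirX < 0 ? (posX - mapX) * deltaDistX : (mapX + 1.0 - posX) * deltaDistX;
            double sideDistY = rayDirY < 0 ? (posY - mapY) * deltaDistY : (mapY + 1.0 - posY) * deltaDistY;

            // DDA: walk the grid cell by cell until the ray hits a wall
            bool crossedVerticalLine;
            do
            {
                if (sideDistX < sideDistY) { sideDistX += deltaDistX; mapX += stepX; crossedVerticalLine = true; }
                else                       { sideDistY += deltaDistY; mapY += stepY; crossedVerticalLine = false; }
            } while (map[mapX, mapY] == 0);

            // Perpendicular distance to the wall avoids the fish-eye effect
            double perpDist = crossedVerticalLine ? sideDistX - deltaDistX : sideDistY - deltaDistY;
            if (perpDist < 1e-6) perpDist = 1e-6;

            // The further away the wall, the shorter the vertical strip of pixels
            int columnHeight = (int)(screenHeight / perpDist);

            // A real renderer would now draw a scaled vertical column of the wall texture,
            // columnHeight pixels tall, centred in screen column x.
        }
    }
}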

Catacomb 3-D was a good first step, but the game still felt rather primitive, with the limited EGA palette, and the gameplay not quite having the right speed and feel yet.

But only a few months later, in May 1992, id Software released the successor, where everything really came together. The developers figured out that the EGA trick of rendering scaled vertical columns works nearly the same in the newly discovered ‘mode X‘ of VGA. The big advantage was that you could now use the full 256 colours, which made for more vibrant textures and level design. The game itself was also refined, and now had just the right look-and-feel to become a true milestone in gaming. Things have never been the same since.

Here is an excellent overview of how Wolfenstein 3D evolved:

But… that change did not bode well for the Amiga. Suddenly, everything the Amiga was good at, was no longer relevant for these new 3D action games. What’s worse… these games relied on very specific properties of the EGA and VGA hardware. They did not translate well to the Amiga’s hardware at all.

And to add insult to injury, id Software followed up with DOOM the next year, which again took advantage of ever faster and more powerful PC hardware, and refined the 3D first-person shooter even further.

A few brave attempts were made on the Amiga to try and make Wolfenstein 3D-like or DOOM-like games for the platform, but sadly they could not hold a candle to the real thing:

As a result, the consensus was that the Amiga could not do 3D first-person shooters because its bitplane-oriented hardware was outdated, and unsuitable.

But all that depends on how you look at it. As you know, demosceners/retrocoders tend to look at these situations as a challenge. Sure, your hardware may not have the ideal featureset for a given rendering method… but you can still make the best of it. The key is to stop thinking in terms of EGA and VGA hardware, and instead think of ways to scale and render vertical columns as fast as possible on the Amiga hardware.

One very nice approach was shown at Revision 2019 by Dekadence. It runs on a standard Amiga 500, and achieves decent framerates while rendering with a decent amount of colours and detail:

Another interesting project is a port of the original Wolfenstein 3D-game, which is optimized for a stock Amiga 1200. It achieves good framerates by rendering at only half the horizontal resolution:

The Amiga 1200 has a 14 MHz CPU. We can compare it to the closest 286 systems, at 12 MHz and 16 MHz, which are just about adequate to run Wolfenstein 3D as well, albeit at a slightly reduced window size for better performance. So this is not a bad attempt at all, on the Amiga.

Another touch I really like is that it takes the original AdLib music from the PC version, and uses an OPL2 emulator to render that music to a set of samples.

Another really nice attempt is this DreadStein3D:

It makes use of the engine for the game Dread, which is currently in development. This game is actually aiming more at DOOM than at Wolfenstein 3D, but it has a very efficient renderer, and rendering Wolfenstein 3D-like levels can be done very well on an Amiga 500, as you can see.

Here is a good impression of what the Dread game actually looks like:

As you can see, it’s not *quite* like DOOM, in the sense that there are no textured floors and ceilings. And it does not support height differences in levels either. But it does offer various other features over Wolfenstein 3D, such as the skybox and the lights and shadows. So it is more of a Wolfenstein 3D++ engine (or a DOOM– engine).

And the performance is very good, even on a stock Amiga 500. So… all these years later, we can now finally prove that the Amiga indeed CAN do Wolfenstein 3D. All it took was to stop thinking in terms of the PC renderer, and making poor conversions of the x86/VGA-optimized routines to Amiga hardware, and instead develop Amiga-optimized routines directly.

If you look closely, you’ll see that they have a ‘distinct’ look because of the way the rendering is performed. Britelite discussed the technical details of what became Cyberwolf in a thread over at the English Amiga Board. It gives you a good idea of how you have to completely re-think the whole renderer and storage of the data, to make it run efficiently on the Amiga. It has always been possible. It’s just that nobody figured out how until recently.

Posted in Oldskool/retro programming | 17 Comments

Do 8-bit DACs result in 8-bit audio quality?

During a “””discussion””” some weeks ago, I found that apparently some people think that any system that uses 8-bit DACs is therefore ‘8-bit quality’. A comparison was made between the Amiga and a Sound Blaster 1.0. Both may use 8-bit DACs, but the way they use them is vastly different. As a result, the quality is also different.

The Sound Blaster 1.0 is as basic as it gets. It has a single 8-bit DAC, which is mono. There is no volume or panning control. Samples are played via DMA. The sample rate is dictated by the DMA transfer that the DSP microcontroller performs, and can be set between 4 kHz and 23 kHz. In theory, the sample rate can be changed from one DMA transfer to another. But when playing continuous data, you cannot reprogram the rate without an audible glitch (as explained earlier).

I would argue that this is the basic definition of 8-bit audio quality. That is, the bit-depth of a digital signal defines how many discrete levels of amplitude the hardware supports. 8-bit results in 2^8 = 256 discrete levels of audio. This defines various parameters of your sound quality, including the dynamic range, the signal-to-noise ratio, and how large the quantization error/noise is.

This is all that the Sound Blaster does: you put in 8-bit samples, and it will send them out through the DAC at the specified sample rate. It does not modify or process the signal in any way, neither in the digital nor in the analog domain. So the signal remains 8-bit, 256 discrete levels.

The Amiga also uses 8-bit mono DACs. However, it has 4 of them, each driven by its own DMA channel. Two DACs are wired to the left output, and two DACs are wired to the right output, for stereo support. Also, each DAC has its own volume control, with 64 levels (6-bit). And this is where things get interesting. Because this volume control is implemented in a way that does not affect the accuracy of the digital signal. Effectively the volume control gives you additional bits of amplitude: they allow you to output a signal at more than 256 discrete levels.

If you only had one DAC per audio channel (left or right), this would be of limited use. Namely, you can play samples softer, while retaining the same level of quality. But you trade it in for the volume, the output level. However, the Amiga has two DACs per channel, each with their own individual volume control. This means that you can play a soft sample on one DAC, while playing a loud sample on the other DAC. And this means you actually can get more than 8-bit quality out of these 8-bit DACs.

Anyone who has ever used a MOD player or tracker on a Sound Blaster or similar 8-bit PC audio device, will know that it doesn’t quite sound like an Amiga. Why not? Because a Sound Blaster can’t do better than 8-bit quality. If you want to play a softer sample, you actually need to scale down the amplitude of the samples themselves, which means you are effectively using less than 8 bits for the sound, and the quality is reduced (more quantization noise, worse signal-to-noise ratio etc).

Likewise, if you want to play two samples at the same time, you need to add these samples together (adding two 8-bit numbers yields a 9-bit result), and then scale the result back down to 8-bit, meaning you lose some precision/quality.
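
A trivial sketch of that, in C# for illustration (not code from any actual player):

// Mixing two unsigned 8-bit samples in software, as a PC MOD player has to do.
// The sum of two 8-bit values is a 9-bit value, so it must be scaled back down
// to 8 bits, throwing away one bit of precision.
static byte MixSamples(byte a, byte b)
{
    int sum = a + b;         // 0..510: needs 9 bits
    return (byte)(sum >> 1); // back to 0..255: one bit of precision is lost
}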

Another difference with the Amiga is that the Amiga can set the replay rate for each of the 4 DACs individually. So you can play samples at different rates/pitches at the same time, without having to process the sample data at all. Whereas, as mentioned above, the Sound Blaster has a playback rate that is effectively fixed by the looping DMA transfer. This means that to play samples at different pitches, the samples have to be resampled relative to the playback rate. This generally also reduces quality. This was especially the case with early MOD players, as your average PC still had a relatively slow CPU, and could only resample with a nearest-neighbour approach. This introduced additional aliasing in the resulting sound. Later MOD players would introduce linear interpolation or even more advanced filtering during resampling, which could mostly eliminate this aliasing.

Some clever coder also figured out that you can exploit the capabilities of the Amiga to play back samples of more than 8-bit quality. Namely, since you have two DACs per channel, and you can set one soft and one loud, and they are then mixed together, you can effectively break up a higher-quality sample into a most-significant and a least-significant portion, and distribute it over the two DACs. This way you can effectively get 8+6 = 14-bit accurate output, so playing a stereo stream of 14-bit quality is possible on an Amiga. The AHI sound system provides standard support for this.
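
A sketch of the idea, in C# for illustration only (real implementations, such as the AHI drivers, do this in optimized 68k code, and may also calibrate for the actual volume response of the hardware):

// Splitting a 16-bit signed sample over two 8-bit Amiga channels. The 'loud' channel
// plays at maximum volume (64), the 'soft' channel at volume 1, so the soft channel is
// attenuated by a factor of 64 and fills in the bits below the loud channel's last bit.
static void SplitTo14Bit(short sample16, out sbyte loud, out sbyte soft)
{
    int s14 = sample16 >> 2;    // reduce to 14-bit, the best this trick can do
    loud = (sbyte)(s14 >> 6);   // top 8 bits, played at maximum volume
    soft = (sbyte)(s14 & 0x3F); // bottom 6 bits (0..63), played at volume 1
}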

14-bit, now that isn’t quite CD-quality, is it? Well… that depends. The next step up from 8-bit audio is generally assumed to be 16-bit. But that is a giant leap, and with 16-bit you are expected to be able to produce a dynamic range (and therefore signal-to-noise ratio) of about 96 dB, as per the CD specification. That requires quite accurate analog components.

Perhaps this is a good moment to give some quick rules-of-thumb when reasoning about digital audio and quality/resolution.

The first is known as the Nyquist-Shannon sampling theorem, which deals with sample rate. It says:

A periodic signal must be sampled at more than twice the highest frequency component of the signal

Which makes sense… If you want to represent a waveform, the most basic representation is its minimum and its maximum value. So that is two samples. So, Nyquist-Shannon basically says that the frequency range of your analog signal is limited to half the sample rate of your digital signal. So if you have 44.1 kHz, the maximum audible frequency you can sample is 44.1/2 = 22.05 kHz. In practice the limit is not quite that hard, and filtering is required to avoid nasty aliasing near the limit. So effectively at 44.1 kHz sampling rate you will get a maximum of about 20 kHz.

The second is the definition of the decibel. The decibel uses a logarithmic scale where every step of ~6.02 dB indicates an amplitude that is twice as large. Combine that with binary numbers, where every bit added doubles the amount of values that can be represented. This leads to the simple quick-and-dirty formula of: N bits ≈ N*6.02 dB of dynamic range. So our 8-bit DACs are capable of about 48 dB of dynamic range (although some sources argue that because audio is signed, you should only take the absolute value, which means you should actually use N-1, and get a value that is ~6 dB lower. Clearly manufacturers tend to use the higher numbers, because it makes their products look better).

Although the CD has always been specced as 16-bit PCM data, ironically enough the first CD players weren’t always 16-bit. Most notably Philips (one of the inventors, the other being Sony) did not have the means to mass-produce 16-bit DACs for consumers yet. They had developed a stable 14-bit DAC, and wanted the CD to be 14-bit. Sony however did have 16-bit DACs and managed to get 16-bit accepted as the CD standard.

So what did Philips do? Well, they made CD players with their 14-bit DACs instead. However, they introduced a trick called ‘oversampling’, where they would run the 14-bit DAC 4 times as fast (so at 176.4 kHz), which allowed them to ‘noise shape’ the signal at a high frequency and then filter it down, to effectively get the full 16-bit quality from their 14-bit DAC. (Ironically enough, some ‘audiophiles’ now try to mod these old Philips CD players and bypass the oversampling, to listen to the 14-bit DAC directly, which they of course claim sounds better, because oversampling would ‘only be used to get better measurements on paper, but actually sounds worse’. The reality is probably that it does sound objectively ‘worse’, because the filters aren’t designed to remove the aliasing you now get once the oversampling and noise-shaping feedback loop is removed. But perhaps that added aliasing and distortion sounds ‘subjectively’ better to them, just as people say of tube amplifiers, or vinyl records.)

In fact, this oversampling trick had an interesting advantage in that it resulted in better linearity. A classic DAC is made using a resistor ladder (as we know from the Covox), and to get its response as linear as possible, you need very tight tolerances to get enough accuracy for a full 16-bit resolution. And the resistance may vary depending on temperature. This meant that building high quality 16-bit DACs was expensive. Also, these DACs generally had to be calibrated in the factory, to fine-tune the linearity. This made it even more expensive.

Another advantage of oversampling is that you now ran the DAC at extremely high frequencies, and the audible frequencies from the original source material are now nowhere near the Nyquist limit of the DAC. Which means that the analog filtering stage after the DAC can be far less steep, resulting in less distortion and noise.

So we quickly saw manufacturers taking this idea to the extreme: taking a 1-bit DAC and using a lot of oversampling (like 64 times, or some even 256 times), running it at extremely high frequencies to still get 16-bit quality from the DAC. The advantage: the DAC itself was just 1-bit, it was guaranteed to be linear. No calibration required. This meant we now had cheap DACs that delivered very acceptable sound quality.

By the time the successor to the CD came out, the Super Audio CD, 1-bit oversampling DACs were now so common, that the designers figured they could ‘cut out the middle man’. A Super Audio CD does not encode Pulse Code Modulation-samples (PCM), like a CD and most other digital formats. Instead, it encodes a 1-bit stream at 2.8224 MHz (64 times as high as 44.1 kHz, so ’64 times oversampling’), in what they call ‘Direct Stream Digital‘ (DSD), an implementation of Pulse Density Modulation (PDM). So now you could feed the digital data stream directly to a 1-bit DAC, without any need for converting or oversampling.

Ironically enough, modern DAC designs would eventually move back to using slightly more than 1-bit, to find the best possible compromise between the analog and the digital domain. So some modern DACs would use 2-bit to 6-bit internally. Which means that you would once again need to process the data on a Super Audio CD before sending it to a DAC which uses a different format, in the name of better quality.

Another interesting example of an audio device that isn’t quite 16-bit is the AdLib Gold card. Although it was released in 1992, in a time when 16-bit sound cards were more or less becoming the standard, it only had a 12-bit DAC. Did it matter? Nah, not really. It was an excellent quality 12-bit design, so you actually did get 12-bit quality. Many sound cards may have been 16-bit on paper, but had really cheap components, so you had tons of noise and distortion, and would get nowhere near that 96 dB dynamic range anyway. Some of them are closer to 12-bit in practice (which is about 72 dB of dynamic range), or actually worse.

See also this excellent article, which explains bit-depth and quality (and how the output isn’t stair-stepped except with really basic DACs… which by now you should understand: since most devices use 1-bit DACs, there’s no way they can produce that kind of stair-stepping anyway).

Posted in Oldskool/retro programming | 4 Comments

Migrating to .NET Core: the future of .NET.

More than 20 years ago, Microsoft introduced their .NET Framework, a reaction to Java and its use of a virtual machine and dynamic (re)compilation of code for applications and services. Unlike Java, where the Java virtual machine was tied to a single programming language, also named Java, Microsoft opened up their .NET virtual machine to a variety of languages. They also introduced a new language of their own: C#, a modern language similar to C++ and Java, which allowed Microsoft to introduce new features easily, because they controlled the standard.

Then, in 2016, Microsoft introduced .NET Core, a sign of things to come (and a sign of confusion, because we used to have only one .NET, and now we had to distinguish between ‘Framework’ and ‘Core’). Where the original .NET Framework was mainly targeted at Windows and Intel x86 or compatible machines, .NET Core was aimed at multiple platforms and architectures, like Java before it. Microsoft also moved to an open source approach.

This .NET Core was not a drop-in replacement, but a rewrite/redesign. It had some similarities to the classic .NET Framework, but was also different in various ways, and would be developed alongside the classic .NET Framework for the time being, as a more efficient, more portable reimplementation.

On April 18th 2019, Microsoft released version 4.8 of the .NET Framework, which would be the last version of the Framework product line. On November 10th 2020, Microsoft released .NET 5. This is where the Framework and Core branches would be joined. Technically .NET 5 is a continuation of the Core branch, but Microsoft now considered it mature enough to replace .NET 4.8 for new applications.

As you may know from my earlier blogs, I always say you should keep an eye on new products and technologies developing, so this would be an excellent cue to start looking at .NET Core seriously. In my case I had already used an early version of .NET Core for a newly developed web-based application sometime in late 2016 to early 2017. I had also done some development for Windows Phone/UWP, which is also done with an early variation of the .NET Core environment, rather than the regular .NET Framework.

My early experiences with .NET Core-based environments were that it was .NET, but different. You could develop with the same C# language, but the environment was different. Some libraries were not available at all, and others were similar to the ones you know from the .NET Framework, but not quite the same, so you might have to use slightly different namespaces, objects, methods or parameters to achieve the same results.

However, with .NET 5, Microsoft claims that it is now ready for prime time, also on the desktop, supporting Windows Forms, WPF and whatnot, with the APIs being nearly entirely overlapping and interchangeable. Combined with that is backward compatibility with existing code, targeting older versions of the .NET Framework. So I figured I would try my hand at converting my existing code.

I was getting on reasonably well, when Microsoft launched .NET 6 in November, together with Visual Studio 2022. This basically makes .NET 5 obsolete. Support for .NET 5 will end in May 2022. .NET 6 on the other hand is an LTS (Long-Term Support) version, so it will probably be supported for at least 5 or 6 years, knowing Microsoft. So, before I could even write this blog on my experiences with .NET 5, I was overtaken by .NET 6. As it turns out, moving from .NET 5 to .NET 6 was as simple as just adjusting the target in the project settings, as .NET 6 just picks up where .NET 5 left off. And that is exactly what I did as well, so we can go straight from .NET 4.8 to .NET 6.

You will need at least Visual Studio 2019 for .NET 5 support, and at least Visual Studio 2022 for .NET 6 support. For the remainder of this blog, I will assume that you are using Visual Studio 2022.

But will it run Cry… I mean .NET?

In terms of support, there are no practical limitations. With .NET 4.7, Microsoft moved the minimum OS support to Windows 7 with SP1, and that is still the same for .NET 6. Likewise, .NET Framework supports both x86 and x64, and .NET 6 does the same. On top of that, .NET 6 offers support for ARM32 and ARM64.

Sure, technically .NET 4 also supports IA64 (although with certain limitations, such as no WPF support), whereas .NET 6 does not, but since Windows XP was the last regular desktop version to be released for Itanium, you could not run the later updates of the framework anyway. If you really wanted, you could get Windows Server 2008 R2 SP1 on your Itanium, as the latest possible OS. Technically that is the minimum for .NET 4.8, but I don’t think it is actually supported: I’ve only ever seen an x86/x64 installer for it. That would make sense, as Microsoft also dropped native support for Itanium after Visual Studio 2010.

So assuming you were targeting a reasonably modern version of Windows with .NET 4.8, either server (Server 2012 or newer) or desktop (Windows 7 SP1 or newer), and targeting either x86 or x64, then your target platforms will run .NET 6 without issue.

Hierarchy of .NET Core wrapping

Probably the first thing you will want to understand about .NET Core is how it handles its backward compatibility. It is possible to mix legacy assemblies with .NET Core assemblies. The .NET 6 environment contains wrapper functionality which can load legacy assemblies and automatically redirect their references to the legacy .NET Framework to the new .NET environment. However, there are strict limitations. There is a strict hierarchy, where .NET Core assemblies can reference legacy assemblies, but not vice versa. So the compatibility only goes one way.

As you probably know, the executable assembly (the .exe file) contains metadata which determines the .NET virtual machine that will be used to load the application. This means that a very trivial conversion to .NET 6 can be done by only converting the project of your solution that generates this executable. This will then mean the application will be run by the .NET 6 environment, and all referenced assemblies will be run via the wrapper for .NET Framework to .NET 6.

In most cases, that will work fine. There are some corner-cases however, where legacy applications may reference .NET Framework objects that do not exist in .NET 6, or use third-party libraries that are not compatible with .NET 6. In that case, you may need to look for alternative libraries. In some cases you may find that there are separate NuGet packages for classic .NET Framework and .NET Core (such as with CefSharp, which has separate CefSharp.*.NETCore packages). Sometimes there are conversions of an old library done by another publisher.

And in the rare case where you cannot find a working alternative, there is a workaround, which we will get into later. But in most cases, you will be fine with the standard .NET 6 environment and NuGet packages. So let’s look at how to convert our projects. Microsoft has put up a Migration Guide that gives a high-level overview, and also provides some crude command-line tools to assist you with converting. But I prefer to dig into the actual differences of project files and things under the hood, so we have a proper understanding, and can detect and solve problems by hand.

Converting projects

The most important change is that project files now use an entirely different XML layout, known as “SDK-style projects”. Projects now use ‘sensible defaults’, and you opt-out of things, rather than opt-in. So your most basic project file can look as simple as this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Library</OutputType>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>

</Project>

So you merely need to tell Visual Studio what kind of project it is (eg “Library” or “Exe”), and which framework you want to target. This new project type can also be used for .NET 4.8 or older frameworks, so you could convert your projects to the new format first, and worry about the .NET 6-specific issues later.

What happens here is that by default, the project will include all files in the project directory, and any subdirectories, and will automatically recognize standard files such as .cs and .resx, and interpret them as the correct type. While it is possible to set the EnableDefaultItems property to false, and go back to the old behaviour of having explicit definitions for all included files, I would advise against it, for at least two reasons:

  • Your project files remain simple and clean when all your files are included automatically.
  • When files and folders are included automatically, it more or less forces you to keep your folders clean: no leftover files that are not relevant to the project, or that should not be in the folder containing the code, but should be stored elsewhere.

So this type of project will force you to exclude files and folders that should not be used in the project, rather than including all files you need.
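
For example, if there happen to be files in the project folder that should not be part of the build, you exclude them explicitly (the folder name here is just a hypothetical example):

<ItemGroup>
  <!-- Keep scratch/generated files out of the build -->
  <Compile Remove="Scratch\**\*.cs" />
  <None Remove="Scratch\**" />
</ItemGroup>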

I would recommend just backing up your old project files, replacing them with this new ‘empty’ project file, and loading it in Visual Studio (not right away; you may want to read about some possible issues, like with NuGet packages, below). You will immediately see which files it already includes automatically. If your projects are clean enough (merely consisting of .cs and .resx files), they should be more or less correct automatically. From there on, you simply need to add the references back, to other projects, to other assemblies, and to NuGet packages. And you may need to set ‘copy to output’ settings for non-code files that should also be in the application folder.

As mentioned above, you probably want to start by just converting the project for your EXE, and get the project building and running that way, with all the other projects running via the .NET 4-to-6 compatibility wrapping layer. Then you will want to work your way back, via the references. A good help is to display the project build order, and work from the bottom to the top of the list, converting the projects one by one, and creating a working state of the application at every step. Right-click your project in the Solution Explorer, choose “Build Dependencies->Project Build Order…”:

The solution format has not been modified, so you do not need to do anything there. As long as your new projects have the same path/filename as the old ones, they will be picked up by the solution as-is.

Now to get to some of the details you may run into.

NuGet issues

NuGet packages were originally more or less separate from the project file, and stored in a separate packages.config file. The project would reference them as normal references. NuGet was a separate process that had to be run in advance, in order to import the packages into the NuGet folder, so that the references in the project would be correct.

Not anymore: NuGet packages are now referenced directly in the project, with a PackageReference tag. MSBuild can now also import the NuGet packages itself, so no separate tool is required anymore.
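
Such a package reference looks like this (the package name and version here are just an example):

<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
</ItemGroup>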

This functionality was also added to the old project format. So I would recommend first converting your NuGet packages to PackageReference entries in your project, getting rid of the packages.config file.

This also implies that if you build your application not from Visual Studio itself, but through an automated MSBuild-based build process, such as a build server (Jenkins, Bamboo, TeamCity or whatnot), you may need to modify your build process. You may need to replace a NuGet stage with an MSBuild stage that restores the packages (running MSBuild with the -t:restore switch).

So I would recommend first converting your projects from packages.config to PackageReference, and getting your build process in order, before converting the projects to the new format. Visual Studio can help you with this. In the Solution Explorer, expand the tree view of your project, go to the References-node, right-click and choose “Migrate packages.config to PackageReference…”:

AssemblyInfo issues

Another major change in the way the new project format works is that by default, it generates the AssemblyInfo from the project, rather than from an included AssemblyInfo.cs file. This will result in compile errors when you also have an AssemblyInfo.cs file, because a number of attributes will be defined twice.

Again, you have the choice of either deleting your AssemblyInfo.cs file (or at least removing the conflicting attributes), and moving the info into the project file, or you can change the project to restore the old behaviour.

For the latter, you can add the GenerateAssemblyInfo setting to your project, and set it to false, like this:

<PropertyGroup>
   <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
</PropertyGroup>

Limitations of .NET Core

So, .NET is now supported on other platforms than Windows, such as linux and macOS? Well yes and no. It’s not like Java, where your entire application can be written in a platform-agnostic way. No, it’s more like there is a lowest common denominator for the .NET 6 environment, which is supported everywhere. But various additional frameworks/APIs/NuGet packages will only be available on some platforms.

In the project example above, I used “net6.0” as the target framework. This is actually that ‘lowest common denominator’. There are various OS-specific targets. You will need to use those when you want to use OS-specific frameworks, such as WinForms or WPF. In that case, you need to set it to “net6.0-windows”. Note that this target framework will also affect your NuGet packages. You can only install packages that match your target.
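
So a minimal Windows Forms project targeting .NET 6 would look something like this (a sketch; your project may need more properties):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net6.0-windows</TargetFramework>
    <UseWindowsForms>true</UseWindowsForms>
  </PropertyGroup>

</Project>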

There is also a hierarchy for target frameworks: the framework “bubbles up”. So a “net6.0” project can only import projects and NuGet packages that are also “net6.0”. As soon as there is an OS-specific component somewhere, like “net6.0-windows”, then all projects that reference it, also need to be “net6.0-windows”.

This can be made even more restrictive by also adding an OS version at the end. In “net6.0-windows”, version 7 or higher is implied, so it is actually equivalent to “net6.0-windows7.0”. You can also use “net6.0-windows10.0” for example, to target Windows 10 or higher only.

In practice this means that if you want to reuse your code across platforms, you may need to define a platform-agnostic interface-layer with “net6.0”, to abstract the platform differences away. Then you can implement different versions of these interfaces in separate projects, targeting Windows, linux and macOS.

Separate x86 and x64 runtimes

Another difference between .NET 4.8 and .NET 6 is that the runtimes are now separated into two different installers, where .NET 4.8 would just install both the x86 and x64 environment on x64 platforms.

This implies that a 64-bit machine may not have a 32-bit runtime installed, and as such can only run code that specifically targets x64 (or AnyCPU). That may not matter for you if you already had separate builds for 32-bit and 64-bit releases (or had dropped 32-bit already, and target 64-bit exclusively, as we should eventually do). But if you had a mix of 32-bit and 64-bit applications, because you assumed that 64-bit environments could run the 32-bit code anyway, then you may need to go back to the drawing board.

Of course you could just ask the user to install both runtimes, or install both automatically. But I think it’s better to try and keep it clean, and not rely on any x86 code at all for an x64 release.

Use of AnyCPU

While on the subject of CPU architectures, there is another difference with .NET 6, and that relates to the AnyCPU target. In general it still means the same as before: the code is compiled for a neutral target, and can be run on any type of CPU.

There is just one catch, and I’m not sure what the reasoning is behind it. That is, for some reason you cannot run an Exe built for AnyCPU on an x86 installation. The runtime will complain that the binary is not compatible. The same binary will run fine on an x64 installation.

I have found that a simple workaround is to build an Exe that is specifically configured to build for x86. Any assemblies that you include, can be built with AnyCPU, and will work as expected.
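
That boils down to setting the platform target in the Exe project, something like this:

<PropertyGroup>
  <!-- Build the executable itself for x86; referenced assemblies can remain AnyCPU -->
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>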

It is a small glitch, but easy enough to get around.

Detecting and installing .NET Core

Another problem I ran into, as .NET Core is still quite a fresh platform, is that it may not be supported by the environment that you create your installers with. In my case I had installers built with the WiX toolset. This does not come with out-of-the-box detection and installation of any .NET Core runtimes yet. What’s worse, the installer itself relies on .NET in order to run, and custom code is written against the .NET Framework 4.5.

This means that you would need to install the .NET Framework just to run your installer, while your application needs the .NET 6 runtime, and the .NET Framework is not required at all once the application is installed. So that is somewhat sloppy.

Mind you, Microsoft includes a stub in every .NET Core binary that generates a popup for the user, and directs them to the download page automatically:

So, for a simple interactive desktop application, that may be good enough. However, for a clean, automated installation, you will want to take care of the installation yourself.

I have not found a lot of information on how to detect a .NET Core installation programmatically. What I have found is that Microsoft recommends using the dotnet command-line tool, which has a --list-runtimes switch to report all runtimes installed on the system. Alternatively, they say you can scan the installation folders directly.

As you may know, the .NET Framework could be detected by looking at the relevant registry key. With .NET Core I have not found any relevant registry keys. I suppose Microsoft deliberately chose not to use the registry, in order to have a more platform-agnostic interface. The dotnet tool is available on all platforms.

Also, a quick experiment told me that apparently the dotnet tool also just scans the installation folders. If you rename the folder that lists the version, e.g. changing 6.0.1 to 6.0.2, then the tool will report that version 6.0.2 is installed.

So apparently that is the preferred way to check for an installation. I decided to write a simple routine that executes dotnet --list-runtimes and then just parses the output into the names of the runtimes and their versions. I wrapped that up in a simple statically linked C++ program (compiled to x86), so it can be executed on a bare-bones Windows installation, with no .NET on it at all, neither Framework nor Core. It will then check and install/upgrade the .NET 6 desktop runtime. I also added a simple check via GetNativeSystemInfo() to see if we are on an x86 or x64 system, so it selects the proper runtime for the target OS.
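
My actual checker is a small native C++ program, for the reason described above, but to illustrate the idea, here is a sketch of the same logic in C# (the method name is mine, and the line format is based on what current dotnet versions print):

using System;
using System.Diagnostics;

// Sketch of checking for the .NET 6 desktop runtime by parsing 'dotnet --list-runtimes'.
// Each line of output looks like: "Microsoft.WindowsDesktop.App 6.0.1 [C:\...\shared\...]"
static class RuntimeCheck
{
    public static bool HasDesktopRuntime6()
    {
        var psi = new ProcessStartInfo("dotnet", "--list-runtimes")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        try
        {
            using var proc = Process.Start(psi);
            string output = proc.StandardOutput.ReadToEnd();
            proc.WaitForExit();
            foreach (var line in output.Split('\n'))
            {
                var parts = line.Trim().Split(' ');
                if (parts.Length >= 2 &&
                    parts[0] == "Microsoft.WindowsDesktop.App" &&
                    parts[1].StartsWith("6."))
                    return true;
            }
        }
        catch (Exception)
        {
            // dotnet is not on the PATH at all: no .NET Core runtime installed
        }
        return false;
    }
}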

Workarounds with DllExport/DllImport

Lastly, I want to get into some more detail on interfacing with legacy .NET assemblies, which are not directly compatible with .NET 6. I ran into one such library, which I believe made use of the System.Net.Http.HttpClient class. At any rate, it was one of the rare cases where the compatibility wrapper failed, because it could not map the calls of the legacy code onto the equivalent .NET 6 code, since it is not available.

This means that this assembly could really only be run in an actual .NET Framework environment. Since the assembly was a closed-source third-party product, there was no way to modify it. But there are ways around this. What you need is some way to run the assembly inside a .NET Framework environment, and have it communicate with your .NET 6 application.

The first idea I had was to create a .NET Framework command-line tool, which I could execute with some command-line arguments, and parse back its output from stdout. It’s a rather crude interface, but it works.

Then I thought about the UnmanagedExports project by Robert Giesecke, which I had used in the past. It allows you to add [DllExport] attributes to static methods in C# to create DLL exports, basically the opposite of using [DllImport] to import methods from native code. You can use this to call C# code from applications written in non-.NET environments such as C++ or Delphi. The result is that when the assembly is loaded, the proper .NET environment is instantiated, regardless of whether the calling environment is native code or .NET code.

Mind you, that project is no longer maintained, and there’s a similar project, known as DllExport, by Denis Kuzmin, which is up-to-date (and also supports .NET Core), so I ended up using that instead.

So I figured that if this works when you call a .NET Framework 4.8 assembly from native C++ code, it may also work if you call it from .NET 6 code. And indeed it does. It’s still a bit messy, because you still need a .NET 4.8 installation on the machine, and you will be instantiating two virtual machines, one for the Core code and one for the Framework code. But the interfacing is slightly less clumsy than with a command-line tool.

So in the .NET 4.8 code you will need to write some static functions to export the functionality you need:

class Test
{
    // [DllExport] exposes this static method as a native DLL export
    [DllExport]
    public static int TestExport(int left, int right)
    {
        return left + right;
    }
}

And in the .NET 6 code you will then import these functions, so you can call them directly:

[DllImport("Test.dll")]
public static extern int TestExport(int left, int right);
...
public static void Main()
{
    // Example values; the call goes through the DLL export of the .NET 4.8 assembly
    int left = 2, right = 3;
    Console.WriteLine($"{left} + {right} = {TestExport(left, right)}");
}

Conclusion

That should be enough to get you off to a good start with .NET 6. Let me know how you get on in the comments. Please share if you run into other problems when converting, or even better, if you find solutions to them.

Posted in Software development, Software news | Leave a comment

Running anything Remedy/Futuremark/MadOnion/UL in 2020

There has always been a tie between Futuremark and the demoscene. It all started with the benchmark Final Reality, released by Remedy Entertainment in 1997.

Remedy Entertainment was a gaming company, founded by demosceners from the legendary Future Crew and other demo groups. Remedy developed an early 3D acceleration benchmark tool for VNU European Labs, known as Final Reality, which showed obvious links to the demoscene: the demo-like design of its parts, the name “Final Reality” being a reference to the Future Crew demo Second Reality, and the fact that a variation of Second Reality’s classic city scene was included in Final Reality.

After this first benchmark, a separate company focused on benchmarking was founded, which was to become Futuremark. After releasing 3DMark99, they changed their name to MadOnion.com. Then after releasing 3DMark2001, they changed back to Futuremark Corporation. Eventually, after being acquired in 2018 by UL, they changed the name to UL Benchmarks.

With every major milestone of PC hardware and software, generally more or less coinciding with new versions of the DirectX API and/or new generations of hardware, they released a new benchmark to take advantage of the new features, and push it to the extremes.

Traditionally, each benchmark also included a demo mode, which added a soundtrack, and generally had extended versions of the test scenes, and a more demo-like storytelling/pacing/syncing to music. As a demoscener, I always loved these demos. They often had beautiful graphics and effects, and great music to boot.

But, can you still run them? UL Benchmarks was nice enough to offer all the legacy benchmarks on their website, complete with registration keys: Futuremark Legacy Benchmarks – UL Benchmarks

Or well, they put all of them up there, except for Final Reality (perhaps because it was released by Remedy, not by Futuremark/MadOnion). But I already linked that one above.

I got all of them to run on my modern system with Windows 10 Pro x64 on it. I’ll give a quick rundown of how I got them running, starting from the newest.

3DMark11, 3DMark Vantage, 3DMark06, 3DMark05 and 3DMark03 all installed and ran out-of-the-box.

3DMark2001SE installed correctly, but the demo would not play. Looking at the error log revealed that it had problems playing back sound (which also explains why the regular tests would work: they have no sound). Selecting the Windows 8 compatibility mode fixes the sound, and the whole demo runs fine.

3DMark2000 was a bit more difficult. On my laptop it installed correctly, but on my desktop, the installer hung after unpacking. The trick is to go to Task Manager, find the setup.exe process in the Details tab, right-click it and select “Analyze wait chain”. It will tell you what the process is waiting for. In my case it was “nvcontainer.exe”. So I killed all processes by that name, and the setup continued automatically.

Now 3DMark2000 was installed properly, but it still did not work correctly. There is a check in there to see if you have at least 4 MB of video memory. Apparently, on a modern video card with multiple gigabytes of memory, the check overflows and thinks you have less than 4 MB. It then shows a popup, and immediately closes after you click it away. So I disassembled the code, found the check, and simply patched it out. Now it works fine.

If you want to patch it yourself, use a hex editor and change the following bytes in 3DMark2000.exe:

Offset 69962h: patch 7Dh to EBh
Offset 69979h: patch 7Dh to EBh
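
If you would rather not edit the bytes by hand, here is a minimal sketch of applying the same patch programmatically in C#. The BytePatcher class and its Patch helper are my own illustration, not part of any existing tool; 7Dh is the opcode of a short conditional jump (JGE) and EBh a short unconditional jump (JMP), so the patch presumably makes the code always take the branch that passes the memory check. Make a backup of the executable before trying this.

using System.IO;

class BytePatcher
{
    // Overwrite specific bytes in a file, verifying the expected original value first
    static void Patch(string file, params (int offset, byte expected, byte patched)[] patches)
    {
        byte[] data = File.ReadAllBytes(file);
        foreach (var (offset, expected, patched) in patches)
        {
            if (data[offset] != expected)
                throw new InvalidDataException($"Unexpected byte at offset {offset:X}h in {file}");
            data[offset] = patched;
        }
        File.WriteAllBytes(file, data);
    }

    static void Main()
    {
        // 3DMark2000: turn the two conditional jumps (7Dh) into unconditional jumps (EBh)
        Patch("3DMark2000.exe", (0x69962, 0x7D, 0xEB), (0x69979, 0x7D, 0xEB));
    }
}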

XL-R8R dates from the same era as 3DMark2000, and I ran into the same issue of setup.exe getting stuck, and having to analyze the wait chain to make it continue. It did not appear to have a check for video memory, so it ran fine after installation.

3DMark99Max was more difficult still. The installer is packaged with a 16-bit WinZip self-extractor, and you cannot run 16-bit Windows programs on a 64-bit version of Windows. Luckily you can extract the files with a program like 7-Zip or WinRAR, by right-clicking on 3DMark99Max.exe and selecting the option to extract it to a folder. From there, you can run setup.exe, and it should install properly.

Like 3DMark2000, 3DMark99Max also has a check that prevents it from running on a modern system. In this case it checks for DirectX 6.1 or higher, and on a modern system the check mistakenly concludes that the DirectX version is too low. Again, it was a simple case of disassembling, finding the check, and patching it out.

If you want to patch it yourself, use a hex editor and change the following byte in 3dmark.exe:

Offset 562CCh: patch 75h to EBh
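
The same hypothetical Patch helper sketched above for 3DMark2000 could apply this one too; 75h is a short conditional jump (JNE), again replaced with an unconditional JMP (EBh):

Patch("3dmark.exe", (0x562CC, 0x75, 0xEB));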

Then Final Reality, the last one. Like 3DMark99Max, it has a 16-bit installer. However, in this case the trick of extracting it does not help us: you can extract the setup files, but the setup.exe inside is itself 16-bit, so it still cannot run. Not to worry, there are ways around that. Initially I just copied the files from an older installation under a 32-bit Windows XP. But an even better solution is otvdm/winevdm.

In short, under Windows, x86 CPUs can generally only switch between two modes on the fly. A 32-bit version of Windows can switch between 32-bit and 16-bit environments, which allows it to run the 16-bit “NTVDM” (NT Virtual DOS Machine) environment, in which DOS and 16-bit Windows programs run. For 64-bit versions of Windows, there is a similar concept, known as Windows-on-Windows (WOW64), which allows you to run 32-bit Windows programs. The original NTVDM for DOS and Win16 programs, however, is no longer available.

Otvdm works around this by using software emulation of a 16-bit x86 CPU, and then uses part of the Wine codebase to translate the calls from 16-bit to 32-bit. This gives you very similar functionality to the real NTVDM environment on a 32-bit system, and allows you to run DOS and Win16 applications on your 64-bit Windows system, albeit with limited performance, since the CPU emulation is not very fast. Unlike most emulators it is not a sandboxed environment: it actually integrates with the host OS via 32-bit calls.

In our case, we can simply run the Final Reality installer via otvdm. Just download the latest stable release from the otvdm GitHub page, and extract it to a local folder. Then start otvdmw.exe, and browse to your fr101.exe installer file. It will then install correctly, directly onto the host system.

Funnily enough, there appear to be no compatibility problems at all with this oldest benchmark of them all, so that wraps things up.

Here is a video showing all the 3DMark demos:

And here is the XL-R8R demo:

And finally the Final Reality demo:


The Trumpians love their children too

After expressing my worries on the development of extreme leftism and Wokeness, I thought I should also express my concerns about the aftermath of the elections.

What worries me is how people responded to Trump’s loss, both in the US and in the rest of the world. I have seen images of people going out on the streets, cheering and chanting, and attacking Trump dolls and such.

There’s also a site “Trump Accountability” that wants to attack Trump supporters.

As I grew up during the Cold War, and I saw the demise of Communism and its dictators, this sort of thing reminds me very much of those days.

The big difference is: the US was not under a dictatorship, and although Trump may have lost the election, a LOT of people voted for him. I suppose this is the result of 4 years of sowing hatred against a president and his politics. And now that Trump is gone, it seems they want to go after his supporters. But for what? It is a democracy, and these people simply cast their democratic vote. That’s how it works. If you start oppressing people with the ‘wrong’ vote, you are actually CREATING a dictatorship, not getting rid of one. Oh, the irony.

At the time of writing, Trump has received around 71 million votes, and Biden around 74 million. And that is what troubles me. Are these people serious about persecuting such a large group? There aren’t 71 million fascists, racists, or whatever else you might think they are, in the US. That just doesn’t make sense at all. Most of these 71 million people are just normal people like you and me. They could be your neighbour, your hairdresser, your plumber, etc.

And that’s where I think things go wrong, badly. As a European, I live in a country that is FAR more leftist than the US. We are at Bernie Sanders level, if that. So theoretically I couldn’t be further removed from Trump/Republican/conservative voters: people who are generally quite religious, pro-life, against gay marriage, and often patriotic as well. I’m not even American, let alone a patriot for that country. So in that sense I suppose I have very little if anything in common with these people, and my views are very different.

Nevertheless, I had some interesting talks with some of these people. I recall one discussion where a religious Republican sincerely did not understand how you can value life if you don’t believe in God. That’s interesting: I had never even given that any thought, since I’m not religious, yet I do value life. And I can understand that to them, if God didn’t create life, they don’t see how life is in any way holy, or however you want to put it. Perhaps it is actually true that non-religious people value life less, who knows?

Thing is, they did make me think about it, and we had a discussion. I suppose my explanation is one of ‘theory of mind’: I know how it feels if I get hurt, and I know that I don’t want to die. So I can understand how that must feel for others as well, so I do not want to do that to them either. Which in some way comes back to what Christians already know: Don’t do unto others what you don’t want done unto you.

But the key here is that we could have this discussion, and we had mutual respect and understanding for our different views.

And I suppose that is also the problem with the people who are now cheering on the Democrat win… or actually Trump’s loss. While as a European I may be closer to the Democrat political view than the Republican one, this behaviour goes COMPLETELY against who I am, and how I want the world to be. I grew up with the values of tolerance and understanding. I suppose political views aren’t everything: even if I share your basic views, I cannot get behind you if I reject the way in which you actually conduct yourself (which I think goes against those very views anyway).

If half of the US cannot tolerate the other half simply for having different ideas on what is best for their country, then that is a recipe for disaster.

Getting back to the Cold War, the song Russians by Sting comes to mind:

Back when this song came out, the Cold War pitted the US against the USSR, with lots of propaganda in the media. Not everything you heard or read was true. In this song, Sting makes some very good points. Mostly that Russians are just people like you and me. Their government may have a certain ideology, but most Russians just try to lead their lives and mind their own business, just as we do.

As he says:

“In Europe and America there’s a growing feeling of hysteria”

“There is no monopoly on common sense
On either side of the political fence
We share the same biology, regardless of ideology
Believe me when I say to you
I hope the Russians love their children too”

“There’s no such thing as a winnable war
It’s a lie we don’t believe anymore”

I think these lines still contain a lot of truth. There’s hysteria in the US as well now, fueled by the mainstream media and social media, much like in the Cold War back then.

No monopoly on common sense on either side of the political fence. That’s so true. You can’t say the Democrat voters have all the common sense and the Republican voters have none, just based on who won an election.

And indeed, he says “we share the same biology”, which is of course even more true for Democrats vs Republicans than it was for the US vs USSR situation, as both are Americans. They may even be related.

And the most powerful statement: “I hope the Russians love their children too”. Of course he was referring to nuclear war, and mutually assured destruction. But it is very recognizable: Russians are humans too, of course they love their children, they are just like us. And it’s the same with Democrats and Republicans.

So I hope this also remains only a Cold War between Democrats and Republicans, and that both sides will accept the results, try to find ways to come together again, understand and tolerate each other, and work together for a better world.

Update: Clearly I am not the only one with such concerns. Douglas Murray has also written an article about his concerns over this polarization and division, and their possible outcomes. I suggest you read it.

More update: Bret Weinstein and Heather Heying also comment on some of these anti-Trump sentiments and actions. And they make a good point about what the REAL left is, or is supposed to be (as I said, that is also more or less my political position; I am by no means right-wing, certainly not by American standards), and how these far-left people have lost the plot.

And another update: James Lindsay, one of the authors of the book Cynical Theories, which I mentioned before, has actually decided to vote for Donald Trump, despite being a liberal rather than a conservative/Republican. He explains in the video below how he sees Wokeness as possibly the biggest threat to the country, and how Biden is unlikely to stop its rise. So at least some of the people who voted for Trump aren’t actually Trump/Republican/conservative supporters; they just thought the alternative was worse.

And yet another update: Here Jordan B Peterson talks about how liberals and conservatives should listen to each other, and keep each other balanced. One side is not necessarily wrong, and the other side is not necessarily right. They each have a different focus in life, and they need each other. Ideas may be good or bad depending on the situation in which they are applied. This is very much the message I wanted to give. I will probably return to this in more detail in a future post.
