GPU-accelerated video decoding

Picking up where I left off last time, I’d like to discuss a few more things about using video decoding libraries in your own graphics pipeline. Actually, the previous article was just meant as the introduction to the more technical implementation details, but I got carried away.

Video decoding… it’s a thing. A solved thing, if you just want your basic “I have a video and a window, and I want to display the video in the window”. There are various libraries that can do that for you (DirectShow, Media Foundation, VLC, FFmpeg etc), and generally they will use the GPU to accelerate the decoding. Which is pretty much a requirement for more high-end content, such as 4k 60 fps video.

But I want to talk about just how thin a line GPU-accelerated decoding walks. Because as soon as you want to do anything other than just displaying the video content in a window managed by the library, you run into limitations. If you want to do anything with the video frames, you usually just want to get a pointer to the pixel data of the frame in some way.

And that is where things tend to fall apart. Such a pointer will have to be in system memory. In the worst case (which used to be quite common) this triggers a chain reaction where the library does everything in system memory, which means it will also use the CPU to decode, rather than the GPU. See, as long as the library can manage the entire decoding chain from start to finish, and has the freedom to decide which buffers to allocate where, and how to output the data, things are fine. But as soon as you want to have access to these buffers in some way, it may fall apart.

In the average case, it may use GPU acceleration for the actual decoding, but then copy the internal GPU buffer to a system buffer. And then you will have to copy it BACK to the GPU in your own texture, to do some actual rendering with it. The higher the resolution and framerate, the more annoying this GPU<->CPU traffic is, because it takes up a lot of precious CPU time and bandwidth.

But there’s a tiny bit more to it…

RGB32 vs NV12 format

In the modern world of truecolour graphics, we tend to use RGB pixel formats for textures, the most common being 8 bits per component, packed into a 32-bit word. The remaining 8 bits may be left undefined, or used as an extra alpha (A) component. The exact order may differ between different hardware/software, so we can have RGBA, BGRA, ARGB and whatnot, but let's call this class 'RGB32', as in: "some variation of RGB, stored in a 32-bit word". That is the variation you will generally want to use when rendering with textures.

For video however, this is not ideal. YUV colour models were used by JPEG and MPEG, among other formats, because they have interesting properties for compression. A YUV colour model (again an umbrella term for various different but related pixel formats) takes human perception into consideration. It decomposes a colour into luminance (brightness) and chrominance (colour) values. The human eye is more sensitive to luminance than to chrominance, which means that you can store the chrominance values at a lower resolution than the luminance values, without having much of an effect on the perceived visual quality.

In fact, getting back to the old analog PAL and NTSC formats: These formats were originally black-and-white, so they contained only the luminance of the signal. When colour information (chrominance) was added later, it was added at a lower resolution than the luminance. PAL actually uses YUV, and NTSC uses the similar YIQ encoding. The lower resolution of the chroma signal leads to the phenomenon of artifacting, which was exploited on CGA in 8088 MPH.

In the digital world, the Y (luminance) component is stored at the full resolution, and the U and V (chrominance) components are stored at a reduced resolution. A common format is 4:2:0, which means that for every 4 Y samples, 1 U sample and 1 V sample are stored. In other words, for every 2×2 block of pixels, all Y values are stored, and only the average U and V values of the block are stored.

When converting back to RGB, the U and V components can be scaled back up, usually with a bilinear filter or such. This can easily be implemented in hardware, so that the pixel data can be stored in the more compact YUV layout, reducing the required memory footprint and bandwidth when decoding video frames. With RGB32, you need 32 bits per pixel. With a YUV 4:2:0 format, for 4 pixels you need to store a total of 6 samples, so 6*8 = 48 bits. That is effectively 48/4 = 12 bits per pixel, so only 37.5% of RGB32. That matters.

When you want to get access to the frame data yourself, you generally have to tell the decoder which format to decode to. This is another pitfall where things may fall apart, performance-wise. That is, a lot of hardware-accelerated decoders will decode into a YUV-layout. If you specify that you want to decode the frame into an RGB32 format, this may cause the decoder to choose a decoding path that is partially or even entirely run on the CPU, and as such will perform considerably worse.

In practice, the most common format that accelerated decoders will decode to is NV12. For an overview of NV12 and various other pixel formats, see this MSDN page. In short, NV12 is a format that stores YUV 4:2:0 data in a single buffer, with the full-resolution Y plane first, followed by a plane of interleaved U and V samples at half resolution:

(Figure: NV12 memory layout)

This format is supported in hardware on a wide range of devices, and is your best bet for efficient accelerated GPU decoding.
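To make the layout concrete, here is a minimal sketch of how the plane sizes work out, assuming an even width and height and no row padding (a real decoder surface will have a row pitch that you must query from the API, rather than assuming it equals the width):

#include <cstddef>

struct NV12Layout
{
    size_t lumaSize;    // full-resolution Y plane
    size_t chromaSize;  // interleaved UV plane at half resolution
    size_t totalSize;   // whole NV12 buffer
};

NV12Layout DescribeNV12(size_t width, size_t height)
{
    NV12Layout layout;
    layout.lumaSize   = width * height;                   // one Y sample per pixel
    layout.chromaSize = (width / 2) * (height / 2) * 2;   // one U and one V per 2x2 block
    layout.totalSize  = layout.lumaSize + layout.chromaSize; // = width * height * 3 / 2, i.e. 12 bits per pixel
    return layout;
}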

What’s more: this format is also supported as a texture format, so for example with Direct3D11, you can use NV12 textures directly inside a shader. The translation from YUV to RGB is not done automatically for you though, but can be done inside the shader.

The format is a bit quirky: it is a single buffer that contains two sets of data, at different resolutions. Direct3D11 solves this by allowing you to create two shader views on the texture. For the Y component, you create an ID3D11ShaderResourceView with the DXGI_FORMAT_R8_UNORM format. For the U and V components, you create an ID3D11ShaderResourceView with the DXGI_FORMAT_R8G8_UNORM format. You can then bind these views as two separate textures to the pipeline, and read the Y component from the R component of the R8_UNORM view, and the U and V components from the R and G components of the R8G8_UNORM view respectively. From there you can do the usual conversion to RGB.
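A minimal sketch of what that looks like, assuming device is your ID3D11Device and nv12Texture is an NV12 texture that was created with the shader-resource bind flag (error handling omitted):

#include <d3d11.h>

// Two shader resource views on the same NV12 texture:
// lumaSRV exposes the Y plane as R8_UNORM, chromaSRV exposes the
// interleaved UV plane as R8G8_UNORM at half resolution.
D3D11_SHADER_RESOURCE_VIEW_DESC desc = {};
desc.ViewDimension             = D3D11_SRV_DIMENSION_TEXTURE2D;
desc.Texture2D.MostDetailedMip = 0;
desc.Texture2D.MipLevels       = 1;

ID3D11ShaderResourceView* lumaSRV = nullptr;
desc.Format = DXGI_FORMAT_R8_UNORM;
device->CreateShaderResourceView(nv12Texture, &desc, &lumaSRV);

ID3D11ShaderResourceView* chromaSRV = nullptr;
desc.Format = DXGI_FORMAT_R8G8_UNORM;
device->CreateShaderResourceView(nv12Texture, &desc, &chromaSRV);

// In the pixel shader you then sample Y from lumaSRV (the R component) and U/V
// from chromaSRV (the R and G components), and apply the usual YUV-to-RGB
// conversion (BT.601 or BT.709 coefficients, depending on the content).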

So the ideal way to decode video is to have the hardware decode it to NV12, and then give you access to that NV12 buffer.

Using Media Foundation

With Media Foundation, it is possible to share your Direct3D11 device between your application and the Media Foundation accelerated decoders. This can be done via the IMFDXGIDeviceManager, which you can create with the MFCreateDXGIDeviceManager function. You can then use IMFDXGIDeviceManager::ResetDevice() to connect your D3D11 device to Media Foundation. It is important to set your device to multithread-protected via the ID3D10Multithread interface first.
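A rough sketch of that setup, assuming d3dDevice is your existing ID3D11Device, ideally created with D3D11_CREATE_DEVICE_VIDEO_SUPPORT (error handling omitted):

#include <d3d11.h>
#include <d3d10.h>   // for ID3D10Multithread
#include <mfapi.h>   // for MFCreateDXGIDeviceManager; link with mfplat.lib

// Media Foundation will use the device from its own worker threads,
// so multithread protection must be enabled first.
ID3D10Multithread* multithread = nullptr;
d3dDevice->QueryInterface(IID_PPV_ARGS(&multithread));
multithread->SetMultithreadProtected(TRUE);
multithread->Release();

// Create the DXGI device manager and connect our device to it.
UINT resetToken = 0;
IMFDXGIDeviceManager* deviceManager = nullptr;
MFCreateDXGIDeviceManager(&resetToken, &deviceManager);
deviceManager->ResetDevice(d3dDevice, resetToken);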

This IMFDXGIDeviceManager can then be connected, for example, to your IMFSourceReader by setting its MF_SOURCE_READER_D3D_MANAGER attribute. As a result, any GPU acceleration done through D3D11 will now be done with your device, so the resources created belong to your device, and can be accessed directly.
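Continuing the sketch above, the device manager is passed to the source reader via its creation attributes (the file name is just a placeholder):

#include <mfapi.h>
#include <mfreadwrite.h>   // link with mfreadwrite.lib and mfplat.lib

// MFStartup(MF_VERSION) must have been called before this point.
IMFAttributes* attributes = nullptr;
MFCreateAttributes(&attributes, 1);
attributes->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, deviceManager);

IMFSourceReader* reader = nullptr;
MFCreateSourceReaderFromURL(L"video.mp4", attributes, &reader);
attributes->Release();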

A quick-and-dirty way to get to the underlying DXGI buffers is to query the IMFMediaBuffer of a video sample for its IMFDXGIBuffer interface. This interface allows you to get to the underlying ID3D11Texture2D via its GetResource method. And there you are. You have access to the actual D3D11 texture that was used by the GPU-accelerated decoder.

You probably still need to copy this texture into your own texture with the same format, because a texture you want to use in a shader needs to have the D3D11_BIND_SHADER_RESOURCE flag set, and the decoder usually does not set that flag. But since it is all done on the GPU, this is reasonably efficient.
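Putting it together, a minimal sketch of reading one sample and copying the decoded texture on the GPU, assuming reader and the device manager setup from before, context being your ID3D11DeviceContext, and myNV12Texture being your own NV12 texture created with D3D11_BIND_SHADER_RESOURCE (error and end-of-stream handling omitted):

#include <d3d11.h>
#include <mfreadwrite.h>
#include <mfobjects.h>   // for IMFDXGIBuffer

DWORD streamIndex = 0, streamFlags = 0;
LONGLONG timestamp = 0;   // in 100 ns units
IMFSample* sample = nullptr;
reader->ReadSample(MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
                   &streamIndex, &streamFlags, &timestamp, &sample);

IMFMediaBuffer* buffer = nullptr;
sample->GetBufferByIndex(0, &buffer);

IMFDXGIBuffer* dxgiBuffer = nullptr;
buffer->QueryInterface(IID_PPV_ARGS(&dxgiBuffer));

ID3D11Texture2D* decodedTexture = nullptr;
UINT subresource = 0;
dxgiBuffer->GetResource(IID_PPV_ARGS(&decodedTexture));
dxgiBuffer->GetSubresourceIndex(&subresource);   // decoders often hand out slices of a texture array

// Copy into our own shader-visible texture; the copy stays entirely on the GPU.
context->CopySubresourceRegion(myNV12Texture, 0, 0, 0, 0,
                               decodedTexture, subresource, nullptr);

decodedTexture->Release();
dxgiBuffer->Release();
buffer->Release();
sample->Release();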

Timing on external clock

Another non-standard use of video decoding frameworks is to take matters into your own hands, and output the audio and video frames synchronized to an external clock. By default, the decoder framework will just output the frames in realtime, based on whatever clock source it uses internally. But if you want to output to a device with an external clock, you need to sync the frames yourself.

With DirectShow and Media Foundation, this is not that difficult: every audio and video sample that is decoded is provided with a timestamp, with an accuracy of 100 ns. So you can simply buffer a number of samples, and send them out based on their timestamp, relative to the reference clock of your choice.
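The scheduling loop can be as simple as the sketch below. GetExternalClockTime(), Present() and the DecodedSample type are hypothetical names for your own clock source, output path and buffered sample data:

#include <deque>
#include <cstdint>

struct DecodedSample { int64_t timestamp; /* plus the actual frame/audio data */ };

int64_t GetExternalClockTime();        // hypothetical: current output-device time, in 100 ns units
void Present(const DecodedSample& s);  // hypothetical: hand the sample to the output device

void PumpPresentation(std::deque<DecodedSample>& queue)
{
    // Send out all buffered samples whose timestamp has been reached on the external clock.
    while (!queue.empty())
    {
        const DecodedSample& next = queue.front();   // samples are kept sorted by timestamp
        if (next.timestamp > GetExternalClockTime())
            break;                                   // not due yet, check again later
        Present(next);
        queue.pop_front();
    }
}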

For some reason, LibVLC only provides timestamps with the audio samples, not with the video samples it decodes. So that makes it difficult to use LibVLC in this way. Initially it did not have an easy way to decode frames on-demand at all, but recently they added a libvlc_media_player_next_frame() function to skip to the next frame manually. Then it is up to you to figure out what the frame time should be exactly.

One issue here, though, is that if you let the library decode the video in realtime, it will also automatically compensate for any performance problems. So it will automatically apply frame skipping when required. If you are decoding manually, at your own speed, then you need to handle the situation where the decoder cannot keep up and you cannot keep your decode buffer full. You may need to manually skip the playback position in the decoder ahead to stay in sync with the video output speed.

All in all, things aren't always that straightforward when you don't just let the video library decode the video by itself, and let it time and display the output itself.


Video codecs and 4k

Recently I was optimizing some code to reliably play 4k video content at 60 fps through a 3D pipeline on low-end hardware. And it gave me a déjà vu of earlier situations with video. It seems that there is this endless cycle of new video formats turning up, followed by a period required for the hardware to catch up. It also reminded me that I had wanted to write a blog about some issues you run into when decoding video. So I think this is finally the time to dive into video codecs.

The endless cycle

At its basis, video playback on computer hardware is a very resource-intensive process. Worst-case, you need to update all pixels in memory for every frame. So the performance depends on the number of pixels per frame (resolution), the colour depth (bits per pixel), and the frame rate (number of frames per second).

If we want to get a bit retro here, convincing video playback on a consumer PC more or less started when hardware cards such as the Video Blaster arrived on the market. This was in 1992, before local bus was a widespread thing. The ISA bus was too slow for anything other than playing video in a really small window in the low 320×200 resolution at 256 colours.

The Creative Video Blaster circumvented this issue by having its own video output on board, and having video encoding/decoding hardware. It uses a Chips & Technologies F82C9001 chip, which supports YUV buffers in various compressed formats (2:1:1, 4:1:1 and 4:2:2), and it can also perform basic scaling. This meant that the CPU could send compressed video over the ISA bus, and it could be decoded on-the-fly on the Video Blaster board, at a relatively high resolution and colour depth. It’s difficult to find exact information on its capabilities, but it appears to be capable of PAL and NTSC resolution, and supports ‘over 2 million colours’, which would indicate 21-bit truecolour, so 7 bits per component. So I think we can say that it is more or less “broadcast quality” for the standards of the day: still in the era of Standard Definition (SD) PAL and NTSC.

The original webpage for the first Video Blaster (model CT6000) is archived here. It apparently requires an “IBM PC-AT and higher compatibles”, but the text also says it is for 386 PCs. So I suppose in theory it will work in a 286, but the software may require a 386 for best performance/compatibility.

Anyway, it should be clear that a 386 with an ISA VGA card could not play video anywhere near that well. You really needed that special hardware. To give an indication… a few years later, CD-ROM games became commonplace, and Full Motion Video (FMV) sequences became common. For example, see the game Need For Speed from 1994, which requires a fast 486 with localbus VGA:

The video quality is clearly not quite broadcast-level. The resolution is lower (320×240), and it also uses only 256 colours. The video runs at 15 fps. This was the best compromise at the time for the CPU and VGA capabilities, without any special hardware such as the Video Blaster.

From there on it was an endless cycle of the CPU and video cards slowly catching up to the current standard, after which new standards would arrive, with higher resolutions, more colours, better framerates and better compression, which again required special hardware to play back the video in realtime.

We moved from SD to HD, from interlaced video to progressive scan, from MPEG-1 to MPEG-2, MPEG-4 and beyond, and now we are at 4k resolution.

I would say that 4k at 60 fps is currently the ‘gold standard’: that is the highest commonly available content at the moment, and it currently requires either a reasonably high-end CPU and video card to play it back without any framedrops, or it requires custom hardware in the form of a Smart TV with built-in SoC and decoder, or a set-top media box with a similar SoC that is optimized for decoding 4k content.

Broadcast vs streaming

I have mentioned 'broadcast quality' briefly. I guess it is interesting to point out that in recent years, streaming has overtaken broadcasting. Namely, broadcast TV quality, especially in the analog era, was always far superior to digital video, especially on regular consumer PCs. But when the switch was made to HD quality broadcasting, an analog solution would require too much bandwidth, and would require very high-end and thus expensive receiver circuitry. So for HD quality, broadcasting switched to digital signals (somewhere in the late 90s to early 2000s, depending on the area). They started using MPEG-encoded data, very similar to what you'd find on a DVD, and would broadcast these compressed packets as digital data over the air, or via satellite or cable. The data was essentially packed digitally into existing analog video channels. The end-user would require a decoder that would decompress the signal into an actual stream of video frames.

At this point, there was little technical difference between playing video on your PC and watching TV. The main difference was the delivery method: the broadcast solution could offer a lot of bandwidth to your doorstep, so the quality could be very high. Streams of 8 to 12 mbit/s for a single channel were no exception.

At the time, streaming video over the internet was possible, but most consumer internet connections were not remotely capable of these speeds, so video over the internet tended to be of much lower quality than regular television. Also, the internet does not offer an actual 'broadcasting' method: video is delivered point-to-point. So if 1000 people are watching a 1 mbit/s video stream at the same time, the video provider will have to deliver 1000 mbit/s of data. This made high-quality video over the internet very costly.

But that problem was eventually solved, as on the one hand, internet bandwidth kept increasing and cost kept coming down, and on the other hand, newer video codecs would offer better compression, so less bandwidth was required for the same video quality.

This means that a few years ago, we reached the changeover point where most broadcasters were still broadcasting HD at 720p or 1080i, while streaming services such as YouTube or Netflix would offer 1080p or better quality. Today, various streaming services offer 4k UHD quality, while broadcasting is still mostly stuck at HD resolutions. So if you want that 'gold standard' of 4k 60 fps video, streaming services are where you'll find it, rather than broadcasting services.

Interlacing

I really don't want to spend too much time on the concept of interlacing, but I suppose I'll have to at least mention it briefly.

As I already mentioned with digital HD broadcasting, bandwidth is a thing, also in the analog realm. The problem with early video is flicker. With film technology, the motion is recorded at 24 frames per second. But if it is displayed at 24 frames per second, the eye will see flickering when the frames are switched. So instead each frame is shown twice, effectively doubling the flicker frequency to 48 Hz, which is less obvious to the naked eye.

The CRT technology used for analog TV has a similar problem. You will want to refresh the screen at about 48 Hz to avoid flicker. So that would require sending an entire frame 48 times per second. If you want to have a reasonable resolution per frame, you will want about 400-500 scanlines in a frame. But the combination of 400-500 scanlines and 48 Hz would require a lot of bandwidth, and would require expensive receivers.

So instead, a trick was applied: each frame was split up in two ‘fields’. A field with the even scanlines, and a field with the odd scanlines. These could then be transmitted at the required refresh speed, which was 50 Hz for PAL and 60 Hz for NTSC. Every field would only require 200-250 scanlines, halving the required bandwidth.

Because the CRT's phosphor has some afterglow after the beam has scanned a given area, the even field was still somewhat visible as the odd field was drawn. So the two fields would blend together somewhat, giving a visual quality nearly as good as a full 50/60 Hz image at 400-500 lines.

Why is this relevant? Well, for a long time, broadcasting standards included interlacing. And as digital video solutions had to be compatible with analog equipment at the signal level, many early video codecs also supported interlaced modes. DVD for example is also an interlaced format, supporting either 480i for NTSC or 576i for PAL.

In fact, for HD video, the two common formats are 720p and 1080i. The interlacing works as simple form of data compression, which means that a 1920×1080 interlaced video stream can be transmitted with about the same bandwidth as a 1280×720 progressive one. 1080i became the most common format for HD broadcasts.

This did cause somewhat of a problem with PCs however. Aside from the early days of super VGA, PCs rarely made use of interlaced modes. And once display technology moved from CRT to LCD displays, interlacing actually became problematic. An LCD screen does not have the same ‘afterglow’ that a CRT has, so there’s no natural ‘deinterlacing’ effect that blends the screens together. You specifically need to perform some kind of digital filtering to deinterlace an image with acceptable quality.

While TVs also adopted LCD technology around the time of HD quality, they would always have a deinterlacer built-in, as they would commonly need to display interlaced content. For PC monitors, this was rare, so PC monitors generally did not have a deinterlacer on board. If you wanted to play back interlaced video on a PC, such as DVD video, the deinterlacing would have to be done in software, and a deinterlaced, progressive frame sent to the monitor.

This also means that streaming video platforms do not support interlacing, and when YouTube adopted HD video some years ago, they would only offer 720p and 1080p formats. With 1080p they effectively surpassed the common broadcast quality of HD, which was only 1080i.

Luckily we can finally put all this behind us now. There are no standardized interlaced broadcast formats for 4k, only progressive ones. Interlacing will soon be a thing of the past, together with all the headaches of deinterlacing the video properly.

Home Video

So far, I have only mentioned broadcast and streaming digital video. For the sake of completeness I should also mention home video. Originally, in the late 70s, there was the videocassette recorder (VCR) that offered analog recording and playback for the consumer at home. This became a popular way of watching movies at home.

One of the earliest applications of digital video for consumers was an alternative for the VCR. Philips developed the CD-i, which could be fitted with a first-generation MPEG decoder module, allowing it to play CD-i digital video. This was a predecessor of the Video CD standard, which used the same MPEG standard, but was not finalized yet. CD-i machines could play both CD-i digital video and Video CD, but other Video CD players could not play the CD-i format.

This early MPEG format aimed to fit a full movie of about 80 minutes, at a quality roughly equivalent to the common VHS format at the time, on a standard CD with about 700 MB of storage. VHS, being an analog format, did not deliver the full broadcast quality of PAL or NTSC. You had about 250 scanlines per frame, and the chrominance resolution was also rather limited, so effectively you had about 330 'pixels' per scanline.

VideoCD aimed at a similar resolution, and the standard arrived at 352×240 for NTSC and 352×288 for PAL. It did not support any kind of interlacing, so it output progressive frames at 29.97 Hz for NTSC, and 25 Hz for PAL. So in terms of pixel resolution, it was roughly the equivalent of VHS. The framerate was only half the broadcast field rate though, but still good enough for smooth motion (most movies were shot at 24 fps anyway).

VideoCD was an interesting pioneer technically, but it never reached the mainstream. Its successor, the DVD-Video, did however become the dominant home video format for many years. By using a disc with a much larger capacity, namely 4.7 GB, and an updated MPEG-2 video codec, the quality could now be bumped up to full broadcast quality PAL or NTSC. That is the full 720×576 resolution for PAL, at 50 Hz interlaced, or 720×480 resolution for NTSC at 60 Hz interlaced.

With the move from SD to HD, another new standard was required, as DVD was limited to SD. The Blu-ray standard won out eventually, which supports a wide range of resolutions and various codecs (which we will get into next), offering 720p and 1080i broadcast quality video playback at home. Later iterations of the standard would also support 4k. But Blu-ray was a bit late to the party. It never found the same popularity that VHS or DVD had, as people were moving towards streaming video services over the internet.

Untangling the confusion of video codec naming

In the early days of the MPEG standard (developed by the Moving Picture Experts Group), things were fairly straightforward. The MPEG-1 standard had a single video codec. The MPEG-2 standard had a single video codec. But with MPEG-4, things got more complicated. In more than one way. Firstly, the MPEG-4 standard introduced a container format that allowed you to use various codecs. This also meant that the MPEG-4 standard evolved over time, and new codecs were added. And secondly, there wasn’t a clear naming scheme for the codecs, so multiple names were used for the same codec, adding to the confusion.

A simple table of the various MPEG codecs should make things more clear:

MPEG standard           | Internal codename | Descriptive name
MPEG-1 (1993)           | H.261             | -
MPEG-2 (1995)           | H.262             | -
MPEG-4 Part 2 (1999)    | H.263             | -
MPEG-4 Part 10 (2004)   | H.264             | Advanced Video Coding (AVC)
MPEG-H Part 2 (2013)    | H.265             | High Efficiency Video Coding (HEVC)
MPEG-I Part 3 (2020)    | H.266             | Versatile Video Coding (VVC)

MPEG codecs

What is missing? MPEG-3 was meant as a standard for HDTV, but it was never released, as in practice, the updates required were only minor, and could be rolled into an update of the MPEG-2 standard.

H.263 is also not entirely accurate. The original H.263 was released in 1996, and is somewhat of a predecessor to MPEG-4, aimed mainly at low-bandwidth streaming. MPEG-4 Part 2 decoders are backwards compatible with the H.263 standard, but MPEG-4 Part 2 is more advanced than the original H.263 from 1996.

With MPEG-1 and MPEG-2, things were straightforward: there was one standard, one video codec, and one name. So nobody had to refer to the internal codename of the codec.

With MPEG-4, it started out like that as well. People could just refer to it as MPEG-4. But in 2004, another codec was added to the standard: the H.264/AVC codec. So now MPEG-4 could be either the legacy codec, or the new codec. The names of the standard were too confusing… MPEG-4 Part 2 vs MPEG-4 Part 10. So instead people referred to the codec name. Some would call it by its codename of H.264, others would call it by the acronym of its descriptive name: AVC. So MPEG-4, H.264 and AVC were three terms that could all mean the same thing.

With H.265/HEVC, it was again not clear what the preferred name should be, so both H.265 and HEVC were used. What's more, people would also still call it MPEG-4, even though strictly speaking it is part of the MPEG-H standard.

MPEG-I/H.266/VVC has not reached the mainstream yet, but I doubt that the naming will get any less complicated. The pattern will probably continue. And the MPEG-5 standard was also introduced in 2020 (with EVC and LCEVC codecs), which may make things even more confusing, once that hits the mainstream.

So if you don't know that H.264 and AVC are equivalent, or H.265 and HEVC for that matter, it's very confusing when one party uses one name to refer to the codec, and another party uses the other. Once you figure that out, it all clicks.

4k codecs

A special kind of confusion I have found is that it is often implied that you require special codecs for 4k video. But even MPEG-1 supports a maximum resolution of 4095×4095, and a maximum bandwidth of 100 mbit/s. So it is technically possible to encode 4k (3840×2160) content even in MPEG-1, at decent quality. In theory anyway. In practice, MPEG-1 has been out of use for so long that you may run into practical problems. A tool like Handbrake does not include support for MPEG-1 at all. It will let you encode 4k content in MPEG-2 however, which ironically it can store in an MPEG-4 container file. VLC actually lets you encode to MPEG-1 at 3840×2160 at 60 fps. You may find that not all video players will actually be able to play back such files, but there it is.

The confusion is probably because newer codecs require less bandwidth for the same level of quality. And if you move from HD resolution to 4k, you have 4 times as many pixels per frame, so roughly 4 times as much data to encode, resulting in roughly 4 times the bandwidth requirement for the same quality. So in practice, streaming video in 4k will generally be done with one of the latest codecs, in order to get the best balance between bandwidth usage and quality, for an optimal experience. Likewise, Blu-ray discs only have limited storage (50 GB being the most common), and were originally developed for HD. In order to fit 4k content on there, better compression is required.

But if you encode your own 4k content, you can choose any of the MPEG codecs. Depending on the hardware you want to target, it may pay off to not choose the latest codec, but the one that is best accelerated by your hardware. On some hardware, AVC may run better than HEVC.

Speaking of codecs, I have only mentioned MPEG so far, because it is the most common family of codecs. But there are various alternatives which also support 4k with acceptable performance on the right hardware. While MPEG is a widely supported standard, and the technology is quite mature and refined, there is at least one non-technical reason why other codecs may sometimes be preferred: MPEG is not free. A license is required for using MPEG. The license fee is usually paid by the manufacturer of a device. But with desktop computers, for example, this is not always the case. The licensing model also makes MPEG incompatible with certain open source licenses.

One common alternative suitable for 4k video is Google’s VP9 codec, released in 2013. It is similar in capabilities to HEVC. It is open and royalty-free, and it is used by YouTube, among others. As such it is widely supported by browsers and devices.

Another alternative is the Alliance for Open Media's AOMedia Video 1 (AV1), released in 2018. It is also royalty-free, and its license is compatible with open source. This Alliance includes many large industry players, such as Apple, ARM, Intel, Samsung, NVIDIA, Huawei, Microsoft, Google and Netflix. So widespread support is more or less guaranteed. AV1 is a fairly new codec, which is more advanced than HEVC, so it delivers more compression at the same quality. The downside is that because it's relatively new, and the compression is very advanced, it requires a quite powerful, modern CPU and GPU to play it back properly. So it is not that well-suited for older and more low-end devices.

In practice, you will have to experiment a bit with encoding for different codecs, at different resolutions, framerates and bitrates, to see which one is supported best, and under which conditions. I suppose the most important advice you should take away here is that you shouldn’t necessarily use the latest-and-greatest codecs for 4k content. There’s nothing wrong with using AVC, if that gives the best results on your hardware.

Hardware acceleration

One last thing I would like to discuss is decoding video inside a (3D) rendering context. That is, you want to use the decoded video as a texture in your own rendering pipeline. In my experience, most video decoding frameworks can decode video with acceleration effectively if you pass them a window handle, so they can display inside your application directly, and remain in control. However, if you want to capture the video frames into a graphics texture, there often is no standardized way.

The bruteforce way is to just decode each video frame into system memory, and then copy it into the texture yourself. For 1080p video you can generally get away with this approach. However, for 4k video, each frame is 4 times as large, so copying the data takes 4 times as long. On most systems, the performance impact of this is simply too big, and the video cannot be played in realtime without dropping frames.
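A sketch of what this brute-force path boils down to, assuming the decoder hands you an RGB32 frame in system memory (frameData and frameStride are hypothetical names) and videoTexture is a dynamic D3D11 texture. At 3840×2160 with 4 bytes per pixel that is roughly 33 MB per frame, or around 2 GB/s at 60 fps, just for this one copy:

#include <d3d11.h>
#include <cstring>

void UploadFrame(ID3D11DeviceContext* context, ID3D11Texture2D* videoTexture,
                 const BYTE* frameData, UINT frameStride, UINT width, UINT height)
{
    // videoTexture is assumed to be DXGI_FORMAT_B8G8R8A8_UNORM, D3D11_USAGE_DYNAMIC,
    // created with D3D11_CPU_ACCESS_WRITE. Error handling omitted.
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    context->Map(videoTexture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    for (UINT y = 0; y < height; y++)
    {
        // Copy row by row, because the texture's row pitch usually differs from the frame's stride.
        memcpy(static_cast<BYTE*>(mapped.pData) + y * mapped.RowPitch,
               frameData + y * frameStride,
               width * 4);   // 4 bytes per pixel for RGB32
    }
    context->Unmap(videoTexture, 0);
}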

For Windows, there is the DirectX Video Acceleration framework (DXVA), which should allow you to use GPU acceleration with both DirectShow and Media Foundation. So far I have only been able to get the frames in GPU memory with Media Foundation. I can get access to the underlying DirectX 11 buffer, and then copy its contents to my texture (which supports my desired shader views) via the GPU. It's not perfect, but it is close enough. 4k at 60 fps is doable in practice. It seems to be an unusual use-case, so I have not seen a whole lot in the way of documentation and example code for the exact things I like to do.

With VLC, there should be an interface to access the underlying GPU buffers in the upcoming 4.0 release. I am eagerly awaiting that release, and I will surely give this a try. MediaFoundation gives excellent performance with my current code, but access to codecs is rather limited, and it also does not support network streams very well. If VLC offers a way to keep the frames on the GPU, and I can get 4k at 60 fps working that way, it will be the best of both worlds.


Retro programming, what is it?

As you may have seen, in the comment section of my previous two articles, a somewhat confused individual has left a number of rather lengthy comments. I had already encountered this individual in the comments section of some YouTube videos (also with an Amiga/oldskool/retro theme), and had already more or less given up on having a serious conversation with this person. It is apparent that this person views things from an entirely different perspective, and is not capable of being open to other perspectives, making any kind of conversation impossible, because you simply hit the brick wall of their preconceptions at every turn.

Having said that, it did trigger me to reflect on my own perspective, and as such it may be interesting to formalize what retro/oldskool programming is.

The hardware

Perhaps it’s good to first discuss the wider concept of ‘retro computing’. A dictionary definition of the term ‘retro’ is:

imitative of a style or fashion from the recent past.

This can be interpreted in multiple ways. If we are talking about the computers themselves, the hardware, then there is a class of ‘retro computing’ that imitates machines from the 70s and 80s, that ‘8-bit’ feeling. Examples are the PICO-8 Fantasy Console or the Colour Maximite. These machines did not actually exist back then, but try to capture the style and fashion of machines from that era.

A related class is that of for example the THEC64 Mini and THEA500 Mini. While these are also not exact copies of hardware from the era, they are actually made to be fully compatible with the software from the actual machines. They are basically emulators, in hardware form. Speaking of emulators, of course most machines from the 70s and 80s have been emulated in software, and I already shared my thoughts on this earlier.

Also related to that are peripherals made for older machines, such as the DreamBlaster S2P. These are not necessarily built with components that were available in the 70s and 80s, but they can be used with computers from that era.

In terms of hardware, my interests are focused on actual machines from the 70s and 80s. So actual ‘classic’ hardware, not ‘retro’ hardware; the PICO-8 and Colour Maximite fall outside the scope. I mostly focus on IBM PCs and compatibles, Commodore 64 and Amiga, as I grew up with these machines, and have years of hands-on experience with them.

My interest in emulation is in service of this: I may sometimes use emulation for convenience when developing, reverse-engineering and such. And I may sometimes modify emulators to fix bugs or add new features. I may also sometimes use some 'retro' peripherals that make the job easier, or are more readily available than actual 'classic' peripherals. Such as the DreamBlaster S2P, or an LCD monitor for example.

The software

My blog is mainly about developing software, and the only software you can develop is new software, so in that sense it is always ‘retro programming’: new software, but targeting machines from a specific bygone era.

There are also people who discuss actual software from the past, more from a user perspective. That can be interesting in and of itself, but that is not for me. I do occasionally discuss software from the past, and sometimes reverse-engineer it a bit, to study its internals and explain what it is doing. But usually the goal of this is to obtain knowledge that can be used for writing new software for that class of hardware.

Anyway, I believe I already said it before, when I started my ‘keeping it real‘ series: I went back to programming old computers because they pose very different programming challenges to modern machines. It’s interesting to think about programming differently from your daily work. Also, it’s interesting that these machines are ‘fixed targets’. A Commodore 64 is always a Commodore 64. It will never be faster, have more capabilities, or anything. It is what it is, and everyone knows what it can and cannot do. So it is interesting to take these fixed limitations and work within them, trying to push the machine as far as it can go.

Why the comments are barking up the wrong tree

Getting back to the comments on the previous articles, this person kept arguing about the capabilities of certain hardware, or lack thereof, and made all sorts of comparisons with other hardware. Within the perspective explained above, it should be obvious why this is irrelevant.

Since I consider the machine I develop for a ‘fixed target’, it is not relevant how good or bad it is. It’s the playground I chose, so these are the rules of the game that I have to work with. And the game is to try and push the machine as far as possible within these rules.

The machines I pick also tend to be fairly common off-the-shelf configurations. Machines exactly as how most people remember them. Machines as people bought and used them, and how software from the era targeted them.

Yes, there may have been esoteric hardware upgrades and such available, which may have made the machines better. But that is irrelevant, as I don’t have these, and do not intend to use them. I prefer the ‘stock’ machines as much as possible.

So I am not all that interested in endless arguments about what hardware was better. I am much more interested in what you can make certain hardware do, no matter how good or bad it may be.

Related to that, as I said, I like to use machines in configurations as how most people remember them. This person kept referencing very high-end and expensive hardware, and then made comparisons to the Amiga, which was in an entirely different price class. I mean, sure, you could assume a limitless budget, and create some kind of theoretical machine on paper, which at a given point in history combined the most advanced and powerful hardware available on the market. But that wouldn’t be a realistic target for what I do: retro programming.

I like to write code that actually works on real machines that most people either still have from when they were young, or which they can buy second-hand easily, because there's a large supply of these machines at reasonable prices. And in many cases, the code will also work in emulators. If not, then the emulators need to be patched. I will not write my code around shortcomings of emulators. Real hardware will always be the litmus test.


An Amiga can’t do Wolfenstein 3D!

Like many people who grew up in the 80s and early 90s, gaming was a 2d affair for me. Scrolling, sprites and smooth animation were key elements of most action games. The Commodore 64 was an excellent gaming machine in the early part of the 80s, because of its hardware capabilities coupled with a low pricetag. In the latter part of the 80s, we moved to 16-bit machines, and the Amiga was the new gaming platform of choice, again offering silky smooth scrolling and animations, but because of advances in technology, we now got higher resolutions, more colours, better sound and all that.

But then, the stars aligned, and Wolfenstein 3D was released on PC. The stars of CPUs becoming ever faster, the PC getting more powerful video and audio hardware, and 3D gaming maturing. A first glimpse of what was to come was Catacomb 3-D by id Software, released in November 1991:

This game made use of the power of the 16-bit 286 processor, which was starting to become mainstream with PC owners, and the EGA video standard. The PC was not very good at action games, because it had no hardware sprites, and scrolling was very limited. But id Software saw that EGA’s quirky bitplane layout and ALU meant that it was relatively good at certain things. We’ve already seen that it is fast at filling large areas with a single colour, for polygon rendering for example. But it is also good at rendering vertical columns of pixels.

And that is the key to having fast texture-mapped 3D walls. By taking a simple 2D map with walls, and casting rays from the player's position in the viewing direction, you can make a simple perspective projection of the walls. Each ray gives you the distance from the player to the nearest visible wall, and based on that distance you render a scaled, texture-mapped vertical column for that screen position.
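In very rough pseudo-C++, the per-column projection looks something like the sketch below. This is not id's actual code; CastRay and DrawTexturedColumn are hypothetical helpers that the rest of a raycaster would provide:

#include <cmath>

struct Hit { float distance; int textureColumn; };
Hit  CastRay(float x, float y, float angle);                    // hypothetical: nearest wall along a ray
void DrawTexturedColumn(int x, int height, int textureColumn);  // hypothetical: scaled vertical strip

void RenderView(float playerX, float playerY, float playerAngle)
{
    const int   screenWidth = 320;
    const float fov         = 1.047f;                           // ~60 degree field of view
    const float projDist    = (screenWidth / 2) / std::tan(fov / 2);
    const float wallHeight  = 64.0f;                            // wall height in world units

    for (int x = 0; x < screenWidth; x++)
    {
        // Angle of the ray going through this screen column.
        float rayAngle = playerAngle + std::atan((x - screenWidth / 2) / projDist);
        Hit hit = CastRay(playerX, playerY, rayAngle);

        // Use the distance perpendicular to the view plane to avoid fisheye distortion.
        float perpDist     = hit.distance * std::cos(rayAngle - playerAngle);
        int   columnHeight = (int)(wallHeight * projDist / perpDist);

        DrawTexturedColumn(x, columnHeight, hit.textureColumn);
    }
}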

Catacomb 3-D was a good first step, but the game still felt rather primitive, with the limited EGA palette, and the gameplay not quite having the right speed and feel yet.

But only a few months later, in May 1992, id Software released the successor, where everything really came together. The developers figured out that the EGA trick of rendering scaled vertical columns works nearly the same in the newly discovered ‘mode X‘ of VGA. The big advantage was that you could now use the full 256 colours, which made for more vibrant textures and level design. The game itself was also refined, and now had just the right look-and-feel to become a true milestone in gaming. Things have never been the same since.

Here is an excellent overview of how Wolfenstein 3D evolved:

But… that change did not bode well for the Amiga. Suddenly, everything the Amiga was good at, was no longer relevant for these new 3D action games. What’s worse… these games relied on very specific properties of the EGA and VGA hardware. They did not translate well to the Amiga’s hardware at all.

And to add insult to injury, id Software followed up with DOOM the next year, which again took advantage of ever faster and more powerful PC hardware, and refined the 3D first-person shooter even further.

A few brave attempts were made on the Amiga to try and make Wolfenstein 3D-like or DOOM-like games for the platform, but sadly they could not hold a candle to the real thing:

As a result, the consensus was that the Amiga could not do 3D first-person shooters because its bitplane-oriented hardware was outdated, and unsuitable.

But all that depends on how you look at it. As you know, demosceners/retrocoders tend to look at these situations as a challenge. Sure, your hardware may not have the ideal featureset for a given rendering method… but you can still make the best of it. The key is to stop thinking in terms of EGA and VGA hardware, and instead think of ways to scale and render vertical columns as fast as possible on the Amiga hardware.

One very nice approach was shown at Revision 2019 by Dekadence. It runs on a standard Amiga 500, and achieves decent framerates while rendering with a decent amount of colours and detail:

Another interesting project is a port of the original Wolfenstein 3D-game, which is optimized for a stock Amiga 1200. It achieves good framerates by rendering at only half the horizontal resolution:

The Amiga 1200 has a 14 MHz CPU. We can compare it to the closest 286es, which are 12 MHz and 16 MHz, and those are just about adequate to run Wolfenstein 3D as well, albeit at a slightly reduced window size for better performance. So this is not a bad attempt at all, on the Amiga.

Another touch I really like is that it uses the original PC music for AdLib, and uses an OPL2 emulator to render the music to a set of samples.

Another really nice attempt is this DreadStein3D:

It makes use of the engine for the game Dread, which is currently in development. This game is actually aiming more at DOOM than at Wolfenstein 3D, but it has a very efficient renderer, and rendering Wolfenstein 3D-like levels can be done very well on an Amiga 500, as you can see.

Here is a good impression of what the Dread game actually looks like:

As you can see, it’s not *quite* like DOOM, in the sense that there are no textured floors and ceilings. And it does not support height differences in levels either. But it does offer various other features over Wolfenstein 3D, such as the skybox and the lights and shadows. So it is more of a Wolfenstein 3D++ engine (or a DOOM– engine).

And the performance is very good, even on a stock Amiga 500. So… all these years later, we can now finally prove that the Amiga indeed CAN do Wolfenstein 3D. All it took was to stop thinking in terms of the PC renderer and making poor conversions of the x86/VGA-optimized routines to Amiga hardware, and instead develop Amiga-optimized routines directly.

If you look closely, you’ll see that they have a ‘distinct’ look because of the way the rendering is performed. Britelite discussed the technical details of what became Cyberwolf in a thread over at the English Amiga Board. It gives you a good idea of how you have to completely re-think the whole renderer and storage of the data, to make it run efficiently on the Amiga. It has always been possible. It’s just that nobody figured out how until recently.


Do 8-bit DACs result in 8-bit audio quality?

During a “””discussion””” some weeks ago, I found that apparently some people think that any system that uses 8-bit DACs is therefore ‘8-bit quality’. A comparison was made between the Amiga and a Sound Blaster 1.0. Both may use 8-bit DACs, but the way they use them is vastly different. As a result, the quality is also different.

The Sound Blaster 1.0 is as basic as it gets. It has a single 8-bit DAC, which is mono. There is no volume or panning control. Samples are played via DMA. The sample rate is dictated by the DMA transfer that the DSP microcontroller performs, and can be set between 4 kHz and 23 kHz. In theory, the sample rate can be changed from one DMA transfer to another. But when playing continuous data, you cannot reprogram the rate without an audible glitch (as explained earlier).

I would argue that this is the basic definition of 8-bit audio quality. That is, the bit-depth of a digital signal defines how many discrete levels of amplitude the hardware supports. 8-bit results in 2^8 = 256 discrete levels of audio. This defines various parameters of your sound quality, including the dynamic range, the signal-to-noise ratio, and how large the quantization error/noise is.

This is all that the Sound Blaster does: you put in 8-bit samples, and it will send them out through the DAC at the specified sample rate. It does not modify or process the signal in any way, neither in the digital nor in the analog domain. So the signal remains 8-bit, 256 discrete levels.

The Amiga also uses 8-bit mono DACs. However, it has 4 of them, each driven by its own DMA channel. Two DACs are wired to the left output, and two DACs are wired to the right output, for stereo support. Also, each DAC has its own volume control, with 64 levels (6-bit). And this is where things get interesting. Because this volume control is implemented in a way that does not affect the accuracy of the digital signal. Effectively the volume control gives you additional bits of amplitude: they allow you to output a signal at more than 256 discrete levels.

If you only had one DAC per audio channel (left or right), this would be of limited use. Namely, you can play samples softer, while retaining the same level of quality. But you trade it in for the volume, the output level. However, the Amiga has two DACs per channel, each with their own individual volume control. This means that you can play a soft sample on one DAC, while playing a loud sample on the other DAC. And this means you actually can get more than 8-bit quality out of these 8-bit DACs.

Anyone who has ever used a MOD player or tracker on a Sound Blaster or similar 8-bit PC audio device, will know that it doesn’t quite sound like an Amiga. Why not? Because a Sound Blaster can’t do better than 8-bit quality. If you want to play a softer sample, you actually need to scale down the amplitude of the samples themselves, which means you are effectively using less than 8 bits for the sound, and the quality is reduced (more quantization noise, worse signal-to-noise ratio etc).

Likewise, if you want to play two samples at the same time, you need to add these samples together (adding two 8-bit numbers yields a 9-bit result), and then scale the result back down to 8-bit, meaning you lose some precision/quality.

Another difference with the Amiga is that the Amiga can set the replay rate for each of the 4 DACs individually. So you can play samples at different rates/pitches at the same time, without having to process the sample data at all. Whereas, as mentioned above, the Sound Blaster has a playback rate that is effectively fixed by the looping DMA transfer. This means that to play samples at different pitches, the samples have to be resampled relative to the playback rate. This generally also reduces quality. Especially with early MOD players this was the case, as your average PC still had a relatively slow CPU, and could only resample with a nearest-neighbour approach. This introduced additional aliasing in the resulting sound. Later MOD players would introduce linear interpolation or even more advanced filtering during resampling, which could mostly eliminate this aliasing.
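To make the difference concrete, here is a minimal sketch of software resampling as a MOD player on PC has to do it, using 16.16 fixed-point stepping (just an illustration, not code from any particular player):

#include <cstdint>
#include <vector>

// step = (sourceRate / outputRate) in 16.16 fixed point.
std::vector<int8_t> Resample(const std::vector<int8_t>& src, uint32_t step, bool linear)
{
    std::vector<int8_t> dst;
    uint32_t pos = 0;                                   // source position, 16.16 fixed point
    while ((pos >> 16) + 1 < src.size())
    {
        uint32_t i    = pos >> 16;
        uint32_t frac = pos & 0xFFFF;
        if (linear)
        {
            // Blend the two neighbouring samples according to the fractional position.
            int32_t s = (src[i] * (0x10000 - (int32_t)frac) + src[i + 1] * (int32_t)frac) >> 16;
            dst.push_back((int8_t)s);
        }
        else
        {
            dst.push_back(src[i]);                      // nearest-neighbour: fast, but aliases
        }
        pos += step;
    }
    return dst;
}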

Some clever coder also figured out that you can exploit the capabilities of the Amiga to play back samples of more than 8-bit quality. Namely, since you have two DACs per channel, and you can set one soft and one loud, and they are then mixed together, you can effectively break up samples of higher quality into a high portion and a low portion, and distribute them over the two DACs. This way you can effectively get 8+6 = 14-bit accurate DACs, so playing a stereo stream of 14-bit quality is possible on an Amiga. The AHI sound system provides standard support for this.
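The basic idea of the split looks roughly like this (a simplified sketch that ignores the sign and rounding details a real implementation such as AHI has to deal with):

#include <cstdint>

// Two hardware channels on the same side are used: one at volume 64 (full), one at volume 1.
// A volume-1 sample is 1/64th as loud, so its 6 useful bits extend the 8 bits of the loud
// channel to roughly 14 bits of resolution.
void SplitSample14(int16_t sample, int8_t& loudChannel, int8_t& softChannel)
{
    loudChannel = (int8_t)(sample >> 8);            // top 8 bits, played at volume 64
    softChannel = (int8_t)((sample & 0xFF) >> 2);   // next 6 bits, played at volume 1
}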

14-bit, now that isn't quite CD-quality, is it? Well… that depends. The next step up from 8-bit audio is generally assumed to be 16-bit. But that is a giant leap, and with 16-bit you are expected to be able to produce a dynamic range (and therefore signal-to-noise ratio) of about 96 dB, as per the CD specification. That requires quite accurate analog components.

Perhaps this is a good moment to give some quick rules-of-thumb when reasoning about digital audio and quality/resolution.

The first is known as the Nyquist-Shannon sampling theorem, which deals with sample rate. It says:

A periodic signal must be sampled at more than twice the highest frequency component of the signal

Which makes sense… If you want to represent a waveform, the most basic representation is its minimum and its maximum value. So that is two samples. So, Nyquist-Shannon basically says that the frequency range of your analog signal is limited to half the sample rate of your digital signal. So if you have 44.1 kHz, the maximum audible frequency you can sample is 44.1/2 = 22.05 kHz. In practice the limit is not quite that hard, and filtering is required to avoid nasty aliasing near the limit. So effectively at 44.1 kHz sampling rate you will get a maximum of about 20 kHz.

The second is the definition of the decibel. Decibels use a logarithmic scale where every step of ~6.02 dB indicates an amplitude that is twice as large. Combine that with binary numbers, where every bit added doubles the number of values that can be represented. This leads to the simple quick-and-dirty formula of: N bits == N*6.02 dB dynamic range. So our 8-bit DACs are capable of about 48 dB dynamic range (although some sources argue that because audio is signed, you should only take the absolute value, which means you should actually use N-1, and basically get a value that is ~6 dB lower. Clearly manufacturers tend to use the higher numbers, because it makes their products look better).
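Written out, the rule of thumb follows directly from the definition of the decibel for amplitudes:

\[
\text{dynamic range} \approx 20 \log_{10}\!\left(2^{N}\right) = N \cdot 20 \log_{10} 2 \approx 6.02\,N\ \text{dB}
\]

So N = 8 gives roughly 48 dB, and N = 16 roughly 96 dB.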

Although the CD has always been specced as 16-bit PCM data, ironically enough the first CD players weren’t always 16-bit. Most notably Philips (one of the inventors, the other being Sony) did not have the means to mass-produce 16-bit DACs for consumers yet. They had developed a stable 14-bit DAC, and wanted the CD to be 14-bit. Sony however did have 16-bit DACs and managed to get 16-bit accepted as the CD standard.

So what did Philips do? Well, they made CD players with their 14-bit DACs instead. However, they introduced a trick called 'oversampling': they ran the 14-bit DAC 4 times as fast (so at 176.4 kHz), which allowed them to 'noise shape' the signal at a high frequency, and then filter it down, to effectively get the full 16-bit quality from their 14-bit DAC. (Ironically enough, some 'audiophiles' now try to mod these old Philips CD players to bypass the oversampling and listen to the 14-bit DAC directly, which they of course claim sounds better, because oversampling would 'only be used to get better measurements on paper, but actually sounds worse'. The reality is probably that it does sound objectively 'worse', because the filters aren't designed to remove the aliasing you get once you remove the oversampling and noise-shaping feedback loop. But perhaps that added aliasing and distortion sounds 'subjectively' better to them, just as people say of tube amplifiers, or vinyl records.)

In fact, this oversampling trick had an interesting advantage in that it resulted in better linearity. A classic DAC is made using a resistor ladder (as we know from the Covox), and to get its response as linear as possible, you need very tight tolerances to get enough accuracy for a full 16-bit resolution. And the resistance may vary depending on temperature. This meant that building high quality 16-bit DACs was expensive. Also, these DACs generally had to be calibrated in the factory, to fine-tune the linearity. This made it even more expensive.

Another advantage of oversampling is that you now ran the DAC at extremely high frequencies, and the audible frequencies from the original source material are now nowhere near the Nyquist limit of the DAC. Which means that the analog filtering stage after the DAC can be far less steep, resulting in less distortion and noise.

So we quickly saw manufacturers taking this idea to the extreme: taking a 1-bit DAC and using a lot of oversampling (like 64 times, or some even 256 times), running it at extremely high frequencies to still get 16-bit quality from the DAC. The advantage: the DAC itself was just 1-bit, it was guaranteed to be linear. No calibration required. This meant we now had cheap DACs that delivered very acceptable sound quality.

By the time the successor to the CD came out, the Super Audio CD, 1-bit oversampling DACs were now so common, that the designers figured they could ‘cut out the middle man’. A Super Audio CD does not encode Pulse Code Modulation-samples (PCM), like a CD and most other digital formats. Instead, it encodes a 1-bit stream at 2.8224 MHz (64 times as high as 44.1 kHz, so ’64 times oversampling’), in what they call ‘Direct Stream Digital‘ (DSD), an implementation of Pulse Density Modulation (PDM). So now you could feed the digital data stream directly to a 1-bit DAC, without any need for converting or oversampling.

Ironically enough, modern DAC designs would eventually move back to using slightly more than 1-bit, to find the best possible compromise between the analog and the digital domain. So some modern DACs would use 2-bit to 6-bit internally. Which means that you would once again need to process the data on a Super Audio CD before sending it to a DAC which uses a different format, in the name of better quality.

Another interesting example of an audio device that isn't quite 16-bit is the AdLib Gold card. Although it was released in 1992, in a time when 16-bit sound cards were more or less becoming the standard, it only had a 12-bit DAC. Did it matter? Nah, not really. It was an excellent quality 12-bit design, so you actually did get 12-bit quality. Many sound cards may have been 16-bit on paper, but had really cheap components, so you had tons of noise and distortion, and would get nowhere near that 96 dB dynamic range anyway. Some of them are closer to 12-bit in practice (which is about 72 dB of dynamic range), or actually worse.

See also this excellent article, which explains bit-depth and quality (and how things aren't stair-stepped except with really basic DACs… which by now you should understand, since most devices use 1-bit DACs, there's no way they can have that kind of stair-stepping anyway).


Migrating to .NET Core: the future of .NET.

More than 20 years ago, Microsoft introduced their .NET Framework. A reaction to Java and the use of virtual machines and dynamic (re)compiling of code for applications and services. Unlike Java, where the Java virtual machine was tied to a single programming language, also by the name of Java, Microsoft opened up their .NET virtual machine to a variety of languages. They also introduced a new language of their own: C#. A modern language similar to C++ and Java, which allowed Microsoft to introduce new features easily, because they controlled the standard.

Then, in 2016, Microsoft introduced .NET Core, a sign of things to come (and a sign of confusion, because we used to have only one .NET, and now we had to distinguish between ‘Framework’ and ‘Core’). Where the original .NET Framework was mainly targeted at Windows and Intel x86 or compatible machines, .NET Core was aimed at multiple platforms and architectures, like Java before it. Microsoft also moved to an open source approach.

This .NET Core was not a drop-in replacement, but a rewrite/redesign. It had some similarities to the classic .NET Framework, but was also different in various ways, and would be developed alongside the classic .NET Framework for the time being, as a more efficient, more portable reimplementation.

On April 18th 2019, Microsoft released version 4.8 of the .NET Framework, which would be the last version of the Framework product line. On November 10th 2020, Microsoft released .NET 5. This is where the Framework and Core branches would be joined. Technically .NET 5 is a Core branch, but Microsoft now considered it mature enough to replace .NET 4.8 for new applications.

As you may know from my earlier blogs, I always say you should keep an eye on how new products and technologies are developing, so this was an excellent cue to start looking at .NET Core seriously. In my case I had already used an early version of .NET Core for a newly developed web-based application sometime in late 2016 to early 2017. I had also done some development for Windows Phone/UWP, which is also done with an early variation of the .NET Core environment, rather than the regular .NET Framework.

My early experiences with .NET Core-based environments were that it was .NET, but different. You could develop with the same C# language, but the environment was different. Some libraries were not available at all, and others might be similar to the ones you know from the .NET Framework, but not quite the same, so you might have to use slightly different namespaces, objects, methods or parameters to achieve the same results.

However, with .NET 5, Microsoft claims that it is now ready for prime time, also on the desktop, supporting Windows Forms, WPF and whatnot, with the APIs being nearly entirely overlapping and interchangeable. Combined with that is backward compatibility with existing code, targeting older versions of the .NET Framework. So I figured I would try my hand at converting my existing code.

I was getting on reasonably well, when Microsoft launched .NET 6 in November, together with Visual Studio 2022. This basically makes .NET 5 obsolete. Support for .NET 5 will end in May 2022. .NET 6 on the other hand is an LTS (Long-Term Support) version, so it will probably be supported for at least 5 or 6 years, knowing Microsoft. So, before I could even write this blog on my experiences with .NET 5, I was overtaken by .NET 6. As it turns out, moving from .NET 5 to .NET 6 was as simple as just adjusting the target in the project settings, as .NET 6 just picks up where .NET 5 left off. And that is exactly what I did as well, so we can go straight from .NET 4.8 to .NET 6.

You will need at least Visual Studio 2019 for .NET 5 support, and at least Visual Studio 2022 for .NET 6 support. For the remainder of this blog, I will assume that you are using Visual Studio 2022.

But will it run Cry… I mean .NET?

In terms of support, there are no practical limitations. With .NET 4.7, Microsoft moved the minimum OS support to Windows 7 with SP1, and that is still the same for .NET 6. Likewise, .NET Framework supports both x86 and x64, and .NET 6 does the same. On top of that, .NET 6 offers support for ARM32 and ARM64.

Sure, technically .NET 4 also supports IA64 (although with certain limitations, such as no WPF support), whereas .NET 6 does not, but since Windows XP was the last regular desktop version to be released for Itanium, you could not run the later updates of the framework anyway. If you really wanted, you could get Windows Server 2008 R2 SP1 on your Itanium, as the latest possible OS. Technically that is the minimum for .NET 4.8, but I don’t think it is actually supported. I’ve only ever seen an x86/x64 installer for it. Would make sense, as Microsoft also dropped native support for Itanium after Visual Studio 2010.

So assuming you were targeting a reasonably modern version of Windows with .NET 4.8, either server (Server 2012 or newer) or desktop (Windows 7 SP1 or newer), and targeting either x86 or x64, then your target platforms will run .NET 6 without issue.

Hierarchy of .NET Core wrapping

Probably the first thing you will want to understand about .NET Core is how it handles its backward compatibility. It is possible to mix legacy assemblies with .NET Core assemblies. The .NET 6 environment contains wrapper functionality which can load legacy assemblies and automatically redirect their references from the legacy .NET Framework to the new .NET environment. However, there are limitations. There is a strict hierarchy, where .NET Core assemblies can reference legacy assemblies, but not vice versa. So the compatibility only goes one way.

As you probably know, the executable assembly (the .exe file) contains metadata which determines the .NET virtual machine that will be used to load the application. This means that a very trivial conversion to .NET 6 can be done by only converting the project in your solution that generates this executable. The application will then be run by the .NET 6 environment, and all referenced assemblies will be run via the wrapper for .NET Framework to .NET 6.

In most cases, that will work fine. There are some corner-cases however, where legacy applications may reference .NET Framework objects that do not exist in .NET 6, or use third-party libraries that are not compatible with .NET 6. In that case, you may need to look for alternative libraries. In some cases you may find that there are separate NuGet packages for classic .NET Framework and .NET Core (such as with CefSharp, which has separate CefSharp.*.NETCore packages). Sometimes there are conversions of an old library done by another publisher.

And in the rare case where you cannot find a working alternative, there is a workaround, which we will get into later. But in most cases, you will be fine with the standard .NET 6 environment and NuGet packages. So let’s look at how to convert our projects. Microsoft has put up a Migration Guide that gives a high-level overview, and also provides some crude command-line tools to assist you with converting. But I prefer to dig into the actual differences in project files and things under the hood, so we have a proper understanding, and can detect and solve problems by hand.

Converting projects

The most important change is that project files now use an entirely different XML layout, known as “SDK-style projects”. Projects now use ‘sensible defaults’, and you opt-out of things, rather than opt-in. So your most basic project file can look as simple as this:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Library</OutputType>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>

</Project>

So you merely need to tell Visual Studio what kind of project it is (e.g. “Library” or “Exe”), and which framework you want to target. This new project type can also be used for .NET 4.8 or older frameworks, so you could convert your projects to the new format first, and worry about the .NET 6-specific issues later.
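
For example, assuming a simple console application, a converted project that still targets .NET Framework 4.8 could look like this (a hypothetical example, but using the same SDK-style layout):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net48</TargetFramework>
  </PropertyGroup>

</Project>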

What happens here is that by default, the project will include all files in the project directory and any subdirectories, and will automatically recognize standard files such as .cs and .resx, and interpret them as the correct type. While it is possible to set the EnableDefaultItems property to false, and go back to the old behaviour of having explicit definitions for all included files, I would advise against it, for at least two reasons:

  • Your project files remain simple and clean when all your files are included automatically.
  • When files and folders are automatically included, it more or less forces you to keep your folders clean, without leftover files that are not relevant to the project, or that should not be in the folder containing the code, but stored elsewhere.

So this type of project will force you to exclude files and folders that should not be used in the project, rather than including all files you need.
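
As a sketch of what that looks like: if your project folder contains things that should not be compiled or copied, you exclude them explicitly, rather than listing everything that should be included (the folder names here are made up):

  <ItemGroup>
    <Compile Remove="LegacyCode\**" />
    <None Remove="Scratch\**" />
  </ItemGroup>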

I would recommend backing up your old project files, replacing them with this new ’empty’ project file, and loading it in Visual Studio (though not right away: you may want to read about some possible issues, like with NuGet packages, below). You will immediately see which files it already includes automatically. If your projects are clean enough (merely consisting of .cs and .resx files), they should be more or less correct automatically. From there on, you simply need to add the references back: to other projects, to other assemblies, and to NuGet packages. And you may need to set ‘copy to output’ settings for non-code files that should also end up in the application folder.

As mentioned above, you probably want to start by just converting the project for your EXE, and get the application building and running that way, with all the other projects running via the .NET 4-to-6 compatibility wrapping layer. Then you will want to work your way back, via the references. A good aid is to display the project build order, and work from the bottom to the top of the list, converting the projects one by one, and keeping the application in a working state at every step. Right-click your project in the Solution Explorer, and choose “Build Dependencies->Project Build Order…”.

The solution format has not been modified, so you do not need to do anything there. As long as your new projects have the same path/filename as the old ones, they will be picked up by the solution as-is.

Now to get to some of the details you may run into.

NuGet issues

NuGet packages were originally more or less separate from the project file, and stored in a separate packages.config file. The project would reference them as normal references. NuGet was a separate process that had to be run in advance, in order to import the packages into the NuGet folder, so that the references in the project would be correct.

Not anymore: NuGet packages are now referenced directly in the project, with a PackageReference tag. MSBuild can now also import the NuGet packages itself, so no separate tool is required anymore.

This functionality was also added to the old project format. So I would recommend first converting your NuGet packages to PackageReference entries in your project, getting rid of the packages.config file.
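
A PackageReference entry is just an item in the project file. A hypothetical example (package name and version chosen purely for illustration):

  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
  </ItemGroup>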

This also implies that if you build your application not from Visual Studio itself, but via an automated build process with MSBuild, such as on a build server (Jenkins, Bamboo, TeamCity or whatnot), you may need to modify your build process. You may need to replace a NuGet stage with an MSBuild stage that restores the packages (running MSBuild with the -t:restore switch).
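
On such a build server, the two stages could look something like this (the solution name is made up):

msbuild MySolution.sln -t:restore
msbuild MySolution.sln -p:Configuration=Release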

So I would recommend first converting your projects from packages.config to PackageReference, and getting your build process in order, before converting the projects to the new format. Visual Studio can help you with this. In the Solution Explorer, expand the tree view of your project, go to the References node, right-click and choose “Migrate packages.config to PackageReference…”.

AssemblyInfo issues

Another major change in the way the new project format works is that, by default, it generates the AssemblyInfo from the project file, rather than from an included AssemblyInfo.cs file. This will result in compile errors when you also have an AssemblyInfo.cs file, because a number of attributes will be defined twice.

Again, you have the choice of either deleting your AssemblyInfo.cs file (or at least removing the conflicting attributes), and moving the info into the project file, or you can change the project to restore the old behaviour.

For the latter, you can add the GenerateAssemblyInfo setting to your project, and set it to false, like this:

<PropertyGroup>
   <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
</PropertyGroup>
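
If you go for the former option, deleting AssemblyInfo.cs, the usual attributes can be expressed as properties in the project file instead. A rough sketch (the values are placeholders):

<PropertyGroup>
   <AssemblyVersion>1.2.3.4</AssemblyVersion>
   <FileVersion>1.2.3.4</FileVersion>
   <Company>MyCompany</Company>
   <Product>MyProduct</Product>
   <Copyright>Copyright © MyCompany</Copyright>
</PropertyGroup>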

Limitations of .NET Core

So, .NET is now supported on platforms other than Windows, such as Linux and macOS? Well, yes and no. It’s not like Java, where your entire application can be written in a platform-agnostic way. No, it’s more like there is a lowest common denominator for the .NET 6 environment, which is supported everywhere. But various additional frameworks/APIs/NuGet packages will only be available on some platforms.

In the project example above, I used “net6.0” as the target framework. This is actually that ‘lowest common denominator’. There are various OS-specific targets. You will need to use those when you want to use OS-specific frameworks, such as WinForms or WPF. In that case, you need to set it to “net6.0-windows”. Note that this target framework will also affect your NuGet packages. You can only install packages that match your target.
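
For example, a Windows Forms project would use the Windows-specific target, and enable the Windows Forms support explicitly. A minimal sketch:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net6.0-windows</TargetFramework>
    <UseWindowsForms>true</UseWindowsForms>
  </PropertyGroup>

</Project>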

There is also a hierarchy for target frameworks: the framework “bubbles up”. So a “net6.0” project can only import projects and NuGet packages that are also “net6.0”. As soon as there is an OS-specific component somewhere, like “net6.0-windows”, then all projects that reference it, also need to be “net6.0-windows”.

This can be made even more restrictive by also adding an OS version at the end. In “net6.0-windows”, version 7 or higher is implied, so it is actually equivalent to “net6.0-windows7.0”. You can also use “net6.0-windows10.0” for example, to target Windows 10 or higher only.

In practice this means that if you want to reuse your code across platforms, you may need to define a platform-agnostic interface layer with “net6.0”, to abstract the platform differences away. Then you can implement different versions of these interfaces in separate projects, targeting Windows, Linux and macOS.
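
A minimal sketch of that idea, with hypothetical names: the interface lives in a “net6.0” project, and a Windows-specific implementation lives in a “net6.0-windows” project that references it.

// In the shared "net6.0" project:
public interface IMessageService
{
    void Show(string message);
}

// In the "net6.0-windows" project (with UseWindowsForms enabled):
public class WinFormsMessageService : IMessageService
{
    public void Show(string message) => System.Windows.Forms.MessageBox.Show(message);
}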

Separate x86 and x64 runtimes

Another difference between .NET 4.8 and .NET 6 is that the runtimes are now separated into two different installers, whereas .NET 4.8 would simply install both the x86 and x64 environments on x64 platforms.

This implies that a 64-bit machine may not have a 32-bit runtime installed, and as such can only run code that specifically targets x64 (or AnyCPU). That may not matter for you if you already had separate builds for 32-bit and 64-bit releases (or had dropped 32-bit already, and target 64-bit exclusively, as we should eventually do). But if you had a mix of 32-bit and 64-bit applications, because you assumed that 64-bit environments could run the 32-bit code anyway, then you may need to go back to the drawing board.

Of course you could just ask the user to install both runtimes, or install both automatically. But I think it’s better to try and keep it clean, and not rely on any x86 code at all for an x64 release.

Use of AnyCPU

While on the subject of CPU architectures, there is another difference with .NET 6, and that relates to the AnyCPU target. In general it still means the same as before: the code is compiled for a neutral target, and can be run on any type of CPU.

There is just one catch, and I’m not sure what the reasoning is behind it. That is, for some reason you cannot run an Exe built for AnyCPU on an x86 installation. The runtime will complain that the binary is not compatible. The same binary will run fine on an x64 installation.

I have found that a simple workaround is to build an Exe that is specifically configured to build for x86. Any assemblies that you include can be built with AnyCPU, and will work as expected.
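
In the project file of the Exe, that boils down to something like this (a sketch of one way to do it; the assemblies it references can stay on AnyCPU):

  <PropertyGroup>
    <PlatformTarget>x86</PlatformTarget>
  </PropertyGroup>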

It is a small glitch, but easy enough to get around.

Detecting and installing .NET Core

Another problem I ran into, as .NET Core is still quite a fresh platform, is that it may not be supported by the environment that you create your installers with. In my case I had installers built with the WiX toolset. This does not come with out-of-the-box detection and installation of any .NET Core runtimes yet. What’s worse, the installer itself relies on .NET in order to run, and custom code is written against the .NET Framework 4.5.

This means that you would need to install the .NET Framework just to run your installer, while your application needs the .NET 6 runtime, and the .NET Framework is not required at all once the application is installed. So that is somewhat sloppy.

Mind you, Microsoft includes a stub in every .NET Core binary that generates a popup for the user, and directs them to the download page automatically.

So, for a simple interactive desktop application, that may be good enough. However, for a clean, automated installation, you will want to take care of the installation yourself.

I have not found a lot of information on how to detect a .NET Core installation programmatically. What I have found is that Microsoft recommends using the dotnet command-line tool, which has a --list-runtimes switch to report all runtimes installed on the system. Alternatively, they say you can scan the installation folders directly.
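
The output of dotnet --list-runtimes is a simple list of runtime names, versions and install folders, one per line, roughly like this (the exact versions and paths will of course differ per system):

Microsoft.NETCore.App 6.0.1 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 6.0.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]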

As you may know, the .NET Framework could be detected by looking at the relevant registry key. With .NET Core I have not found any relevant registry keys. I suppose Microsoft deliberately chose not to use the registry, in order to have a more platform-agnostic interface. The dotnet tool is available on all platforms.

Also, a quick experiment told me that apparently the dotnet tool also just scans the installation folders. If you rename the folder that lists the version, e.g. changing 6.0.1 to 6.0.2, then the tool will report that version 6.0.2 is installed.

So apparently that is the preferred way to check for an installation. I decided to write a simple routine that executes dotnet --list-runtimes and then just parses the output into the names of the runtimes and their versions. I wrapped that up in a simple statically linked C++ program (compiled to x86), so it can be executed on a bare-bones Windows installation, with no .NET on it at all, neither Framework nor Core. It will then check and install/upgrade the .NET 6 desktop runtime. I also added a simple check via GetNativeSystemInfo() to see if we are on an x86 or x64 system, so it selects the proper runtime for the target OS.

Workarounds with DllExport/DllImport

Lastly, I want to get into some more detail on interfacing with legacy .NET assemblies, which are not directly compatible with .NET 6. I ran into one such library, which I believe made use of the System.Net.Http.HttpClient class. At any rate, it was one of the rare cases where the compatibility wrapper failed, because it could not map the calls of the legacy code onto the equivalent .NET 6 code, since it is not available.

This means that this assembly could really only be run in an actual .NET Framework environment. Since the assembly was a closed-source third-party product, there was no way to modify it. But there are ways around this. What you need is some way to run the assembly inside a .NET Framework environment, and have it communicate with your .NET 6 application.

The first idea I had was to create a .NET Framework command-line tool, which I could execute with some command-line arguments, and parse back its output from stdout. It’s a rather crude interface, but it works.

Then I thought about the UnmanagedExports project by Robert Giesecke, that I had used in the past. It allows you to add [DllExport] attributes to static methods in C# to create DLL exports, basically the opposite of using [DllImport] to import methods from native code. You can use this to call C# code from applications written in non-.NET environments such as C++ or Delphi. The result is that when the assembly is loaded, the proper .NET environment is instantiated, regardless of whether the calling environment is native code or .NET code.

Mind you, that project is no longer maintained, and there’s a similar project, known as DllExport, by Denis Kuzmin, which is up-to-date (and also supports .NET Core), so I ended up using that instead.

So I figured that if this works when you call a .NET Framework 4.8 assembly from native C++ code, it may also work if you call it from .NET 6 code. And indeed it does. It’s still a bit messy, because you still need a .NET 4.8 installation on the machine, and you will be instantiating two virtual machines, one for the Core code and one for the Framework code. But the interfacing is slightly less clumsy than with a command-line tool.

So in the .NET 4.8 code you will need to write some static functions to export the functionality you need:

class Test
{
    [DllExport]
    public static int TestExport(int left, int right)
    {
        return left + right;
    } 
}

And in the .NET 6 code you will then import these functions, so you can call them directly:

[DllImport("Test.dll")]
public static extern int TestExport(int left, int right);
...
public static void Main()
{
    Console.WriteLine($"{left} + {right} = {TestExport(left, right)}")
}

Conclusion

That should be enough to get you off to a good start with .NET 6. Let me know how you get on in the comments. Please share if you find other problems when converting. Or even better, if you find solutions to problems.


Running anything Remedy/Futuremark/MadOnion/UL in 2020

There has always been a tie between Futuremark and the demoscene. It all started with the benchmark Final Reality, released by Remedy Entertainment in 1997.

Remedy Entertainment was a gaming company, founded by demosceners from the legendary Future Crew and other demo groups. Remedy developed an early 3D acceleration benchmark tool for VNU European Labs, known as Final Reality, which showed obvious links to the demoscene: the demo-like design of its parts, the name “Final Reality” being a reference to the Future Crew demo Second Reality, and the fact that a variation of Second Reality’s classic city scene was included in Final Reality.

After this first benchmark, a separate company focused on benchmarking was founded, which was to become Futuremark. After releasing 3DMark99, they changed their name to MadOnion.com. Then after releasing 3DMark2001, they changed back to Futuremark Corporation. Eventually, after being acquired in 2018 by UL, they changed the name to UL Benchmarks.

With every major milestone of PC hardware and software, generally more or less coinciding with new versions of the DirectX API and/or new generations of hardware, they released a new benchmark to take advantage of the new features, and push it to the extremes.

Traditionally, each benchmark also included a demo mode, which added a soundtrack, and generally had extended versions of the test scenes, and a more demo-like storytelling/pacing/syncing to music. As a demoscener, I always loved these demos. They often had beautiful graphics and effects, and great music to boot.

But, can you still run them? UL Benchmarks was nice enough to offer all the legacy benchmarks on their website, complete with registration keys: Futuremark Legacy Benchmarks – UL Benchmarks

Or well, they put all of them up there, except for Final Reality (perhaps because it was released by Remedy, not by Futuremark/MadOnion). But I already linked that one above.

I got all of them to run on my modern system with Windows 10 Pro x64 on it. I’ll give a quick rundown of how I got them running, starting from the newest.

3DMark11, 3DMark Vantage, 3DMark06, 3DMark05 and 3DMark03 all installed and ran out-of-the-box.

3DMark2001SE installed correctly, but the demo would not play. Looking at the error log revealed that it had problems playing back sound (which explains why regular tests would work, they have no sound). But when you select the Compatibility mode for Windows 8, that fixes the sound, and the whole demo runs fine.

3DMark2000 was a bit more difficult. On my laptop it installed correctly, but on my desktop, the installer hung after unpacking. The trick is to go to Task Manager, find the setup.exe process in the Details tab, right-click it and select “Analyze wait chain”. It will tell you what the process is waiting for. In my case it was “nvcontainer.exe”. So I killed all processes by that name, and the setup continued automatically.

Now 3DMark2000 was installed properly, but it still did not work correctly. There is a check in there, to see if you have at least 4MB video memory. Apparently on a modern video card with multiple GBs of memory, the check overflows, and thinks you have less than 4MB. It then shows a popup, and immediately closes after you click on it. So I disassembled the code, found the check, and simply patched it out. Now it works fine.

If you want to patch it yourself, use a hex editor and change the following bytes in 3DMark2000.exe:

Offset 69962h: patch 7Dh to EBh
Offset 69979h: patch 7Dh to EBh
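
If you prefer not to do the patching by hand in a hex editor, a few lines of code will do it too. Here is a minimal sketch in C# (it assumes the exe is in the current directory, and writes a patched copy so the original stays untouched); the same approach works for the 3DMark99Max patch further down:

using System.IO;

class Patch3DMark2000
{
    static void Main()
    {
        byte[] data = File.ReadAllBytes("3DMark2000.exe");
        // Both checks are conditional short jumps (7Dh); turning them into
        // unconditional short jumps (EBh) skips the video memory check.
        data[0x69962] = 0xEB;
        data[0x69979] = 0xEB;
        File.WriteAllBytes("3DMark2000_patched.exe", data);
    }
}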

XL-R8R dates from the same era as 3DMark2000, and I ran into the same issue of setup.exe getting stuck, and having to analyze the wait chain to make it continue. It did not appear to have a check for video memory, so it ran fine after installation.

3DMark99Max was more difficult still. The installer is packaged with a 16-bit version of a WinZip self-extractor. You cannot run 16-bit Windows programs on a 64-bit version of Windows. Luckily you can just extract the files with a program like 7-Zip or WinRar, by just right-clicking on 3DMark99Max.exe and selecting the option to extract it to a folder. From there, you can just run setup.exe, and it should install properly.

Like 3DMark2000, there’s also a check in 3DMark99Max that prevents it from running on a modern system. In this case, it tries to check for DirectX 6.1 or higher, and the check somehow mistakenly thinks that the DirectX version is too low on a modern system. Again, a simple case of disassembling, finding the check, and patching it out.

If you want to patch it yourself, use a hex editor and change the following byte in 3dmark.exe:

Offset 562CCh: patch 75h to EBh

Final Reality then, the last one. Like 3DMark99Max, it has a 16-bit installer. However, in this case the trick of extracting it does not help us. You can extract the setup files, but in this case the setup.exe is still 16-bit, so it still cannot run. But not to worry, there are ways around that. Initially I just copied the files from an older installation under a 32-bit Windows XP. But an even better solution is otvdm/winevdm.

In short, under Windows, x86 CPUs can generally only switch between two modes on-the-fly. A 32-bit version of Windows can switch between 32-bit and 16-bit environments, which allows it to run a 16-bit “NTVDM” (NT Virtual DOS Machine) environment, in which it runs DOS and 16-bit Windows programs. For 64-bit versions of Windows, there’s a similar concept, known as Windows-on-Windows (WOW64), which allows you to run 32-bit Windows programs. The original NTVDM for DOS and Win16 programs is no longer available there.

Otvdm works around this by using software emulation for a 16-bit x86 CPU, and then uses part of the Wine codebase to translate the calls from 16-bit to 32-bit. This gives you very similar functionality to the real NTVDM environment on a 32-bit system, and allows you to run DOS and Win16 applications on your 64-bit Windows system, albeit with limited performance, since the CPU emulation is not very fast. Unlike most emulators it is not a sandboxed environment: it actually integrates with the host OS via 32-bit calls.

In our case, we can simply run the Final Reality installer via otvdm. Just download the latest stable release from the otvdm GitHub page, and extract it to a local folder. Then start otvdmw.exe, and browse to your fr101.exe installer file. It will then install correctly, directly onto the host system.

Funnily enough, there appear to be no compatibility problems with this oldest benchmark of them all, so that wraps it all up.

Here is a video showing all the 3DMark demos:

And here is the XL-R8R demo:

And finally the Final Reality demo:


The Trumpians love their children too

After expressing my worries on the development of extreme leftism and Wokeness, I thought I should also express my concerns about the aftermath of the elections.

What worries me is how people responded to Trump’s loss, both in the US and in the rest of the world. I have seen images of people going out on the streets, cheering and chanting, and attacking Trump dolls and such.

There’s also a site “Trump Accountability” that wants to attack Trump supporters.

As I grew up during the Cold War, and I saw the demise of Communism and its dictators, this sort of thing reminds me very much of those days.

The big difference is: the US was not under dictatorship, and although Trump may have lost the elections, a LOT of people voted for him. I suppose this is the result of 4 years of sowing hatred against a president and his politics. And now that Trump is gone, it seems they want to go after his supporters. But for what? It is a democracy, and these people simply cast their democratic vote. That’s how it works. If you start oppressing people with the ‘wrong’ vote, you are actually CREATING a dictatorship, not getting rid of one, oh the irony.

At the time of writing, Trump has received around 71 million votes, and Biden has received around 74 million votes. And that is what troubles me. Are these people serious about persecuting such a large group? There aren’t 71 million fascists, racists, or whatever you think they are, in the US. That just doesn’t make sense at all. Most of these 71 million people are just normal people like you and me. They could be your neighbour, your hairdresser, your plumber, etc.

And that’s where I think things go wrong, badly. As a European, I live in a country that is FAR more leftist than the US. We are at Bernie Sanders level, if that. So theoretically I couldn’t be further removed from Trump/Republican/conservative voters. People who are generally quite religious, pro-life, anti gay-marriage etc. And then they are often patriotic. I’m not even American, let alone a patriot for that country. So in that sense I suppose I have very little if anything in common with these people, and my views are very different.

Nevertheless, I had some interesting talks with some of these people. I recall one discussion where a religious Republican sincerely did not understand how you can value life if you don’t believe in God. That’s interesting, I never even gave that any thought, since I’m not religious, yet I do value life. And I can understand that to them, if God didn’t create life, then they don’t see how life is in any way holy, or however you want to put it. Perhaps it is actually true that non-religious people value life less, who knows?

Thing is, they did make me think about it, and we had a discussion. I suppose my explanation is one of ‘theory of mind’: I know how it feels if I get hurt, and I know that I don’t want to die. So I can understand how that must feel for others as well, so I do not want to do that to them either. Which in some way comes back to what Christians already know: Don’t do unto others what you don’t want done unto you.

But the key here is that we could have this discussion, and we had mutual respect and understanding for our different views.

And I suppose that is also the problem with the people who are now cheering on the Democrat win… or actually Trump’s loss. While as a European, I may be closer to the Democrat political view than the Republican one, this is something that goes COMPLETELY against who I am, and how I want the world to be. I grew up with the values of tolerance and understanding. I suppose political views aren’t everything. I cannot get behind you, even if I share your basic views, when I reject the way in which you actually conduct yourself (which I think goes against those very views anyway).

If half of the US cannot tolerate the other half simply for having different ideas on what is best for their country, then that is a recipe for disaster.

Getting back to the Cold War, the song Russians by Sting comes to mind:

Back when this song came out, the Cold War also pitted the US against the USSR, with lots of propaganda in the media. Not everything you heard or read was true. In this song, Sting makes some very good points. Mostly that Russians are just people like you and me. Their government may have a certain ideology, but most Russians just try to lead their lives and mind their own business, just as we do.

As he says:

“In Europe and America there’s a growing feeling of hysteria”

“There is no monopoly on common sense
On either side of the political fence
We share the same biology, regardless of ideology
Believe me when I say to you
I hope the Russians love their children too”

“There’s no such thing as a winnable war
It’s a lie we don’t believe anymore”

I think these lines still contain a lot of truth. There’s hysteria in the US as well now, fueled by the mainstream media and social media, much like in the Cold War back then.

No monopoly on common sense on either side of the political fence. That’s so true. You can’t say the Democrat voters have all the common sense and the Republican voters have none, just based on who won an election.

And indeed, he says “we share the same biology”, that is of course even more true for Democrats vs Republicans than it was for the US vs USSR situation, as both are Americans. They may even be related.

And the most powerful statement: “I hope the Russians love their children too”. Of course he was referring to nuclear war, and mutually assured destruction. But it is very recognizable: Russians are humans too, of course they love their children, they are just like us. And it’s the same with Democrats and Republicans.

So I hope this also remains only a Cold War between Democrats and Republicans, and both sides will accept the results, and try to find ways to come together again, understand and tolerate each other, and work together for a better world.

Update: Clearly I am not the only one with such concerns. Douglas Murray has also written an article about his concerns over this polarization, division and possible outcomes. I suggest you read it.

More update: Bret Weinstein and Heather Heying also comment on some of these anti-Trump sentiments and actions. And they make a good point about what the REAL left is, or is supposed to be (and as I said, that is also more or less my political position, I am by no means right-wing, certainly not by American standards), and how these far left people have lost the plot.

And another update: James Lindsay, one of the authors of the book Cynical Theories, which I mentioned before, has actually decided to vote for Donald Trump, despite being a liberal rather than a conservative/Republican. He explains in the video below how he sees Wokeness as possibly the biggest threat to the country, and how Biden is unlikely to stop its rise. So at least some people who voted Trump, aren’t actually Trump/Republican/conservative supporters, they just thought the alternative was worse.

And yet another update: Here Jordan B Peterson talks about how liberals and conservatives should listen to each other, and keep each other balanced. One side is not necessarily wrong, the other side is not necessarily right. They each have a different focus in life, and they need each other. Ideas may be good or bad depending on the situation in which they are applied. Very much the message I wanted to give. I will probably return to this in more detail in a future post.


The Cult of Wokeness, followup

The previous article was just meant as a quick overview and wake-up call. But I would like to say a few more things on the subject.

I have since read the book Cynical Theories by Helen Pluckrose and James Lindsay. I recommend that everyone reads this book, so that they are up-to-speed with the current Woke-mindset. At the very least, I suggest you read a review of the book, to get a rough idea. The review by Simon Jenkins gives a good quick overview of the topics that the book discusses. I will also repeat my recommendation to read some of the articles and background information on the New Discourses site.

I would like to elaborate on two things. Firstly there’s the pseudoscientific nature of it, which is what I am most concerned about, as I said before. Secondly, I also want to discuss some forms in which Woke has manifested itself in the real world.

Postmodernist philosophy

As you know I’ve done a write-up about the philosophy of science before. At university, this was taught in a number of courses in the first three years. I always took a liking to it. It is important to know what our current methods of science are exactly, and where they came from, how they evolved.

As you may have noticed, I did not cover postmodernism at all. That was not intentional. Postmodernism simply never crossed my path at the time. But now that it has, I went through my old university books and readers again, and indeed, there was no specific coverage of postmodernism at all. It seems that the only postmodernist philosopher that is referenced at all, is Paul Feyerabend.

Feyerabend is actually a somewhat controversial figure, as he wanted a sort of ‘anarchistic’ version of science, and rejected Popper’s falsification, for example. The university material I have, only spends one paragraph on him, merely to state that purely rational science is merely one extreme view, where Feyerabend represents the other extreme. They nuance it by saying that in practice, science operates somewhere in the gray area between these extremes.

And that brings me to the point I want to make. Postmodernism is extremely critical of society in general, and science specifically. There is some value to the ideas that postmodernism brings forward. At the same time, you should not take these ideas to the extreme. Also, the reason why they were not covered in the philosophy of science courses is that they did not actually produce new knowledge or useful methods. So they did not add anything ‘tangible’ to science, they merely brought more focus to possible pitfalls of bias, political interest and other ideologies.

There is some merit to their idea of systems that can be ‘rigged’ by having a sort of bias built-in. A bias that you might be able to uncover by looking at the way that people talk about things, the ‘discourses’. That the system and the bias are ‘socially constructed’.

After all, with ‘politically correct’ language we are basically doing exactly that: we choose to use certain words, and avoid certain other words, to shift the perception (bias) of certain issues. So in that sense it is certainly possible to create certain ‘biases’ socially, and language is indeed the tool to do this.

However, they see everything as systems of power and hierarchy, and the goal of the system is always to maintain the position of power at the cost of the lesser groups (basically a very dystopian view, like in the book 1984 by George Orwell). That is not necessarily always the case. For example, science is not a system of social power. Its goal is to obtain (objective and universal) knowledge, not to benefit certain groups at the cost of others. Heck, if anything proves that beyond a shadow of a doubt, then it must be the main topic I normally cover on this blog: hardware and software. Scientists have developed digital circuits, transistors, computer chips, CPUs etc., and developed many tools, algorithms etc. to put this hardware to use. As a result, digital circuits and/or computers are now embedded in tons of devices all around you in everyday life, making life for everyone easier and better. Many people have jobs that exist solely because of these inventions. Everyone benefits in various ways from all this technology.

And I think that’s where the cynicism comes in. Postmodernists try to find problems of power-play and ‘oppression’ in every situation. That is indeed a ‘critical’ and ‘skeptical’ way of looking at things, but it’s not critical and skeptical in the scientific sense.

Where it goes wrong is when you assume that the possible problems you unearth in your close reading of discourses are the only possible explanation, and therefore accept them as the truth. I am not sure if the original postmodern philosophers such as Foucault and Derrida actually meant to take their ‘Theory’ this far. But their successors certainly have.

This is most clear in Critical Race Theory, which introduces the concept of ‘intersectionality’ (a term coined by Kimberlé Crenshaw). The basic assumption here is that the postmodern ‘Theory’ of a racist system is the actual, real state of the world. Therefore all discourses must be a power-play between races. That assumption is certainly not correct in every situation, and most probably not even in the majority of situations.

The concept of intersectionality itself is another idea that may have some merit, but like the ‘social construct theories’, it does not apply as an absolute truth. As I already said in the previous post, in short intersectionality says that every person is part of any number of groups (such as gender, sex, sexual preference, race, etc). Therefore the prejudice against a person is also a combination of the prejudice against these groups. For example, a black woman is both black and a woman. Therefore she may receive prejudice for being black and for being a woman. But crucially, she will also receive prejudice for being a black woman. So intersectionality claims that prejudice against people is more than just the sum of the parts of the groups that they are part of. At the ‘intersections’ between groups, there are ‘unique’ types of prejudice felt only by people that are part of both groups.

So far, the concept of intersectionality makes sense. People can indeed be ‘categorized’ into various groups, and will be members of a collection of groups at a time. And some combinations of groups may lead to specific kinds of prejudice, discrimination and whatnot.

However, the problem with intersectionality and Critical (Race) Theory arises when you start viewing this intersectionality as the absolute truth, the entire reality, the one and only system. That is an oversimplification of reality. The common way of viewing people was as individuals: they may be part of certain groups, and may share commonalities with others, but they are still unique individuals, who have their own thoughts and make their own decisions. But viewing people through an intersectional lens turns into identity politics: people are essentially reduced to the stereotype of their intersectional position, and are all expected to think and act alike. And that obviously is taking things a step too far.

Another very serious problem is that instead of looking for rationality, objectivity, or fact, these concepts are denounced. The focus is put on the ‘lived experiences’ (anecdotal evidence) of these groups instead. In the intersectional hierarchy, the ‘lived experience’ of an oppressed group always takes precedence over that of an oppressing group. Therefore, a woman’s word is always to be believed over a man’s word, and a black person’s word is always to be believed over a white person’s word. If a woman says she experienced sexism, then it is considered a fact that there was sexism. If a black person says she experienced racism, then it is considered a fact that there was racism. Again, it is obvious how this can lead to false positives or exploitation of the system.

This is also where the system shows some of its obvious flaws and inconsistencies. Namely, these ‘lived experiences’ are subjective by definition, and as such, are viewed through the biased lens of the subject. This is exactly what caused people to develop the scientific method, to try and avoid bias, and reach objective views and rational explanations.

Postmodernism itself is supposed to be highly critical of biased discourses, but apparently bias is suddenly perfectly acceptable, and biased anecdotes are actually considered ‘true’ as long as the biased party is the one that is (subjectively) being ‘oppressed’. You just can’t make sense of this in any way. Intersectionality and Critical Race Theory are built on intellectual quicksand. It doesn’t make sense, and you can’t make sense of it, no matter how hard you try.

A nice example of how this ‘Theory’ can go wrong in practice can be found here, on this chart from New Discourses, under point 3:


As you can see, there are only two possible choices to make, and both can be problematized into a racist situation under Critical Race Theory. While these may be *possible* explanations, they aren’t necessarily correct. There are plenty of alternative, non-racist explanations possible. But not under Critical Race Theory.

And that is a huge problem: CRT sees racism everywhere, so you will run into a number of false positives. That does not seem very scientific. The only scientific value that postmodernism approaches could have, is to search for possible hypotheses. But you would still need to actually scientifically research these, in order to find out if they are correct. Instead, they are ‘reified’: assumed to be true. CRT assumes that “the system” is racist, and white people have all the power, by definition. An assumption, not a proven hypothesis. An assumption, that you are unable to prove scientifically, because the evidence simply is not there.

Woke in practice

First of all, perhaps I should define ‘Woke’ as an extreme form of political correctness. A lot of things are ‘whitewashed’ in the media by either not reporting them at all, or reporting them in a very biased way with ‘coded language’. On the other hand, some things are ‘blackwashed’ (is that even a term?) by grossly overstating things, or downright nefarious framing of things.

Now, one thing that really rubs me the wrong way, to say the least, is the way World War II, Hitler, Nazis, fascism etc. are being used in today’s discourse. And it only strengthens the view that we in Europe already had of the US: these people seem to have little or no clue about history or the rest of the world.

And I say “Europe” because that’s how they look at us. As if we’re just one country, like the US, and the actual countries in Europe are more like different ‘states’. In this Woke-era, it’s important to note that Europe is nothing like that. For starters, nearly every country has its own language. So as soon as I cross a border, it immediately becomes difficult to even talk to other people. And there are far more differences. Countries in Europe still have their own unique national identities, ethnicities if you like. And Europe is a very old continent, like Africa. So long before there were ‘countries’ and ‘borders’, there were different tribes, that each had their own unique languages and identities, ethnicities. There’s even a Wikipedia page on the subject (and also for Africa).

Of course, this also leads to people having stereotypes of these different countries, and making fun of them, or there being some kind of rivalry between them. Things that the Woke would probably call ‘racism’. Except, to the Woke, they’re all ‘white’ and ‘European’, or ‘black’ and ‘African’. So apparently there is a complexity to the real world that they just don’t understand. Probably because their country is only a few hundred years old, and only has a single language, and (aside from Native Americans) never had any tribes to speak of. All ethnicities just mostly blended together as they came from Europe and Africa, and settled in America, taking on the new American identity.

Speaking of getting things completely wrong… Apparently Americans refer to white as ‘Caucasian’. The first time I heard that was on some TV show, I suppose a description of a suspect or such: “Middle-aged male, Caucasian…” So I was surprised. What did they mean by ‘Caucasian’? I thought they meant he was probably of Russian descent or such, because it referred to the Caucasus, a mountain region in Russia. But when I looked it up, apparently it was a name used for ALL white people. Which NOBODY else uses.

If you look into the history of the term ‘Caucasian’, things get interesting. Apparently somewhere in the 18th century, anthropologists thought that there were 3 main races: ‘Caucasian’, ‘Mongoloid’ and ‘Negroid’. This theory is long considered outdated, but apparently that didn’t stop Americans from using the term. And in fact, aside from wrongly using the term ‘Caucasian’ to denote ‘white skin colour’, there is some connotation attached to the term as well. Caucasians, or more specifically the ‘Circassian’ subtype of Caucasian people was seen as the ‘most beautiful humans’ in some pseudoscientific racial theory. Well, from that sort of crazy stereotype, it’s only a small step towards ‘white supremacy’ I suppose.

Because, let me get this clear… To me, the only race that exists is the ‘human race’. As someone with a background in science/academia, clearly I support Darwin’s theory of evolution as the most plausible explanation we have (as does a large part of the Western world. The US perhaps being an exception, because it’s still quite religious, and people still believe in creationism, making evolution controversial. It is not even remotely controversial in Western European countries). Combining archaeological findings of human fossils and evolution, the history of human life goes back to Africa, where humans evolved from apes.

Over time, these humans spread across the entire globe, and groups of humans in different parts of the world continued to evolve independently. This led them to adapt to their local environment, which explains why humans in the north developed lighter skin. In the north there was less sunlight, and therefore less UV exposure; with lower UV levels, less melanin was needed, and lighter skin made it easier to still produce enough vitamin D. So where evolution in Africa prevented genetic variations with less melanin from being successful, in other areas of the world this constraint no longer held. Variation in eye and hair colour can be explained in a similar way, as these also depend on genetic variations and melanin levels.

So, this means that we are all descended from African people. It also means that skin colour variations are purely an adaptation to the environment, which can in no way be linked to any kind of perceived ‘superiority’, in terms of intelligence, behaviour or anything else. Skin colour is just that: skin colour.

What’s more, as humans developed better ways to travel, different groups that had evolved independently for many years would interact with each other again, so these separate evolutionary gene pools were mixed together again. So aside from any kind of ‘race’ based on skin colour being just some arbitrary point in evolution, even if you were to take such an arbitrary point in history, in practice most people would be a blend of these various arbitrary race definitions. For example, although the Neanderthal people are extinct, they have mixed with ‘modern’ humans, so various groups of people, mainly in Europe and Asia today, still carry certain Neanderthal-specific genes. It is believed that a genetic risk factor for severe Covid-19 can be traced back to these Neanderthal genes, for example.

The Neanderthals were a more primitive species of humans. It is not even clear whether they were capable of speech at all. Modern man is of the species of Homo Sapiens. And since Neanderthals never lived in Africa, they never mixed with African Homo Sapiens. So African (‘black’) people are genetically the most ‘pure’ modern humans. European (‘white’), Asian and even Native American people carry the more primitive Neanderthal genes. So if you want to make any kind of ‘racial argument’, then based on the gene-pool, ‘white superiority’ is a strange argument to make. After all, white people carry genes from a more primitive, archaic, extinct human species. Being extinct is hardly ‘superior’.

But there’s also a lot more recent mixing of genes. Because what some people call ‘white’ is basically everyone with a light skin colour. But that includes people with all different sorts of eye colours, hair colours, and also hair styles (straight, curly, frizzy etc). Which indicates that various gene pools, presumably from groups of people that evolved independently, have been mixed. To give a recent example, take the recently deceased guitar legend Eddie Van Halen. People may judge him as ‘white’, based on his appearance. But actually his mother was from Indonesia, so Asian. You see how quickly this whole ‘race’ thing goes bad. If you can’t even tell from the appearance of a ‘white’ person that one of his parents was of a different so-called ‘race’, then imagine how hard it is to tell whether a ‘white’ person had any ancestry of a different ‘race’ two or more generations back.

So this whole idea of ‘race’ is just pseudoscience. It’s a social construct. Which is quite ironic, given that currently the Woke ‘antiracists’ are pushing a racial ideology. Which brings me closer to what I wanted to discuss. Because who were the last major group to push a pseudoscientific racial ideology? That’s right, the Nazis. They somehow believed that the “Aryan race” was superior to all others, and the Jews were the worst. Their interpretation of what ‘Aryan’ was, was basically white European people, ideally with blue eyes and blond hair. So in other words, it was basically a form of ‘white supremacy’. The Nazi Germans thought they were the ‘chosen people’, and since they considered themselves superior, obviously they had to take over the world.

Now what the Americans need to understand is that although most of Europe was white, and a large part of the population could pass for their idea of ‘Aryan’, they certainly were not interested in these ideas. The Germans went along because of years of propaganda and indoctrination by the Nazis. And even then, many Germans only went along because they were under a totalitarian regime, and they had little choice. It is unclear how many Germans outside the Nazi party itself actually subscribed to the Nazi ideology. Germany also didn’t have a lot of allies in WWII (and even though Italy was also fascist, and was an ally, they were actually reluctant to adopt the racist ideology. Racism was not originally part of fascism. It was Hitler who added the racist element, and pressured Mussolini into adopting it).

Which explains why WWII was a war: Germany actually had to invade most countries, in order to push their Nazi ideology and get on with the Holocaust. Even then, there was an active resistance in many occupied countries, who tried to hide Jews and sabotage the Germans.

My country was one of those, and it still bears the scars of the war. Various cities had parts bombed. My mother lived in a relatively large house, which led to a German soldier being stationed there for a while (presumably to make sure they were not trying to hide Jews in the house). Concentration camps were built here, some of which are still preserved today, lest we forget.

And obviously WWII was not won by the Nazis. The Allies, who were again mostly white Western nations, clearly did not approve of the Nazis and their genocide.

So, given this short European perspective on WWII-related history, hopefully you might understand that terms like ‘Nazi’, ‘fascist’, ‘white supremacy’ and antisemitism resonate deeply with us, in a bad way.

And these days, a lot of people just use these terms gratuitously, mainly to insult people they don’t agree with, and dehumanize them (which is rather ironic, as this is exactly what the Nazis did to the Jews). Hopefully you understand that we take considerable offense at this.

And if you think that’s just extreme, activist people, guess again. It even includes people who should know better, and should be capable of balanced, rational thought. Such as Alec Watson of Technology Connections.

I give you this Twitter discussion:

This was related to the ‘mostly peaceful protests’ in Portland, as you can see. Clearly I did not agree with the quoted tweet, because it presented a false dichotomy: yes, government should be serving the people, but there are certain cases where it may be justified to beat people up on city streets (in order to serve the people). Namely, to stop rioters/domestic terrorists or otherwise violent groups. In Europe we are very familiar with this sort of thing, mostly with the removal of squatters from occupied buildings (who tend to put up quite violent protests) or when groups of fans from different sports teams attack each other before, during or after a game.

After all, that is the concept of the ‘monopoly on violence’ that the government has, through organizations such as the police and the army. We have very strict laws on guns and other arms, so we actually NEED the government to protect law-abiding citizens from violent/criminal people. Therefore, beating people up on the streets is perfectly fine, if that is what it takes to stop and arrest these people, in order to protect the rest.

So what I saw happening in Portland was a perfectly obvious situation where the government should stop these riots with force. Nothing wrong with beating up people who were trying to set a police station on fire, and throwing fireworks at the police etc. They were being violent and destroying property.

But debate ensued about that as well. Apparently Alec and other people did not consider destruction of property to be ‘violence’. That is funny, since you can find dictionary definitions that do. Apparently the meaning of words is being redefined here. Postmodernism/Wokeism at play. Aside from that, there are laws that state that the government needs to protect the people AND their property.

They were in denial about the destruction anyway, so I had to link to some Twitter feeds from people who reported on it, such as Andy Ngo and Elijah Schaffer. But as you can see, even then they were reluctant to accept it.

The conversation turned to Antifa and how they were fighting ‘fascists’. This is perhaps a good place for the second episode of Western European history. The history of Marxism and communism.

Because as you might know, Marxism was developed by Karl Marx and Friedrich Engels in Germany in the 19th century, most notably through the publication of The Communist Manifesto and the book Das Kapital. Communist parties were formed in various European countries, which aimed to introduce communism by means of a revolution. The first successful revolution was carried out in 1917 in Russia by the Bolsheviks, led by Vladimir Lenin. In 1922 they formed the Soviet Union, which gradually expanded communism to other countries, most notably after WWII. Namely, after Germany tried to invade the Soviet Union, Stalin pushed back hard, and eventually moved all the way up to Berlin, causing Hitler to commit suicide and forcing the Nazis to capitulate, before the Western Allies arrived.

Effectively, Soviet forces now occupied large parts of Eastern Europe, including a large part of Germany itself. Stalin converted these parts to communism and made them into satellite states of the Soviet Union. This also led to Germany being split up into the Western Bundesrepublik Deutschland and the Eastern Deutsche Demokratische Republik (the communist satellite state).

This lasted up to the early 90s. Which means that a considerable amount of European people either lived under communism, or lived near countries under communism. These communist countries were sealed off from the outside world, with the most notable example being the Berlin Wall. They were totalitarian states.

After this short introduction, let’s get back to Antifa, which originally started in Germany in the 1920s, around the same time that fascism arose in Europe.

Fascism started in Italy, under Mussolini, and was later adopted by Hitler. These movements had political parties with their own mobs/paramilitary groups, a sort of ‘private army’ used to intimidate political opponents and eventually get into power. Also of note is that they initially identified themselves as leftist/socialist (Nazi is short for Nationalsozialismus, the political identity of the NSDAP, the Nationalsozialistische Deutsche Arbeiterpartei). They were later classified as far-right, mainly because of their extreme nationalism, not because of their economic policies.

Communist parties used similar mob/paramilitary tactics, in order to organize their revolution and overthrow the government. Essentially, both were domestic terrorists. This more or less made communists and fascists ‘natural enemies’. Yet they also bear a remarkable resemblance to each other in many ways: not only the mob tactics, but also the use of propaganda, and eventually the establishment of a totalitarian state, without much room for individuals and their opinions. Everything had to be regulated, including the media, arts, music etc.

Cynically, one could say that communists and fascists are two sides of the same coin. Their tactics and goals are mostly the same; they only apply a slightly different ideology, either Marxism or Nazism. Both types of regimes caused millions of deaths. Communism even far more than Nazism, because it was more widespread and lasted longer. And not just in Russia either: the same happened in China and Cambodia, for example. Dissidents had to be eliminated, which led to genocide.

The original German Antifa was disbanded in 1933, when the Nazis rose to power. Nazism itself ended in 1945, with the end of WWII. Interestingly enough, the totalitarian regimes in the communist states kept alive the idea that fascism still existed in the Western states. And while the actual goal of the Berlin Wall was to keep people from escaping the dreadful DDR and reaching the free BRD, the regime fed the people propaganda that the wall had been put up to keep the fascists out (who, as already stated, didn’t exist anymore. But since the state controlled the media, the citizens had no idea about that, and only ‘knew’ whatever propaganda the state fed them).

And that brings me back to the current Antifa, which started in Portland. Ever since Trump started running for president, his opponents have tried to frame him as far-right, racist, white supremacist, fascist and whatnot. Technically, he is none of these things. The only thing that is somewhat accurate is that he is clearly a right-wing politician, both economically and in his nationalist focus. To what extent that is actually ‘far-right’ is debatable.

But everything else just seems to be propaganda and gaslighting. He neither says nor does racist things, there are no signs of white supremacy either, and clearly he’s not a fascist. Mussolini and Hitler were ‘technically’ chosen democratically, but actually used mobs to intimidate political opponents (and in Hitler’s case, there were also a number of assassinations, in the Night of the Long Knives). Trump did none of these things. He was democratically chosen by the people, without any kind of intimidation, and he hasn’t had anyone assassinated in order to get into power or to expand his power. He merely tries to implement his policies on healthcare, the economy, the environment and such. That is what presidents do.

He may be a lot of things (a populist, narcissistic, rude, anti-scientific etc), but he is not ‘the new Hitler’ or anything. He certainly hasn’t pushed any kind of racist ideology, let alone changed laws to that effect. He also has not made major changes to the law to create a totalitarian regime or anything (if he had, Antifa would have been eliminated quickly. Instead, most rioters are not even arrested at all, and the ones that are tend to get little or no sentence. Fascism is far more deadly than that, idiots. You wouldn’t live to tell the tale). So in no way does it look anything like fascism. What fascists is Antifa fighting? None; they’re gaslighting you.

Getting back to the discussion with Alec… I tried to make the point that Antifa (based on communism and fascism being two sides of the same coin) was acting far more fascist than any other group in the US at this time. They are the ones going out on the streets in large mobs, intimidating people with ‘the wrong opinion’, destroying property, looting, arson etc. Look up what fascists did in Italy and Germany, or what communist revolutionaries did in Russia, China etc. That looks nothing like what the Trump administration is doing, and everything like what Antifa is doing.

You’d have to be really stupid to not be able to look beyond the obvious ploy of calling an organization “Anti-Fascism”. It’s called Anti-Fascism, so it can’t be fascism, right? Wrong. It can, and it is. This is domestic terrorism, by the book. And like many terrorist organizations, they aren’t officially organized, but operate more in individual ‘cells’, making them harder to track.

But apparently Alec was so gaslit that he claimed that fascism didn’t mean what I thought it meant (as in: the proper definition found in many history books, encyclopedias etc). Because ‘words can change meaning over time’. There we are, postmodernist/Wokeist word games again. Words have meaning; you can’t just change them. Fascism clearly describes a movement that historically started with Mussolini, and pretty much ended after WWII. The term ‘fascism’ has since mainly been used politically/strategically, to undermine political opponents. Basically applying a Godwin. ‘Fascist’ has now come to mean “anyone that Antifa disagrees with”, or even “anyone that left-wing oriented people disagree with”.

Nobody has referred to themselves as ‘fascist’ since, and no regime or political movement has officially been labeled ‘fascist’ by anyone. We certainly don’t label the Trump administration a fascist government in Europe (or totalitarian, dictatorial, racist, or whatever else). But such labels are apparently used in the US itself by the left (even including prominent Democrats, all the way up to Biden), in order to take down the Trump administration. I think we are in a better position to judge that from the outside than the people who’ve been under the influence of the propaganda machine for years.

And of course, no actual debate was possible, so when I didn’t fall for the superficial word games, he just blocked me. Possibly because the ideas of Critical Race Theory and intersectionality have become mainstream, it appears that nuance has disappeared from debate. Instead, everything is very polarized. It is all black-and-white, nothing in between. It is all or nothing. Debates rarely go into actual substance and arguments. Messengers are shot, and people are labeled as horrible people for simply having a different opinion.

This exchange is what originally got me to write the previous blog. I wasn’t expecting even people from ‘my neck of the woods’ (techy/nerdy/science-minded people) to buy into this nonsense. In fact, I actually said at some point during the exchange that I thought he would be more rational about this, as his videos portray a very rational guy. He actually tried to deny that the videos he makes require rationality, as you can see.

At the time I thought that was rather strange, but now I think I may understand why. Critical Race Theory places things such as ‘rationality’, ‘objectivity’, and science in general under ‘whiteness’. So perhaps that’s why he was trying to deny it. He may have actually believed that he would be a ‘white supremacist’ or ‘racist’ or whatever if he were to admit that he is generally a rational person.

And he wasn’t the only one who ‘went Woke’. There’s someone else in ‘my neck of the woods’. I will not say who it is, because it was a private conversation, whereas the exchange with Alec was public, on Twitter, and is still available for everyone to read. But I can say that it is someone that most people who read this blog will be familiar with.

I can only say: you people are on the wrong side of history. This Woke nonsense is destroying our freedom and our communities. The Woke will force their opinions on you, as a totalitarian system, and if you do not comply, they will shut you out. There is no debate possible, your arguments will not be heard, there is no room for any kind of nuance or anything. Not even with people who you’ve known for years, and who should know better than to think you’re anywhere near a racist, fascist, sexist, homophobe, transphobe or whatever other superficial label they use to deflect any other opinions and shut people out. We are ‘dissidents’, and we must be ‘eliminated’.

Communism failed because it was based on an overly simplified view of the world, one that mainly saw the world as a struggle between classes. It ignored the fact that humans are individuals, and individuals have their flaws and weaknesses. People aren’t all equal, and you can’t force them to be.

The Woke are making a very similar mistake: Critical Race Theory/Intersectionality is again a very simplified view of the world, only marginally different from the communist one. This time the world is seen as a power struggle between various ‘characteristics’ on the intersectional grid (such as gender, race, sexual preference and whatnot). And they again want to make all people equal, this time by forcing equity between groups. Again, this can only be done by force, and it will fail, because the view of people is oversimplified, and the intersectional grid is a flawed view of society and humanity.

And I hope I have explained why things like ‘white supremacy’ are completely foreign to us Europeans, and how totalitarian regimes, both fascist and communist, hit far closer to home for us. And therefore why labels like ‘fascist’, ‘white supremacist’, ‘racist’, ‘Nazi’ etc. are deeply insulting to us. They are also highly disrespectful to the millions of victims of those regimes. In Europe there are still many people who lost a lot of family to the Nazis or the communists. If you really were empathic, as you claim to be, and really cared about respect and tolerance, I wouldn’t even have to tell you, because your common sense would have already made you understand how terrible that kind of behaviour is. But you aren’t. You’re insensitive, ignorant, intolerant excuses for human beings.

Sargon of Akkad (who is also European) made a similar video on this, by the way:


The Cult of Wokeness

As you may know, I do not normally want to engage in any kind of political talk. I’m not entirely sure if you can even call this topic ‘political’, because free speech, science, rationality and objectivity are cornerstones of the Western world, and form the basis of the constitutions of most Western countries.

And as you may know, I have spoken out against pseudoscience before. And I have also been critical of deceptive marketing claims and hypes from hardware and software vendors, somewhat closer to home for me, as a software engineer. I value honesty, objectivity, rationality and science, because they have brought us so much over the course of history, and they can bring us so much more in the future (and with ‘us’ I mean all of humanity, because I am a humanist).

However, these days it seems that these values have come under pressure from a thing known as cancel culture. To make a very long and complicated story short, there is currently a “Woke” cult, which bases itself on identity politics and Critical Race Theory. In short, they think within a hierarchy of oppressor and oppressed identity groups. Any ‘oppressing’ group is not allowed to have any say or opinion on any ‘oppressed’ group. That is their simplistic view of ‘social justice’, ‘racism’, ‘sexism’ and related topics.

It is somewhat of a combination of postmodernist thinking and neo-Marxism. It is rather difficult to explain it all in just a few sentences, but the basic concept is that they see everything as a ‘social construct’. So man-made. Which also means that they can ‘deconstruct’ these things. They see language as a way to construct and deconstruct things. Basically, society works a certain way because of human behaviour, and language is a big part of that. By redefining language, you can ‘deconstruct’ certain behaviour, if that makes sense. It is pseudoscience of course.

A common example is the redefinition of ‘racism’, into something that is defined by what the ‘victim’ experiences. By turning this definition around, they can now argue that you can be racist even if you didn’t intend to, because that no longer matters. If someone claims they have ‘experienced racism’, then it is true, and you must be a racist. They extend this to a concept of ‘institutional racism’, where just as with racism, it’s never entirely clear what an ‘institution’ is, but again it does not matter, because as long as a ‘victim’ has ‘experienced institutional racism’, then it must be true, and therefore institutional racism must exist, even if it can’t or won’t be defined.

In general, that is the modus operandi of this Woke cult: they favour feelings and emotions over facts. In other words, they value subjectivity over objectivity. I suppose you understand how that affects the world as we know it, especially science and technology. This can go as far as them not accepting facts at all: since objectivity does not exist, facts are always subjective; they are a ‘social construct’ as well. They claim that other people can have ‘other ways of knowing’ (which is basically a way of saying ‘magic’). Recently, there was even a discussion of how “2+2=4” is not always true. For some people it could be “2+2=5”.

This is just a short introduction, but I urge you to dig into this more. There are various online sources. A good starting point is the site New Discourses. Another good source is Dr. Jordan B. Peterson. He has put up a short page on postmodernism and Marxism on his website. You can also find various of his talks on the subject on YouTube and such.

Online there are many Social Justice Warriors who will attack anyone with a wrong ‘opinion’. They don’t do this by using free speech, as in engaging in a conversation and exchanging viewpoints. They do this by basically drowning out these people. A mob mentality. They try to ‘cancel’ these people, to deplatform them.

This also leads to virtue signalling, where people post certain opinions for no apparent reason, to show they’re ‘on the good side’ (probably because they’re afraid to get cancelled themselves).

I started noticing that last thing on Twitter over the past year or so. I mostly follow tech-related people. And it occurred to me that quite a few people would post rainbow flags, and discuss trans rights and things. So I started wondering “why are they doing this? Are there so many gay/trans people in tech? I have been following this person for quite a while, and afaik they’re neither gay nor trans or anything, so what gives?”

Apparently this Woke-cult has been growing in the liberal arts colleges in the USA for many years, and it is now coming out and trying to take over the world (some ‘academics’ are part of this: they have the credentials, but their work does not meet scientific standards, Robin DiAngelo and her book “White Fragility” being a case in point). The Black Lives Matter movement and Antifa are the most visible manifestations of this cult at the moment. And they are trying to deconstruct many parts of society.

They want to ‘decolonize’ society, and are even attacking things like mathematics now. They claim it is a ‘social construct’ that manifests white supremacy. They want to remove the objectivity and ‘rehumanize’ mathematics. Does that sound crazy? Yes, it does. But I’m not making this up, as you can see.

Mathematics is perhaps the most abstract phenomenon you can think of, and it is completely unbiased toward any human. It is just pure logic and facts. It led to computers, which use mathematics to perform all sorts of tasks, again purely with logic (arithmetic) and facts (data). Entirely unbiased toward any human. And now you are proposing to look at the race and/or (ethnic) background of children to somehow teach them different kinds of mathematics? Firstly, that is a racist thing to do. Secondly, it destroys mathematics, because it will no longer be a universal, unbiased language. The paper claims that it is merely a myth that mathematics is objective and culture-free. Yet it gives no explanation whatsoever, let alone a proof, that this would be a myth.

If anything, I’d say there’s plenty of proof around. So much of our technology works on the basis of mathematical principles. And that same technology works all over the world. There are people all over the world who understand these same mathematical principles, regardless of their race, background, culture or anything.

The issue with these things is that from a distance, they sound noble, but when you dig deeper, things are not quite what they seem. Eventually, most people will (hopefully) reach their Woke breaking point. Make sure you know your boundaries, and know when those lines are crossed, and act accordingly.

Anyway, there are many different instances of this Woke-cult, and we have to stop it. We have to prevent it from taking over our world and destroying everything we’ve worked so hard to create. So, if you were not aware yet, then hopefully you are now, and hopefully you understand that you need to get to grips with what this Woke-cult is, so that you can recognize it. Note that it is also very much in the mainstream media these days. Look out for things like ‘diversity’ and ‘inclusivity’.

The New York Times, for example, is a media outlet that has been taken over entirely by the Woke-cult. Bari Weiss resigned there recently, and she published her resignation letter, which speaks volumes. The same goes for the Washington Post and many other papers, and for CNN as well. Once you get a feeling for what to look for, you should pick up on Woke media quickly. They basically all have a single viewpoint, and their articles are completely interchangeable. There are no real opinion pieces anymore, just propaganda.

It’s gone so far that some media, most notably the Australian Spectator, are actually promoting themselves as “Woke-free” media:

So, let us fight the good fight, for all of humanity!
