Migrating to .NET Core: the future of .NET.

More than 20 years ago, Microsoft introduced their .NET Framework: a reaction to Java and its use of a virtual machine and dynamic (re)compilation of code for applications and services. Unlike Java, where the virtual machine was tied to a single programming language, also named Java, Microsoft opened up their .NET virtual machine to a variety of languages. They also introduced a new language of their own: C#, a modern language similar to C++ and Java, which allowed Microsoft to introduce new features easily, because they controlled the standard.

Then, in 2016, Microsoft introduced .NET Core, a sign of things to come (and a sign of confusion, because we used to have only one .NET, and now we had to distinguish between ‘Framework’ and ‘Core’). Where the original .NET Framework was mainly targeted at Windows and Intel x86 or compatible machines, .NET Core was aimed at multiple platforms and architectures, like Java before it. Microsoft also moved to an open source approach.

This .NET Core was not a drop-in replacement, but a rewrite/redesign. It had some similarities to the classic .NET Framework, but was also different in various ways, and would be developed alongside the classic .NET Framework for the time being, as a more efficient, more portable reimplementation.

On April 18th 2019, Microsoft released version 4.8 of the .NET Framework which would be the last version of the Framework product line. On November 10th 2020, Microsoft announced .NET 5. This is where the Framework and Core branches would be joined. Technically .NET 5 is a Core branch, but Microsoft now considered it mature enough to replace .NET 4.8 for new applications.

As you may know from my earlier blogs, I always say you should keep an eye on new products and technologies developing, so this would be an excellent cue to start looking at .NET Core seriously. In my case I had already used an early version of .NET Core for a newly developed web-based application sometime in late 2016 to early 2017. I had also done some development for Windows Phone/UWP, which is also done with an early variation of the .NET Core environment, rather than the regular .NET Framework.

My early experiences with .NET Core-based environments were that it was .NET, but different. You could develop with the same C# language, but the environment was different. Some libraries were not available at all, and others were similar to the ones you knew from the .NET Framework, but not quite the same, so you might have to use slightly different namespaces, objects, methods or parameters to achieve the same results.

However, with .NET 5, Microsoft claims that it is now ready for prime time, also on the desktop, supporting Windows Forms, WPF and whatnot, with the APIs being nearly entirely overlapping and interchangeable. Combined with that is backward compatibility with existing code, targeting older versions of the .NET Framework. So I figured I would try my hand at converting my existing code.

I was getting on reasonably well, when Microsoft launched .NET 6 in November, together with Visual Studio 2022. This basically makes .NET 5 obsolete. Support for .NET 5 will end in May 2022. .NET 6 on the other hand is an LTS (Long-Term Support) version, so it will probably be supported for at least 5 or 6 years, knowing Microsoft. So, before I could even write this blog on my experiences with .NET 5, I was overtaken by .NET 6. As it turns out, moving from .NET 5 to .NET 6 was as simple as just adjusting the target in the project settings, as .NET 6 just picks up where .NET 5 left off. And that is exactly what I did as well, so we can go straight from .NET 4.8 to .NET 6.

You will need at least Visual Studio 2019 for .NET 5 support, and at least Visual Studio 2022 for .NET 6 support. For the remainder of this blog, I will assume that you are using Visual Studio 2022.

But will it run Cry… I mean .NET?

In terms of support, there are no practical limitations. With .NET 4.7, Microsoft moved the minimum OS support to Windows 7 with SP1, and that is still the same for .NET 6. Likewise, .NET Framework supports both x86 and x64, and .NET 6 does the same. On top of that, .NET 6 offers support for ARM32 and ARM64.

Sure, technically .NET 4 also supports IA64 (although with certain limitations, such as no WPF support), whereas .NET 6 does not, but since Windows XP was the last regular desktop version to be released for Itanium, you could not run the later updates of the framework anyway. If you really wanted, you could get Windows Server 2008 R2 SP1 on your Itanium, as the latest possible OS. Technically that is the minimum for .NET 4.8, but I don’t think it is actually supported. I’ve only ever seen an x86/x64 installer for it. Would make sense, as Microsoft also dropped native support for Itanium after Visual Studio 2010.

So assuming you were targeting a reasonably modern version of Windows with .NET 4.8, either server (Server 2012 or newer) or desktop (Windows 7 SP1 or newer), and targeting either x86 or x64, then your target platforms will run .NET 6 without issue.

Hierarchy of .NET Core wrapping

Probably the first thing you will want to understand about .NET Core is how it handles its backward compatibility. It is possible to mix legacy assemblies with .NET Core assemblies. The .NET 6 environment contains wrapper functionality which can load legacy assemblies and automatically redirect their references to the legacy .NET Framework to the new .NET environment. However, there are strict limitations. There is a strict hierarchy, where .NET Core assemblies can reference legacy assemblies, but not vice versa. So the compatibility only goes one way.

As you probably know, the executable assembly (the .exe file) contains metadata which determines the .NET virtual machine that will be used to load the application. This means that a very trivial conversion to .NET 6 can be done by only converting the project of your solution that generates this executable. This will then mean the application will be run by the .NET 6 environment, and all referenced assemblies will be run via the wrapper for .NET Framework to .NET 6.

In most cases, that will work fine. There are some corner cases however, where legacy applications may reference .NET Framework objects that do not exist in .NET 6, or use third-party libraries that are not compatible with .NET 6. In that case, you may need to look for alternative libraries. In some cases you may find that there are separate NuGet packages for classic .NET Framework and .NET Core (such as with CefSharp, which has separate CefSharp.*.NETCore packages). Sometimes there are conversions of an old library done by another publisher.

And in the rare case where you can not find a working alternative, there is a workaround, which we will get into later. But in most cases, you will be fine with the standard .NET 6 environment and NuGet packages. So let’s look at how to convert our projects. Microsoft has put up a Migration Guide that gives a high-level overview, and also provides some crude command-line tools to assist you with converting. But I prefer to dig into the actual differences of project files and things under the hood, so we have a proper understanding, and can detect and solve problems by hand.

Converting projects

The most important change is that project files now use an entirely different XML layout, known as “SDK-style projects”. Projects now use ‘sensible defaults’, and you opt-out of things, rather than opt-in. So your most basic project file can look as simple as this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>
</Project>

So you merely need to tell Visual Studio what kind of project it is (e.g. “Library” or “Exe”), and which framework you want to target. This new project type can also be used for .NET 4.8 or older frameworks, so you could convert your projects to the new format first, and worry about the .NET 6-specific issues later.
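For illustration, a project converted to the new format but still targeting .NET 4.8 could look like this (the classic framework uses the “net48” target moniker; the property values here are just an example):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- "Library" for a DLL, "Exe" or "WinExe" for an executable -->
    <OutputType>Library</OutputType>
    <!-- Still targeting the classic .NET Framework 4.8 -->
    <TargetFramework>net48</TargetFramework>
  </PropertyGroup>
</Project>
```

Once everything builds in the new format, moving on to .NET 6 is then mostly a matter of changing “net48” to “net6.0”.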

What happens here is that by default, the project will include all files in the project directory and any subdirectories, and will automatically recognize standard files such as .cs and .resx, interpreting them as the correct type. While it is possible to set the EnableDefaultItems property to false, and go back to the old behaviour of explicitly listing every file included, I would advise against it, for at least two reasons:

  • Your project files remain simple and clean when all your files are included automatically.
  • When files and folders are included automatically, it more or less forces you to keep your folders clean: no leftover files that are not relevant to the project, or that belong somewhere other than the folder containing the code.

So this type of project will force you to exclude files and folders that should not be used in the project, rather than including all files you need.
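Excluding is done with Remove items in the project file. A small sketch of what that can look like (the folder names are purely hypothetical):

```xml
<ItemGroup>
  <!-- Example: keep an obsolete folder out of the compilation -->
  <Compile Remove="OldCode\**\*.cs" />
  <!-- Example: exclude non-code files that happen to live in the tree -->
  <None Remove="notes\**" />
</ItemGroup>
```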

I would recommend just backing up your old project files and replacing them with this new ‘empty’ project file, then loading it in Visual Studio (though not right away: you may want to read about some possible issues, like with NuGet packages, below). You will immediately see which files it already includes automatically. If your projects are clean enough (merely consisting of .cs and .resx files), they should be more or less correct automatically. From there on, you simply need to add the references back: to other projects, to other assemblies, and to NuGet packages. And you may need to set ‘copy to output’ settings for non-code files that should also be in the application folder.

As mentioned above, you probably want to start by just converting the project for your EXE, and get the application building and running that way, with all the other projects running via the .NET 4-to-6 compatibility wrapping layer. Then you will want to work your way back through the references. A good aid is to display the project build order, and work from the bottom to the top of the list, converting the projects one by one, and keeping the application in a working state at every step. Right-click your project in the Solution Explorer, and choose “Build Dependencies->Project Build Order…”:

The solution format has not been modified, so you do not need to do anything there. As long as your new projects have the same path/filename as the old ones, they will be picked up by the solution as-is.

Now to get to some of the details you may run into.

NuGet issues

NuGet packages were originally more or less separate from the project file, and stored in a separate packages.config file. The project would reference them as normal references. NuGet was a separate process that had to be run in advance, in order to import the packages into the NuGet folder, so that the references in the project would be correct.

Not anymore: NuGet packages are now referenced directly in the project, with a PackageReference tag. MSBuild can now also import the NuGet packages itself, so no separate tool is required anymore.

This functionality was also added to the old project format. So I would recommend first converting your NuGet packages to PackageReference entries in your project, getting rid of the packages.config file.
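To illustrate the difference, a package entry (using Newtonsoft.Json purely as an example) moves from packages.config:

```xml
<packages>
  <package id="Newtonsoft.Json" version="13.0.1" targetFramework="net48" />
</packages>
```

to a PackageReference directly in the project file:

```xml
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
</ItemGroup>
```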

This also implies that if you build your application not from Visual Studio itself, but via an automated MSBuild process, such as a build server (Jenkins, Bamboo, TeamCity or whatnot), you may need to modify that process: replace the NuGet stage with an MSBuild stage that restores the packages (running MSBuild with the -t:restore switch).

So I would recommend first converting your projects from packages.config to PackageReference, and getting your build process in order, before converting the projects to the new format. Visual Studio can help you with this. In the Solution Explorer, expand the tree view of your project, go to the References-node, right-click and choose “Migrate packages.config to PackageReference…”:

AssemblyInfo issues

Another major change in the new project format is that by default, it generates the AssemblyInfo from the project file, rather than from an included AssemblyInfo.cs file. This will result in compile errors when you also have an AssemblyInfo.cs file, because a number of attributes will be defined twice.

Again, you have the choice of either deleting your AssemblyInfo.cs file (or at least removing the conflicting attributes), and moving the info into the project file, or you can change the project to restore the old behaviour.

For the latter, you can add the GenerateAssemblyInfo setting to your project, and set it to false, like this:

<PropertyGroup>
  <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
</PropertyGroup>

Limitations of .NET Core

So, .NET is now supported on platforms other than Windows, such as Linux and macOS? Well, yes and no. It’s not like Java, where your entire application can be written in a platform-agnostic way. It’s more like there is a lowest common denominator for the .NET 6 environment, which is supported everywhere, while various additional frameworks/APIs/NuGet packages are only available on some platforms.

In the project example above, I used “net6.0” as the target framework. This is actually that ‘lowest common denominator’. There are various OS-specific targets. You will need to use those when you want to use OS-specific frameworks, such as WinForms or WPF. In that case, you need to set it to “net6.0-windows”. Note that this target framework will also affect your NuGet packages. You can only install packages that match your target.

There is also a hierarchy for target frameworks: the framework “bubbles up”. So a “net6.0” project can only import projects and NuGet packages that are also “net6.0”. As soon as there is an OS-specific component somewhere, like “net6.0-windows”, then all projects that reference it, also need to be “net6.0-windows”.

This can be made even more restrictive by also adding an OS version at the end. In “net6.0-windows”, version 7 or higher is implied, so it is actually equivalent to “net6.0-windows7.0”. You can also use “net6.0-windows10.0” for example, to target Windows 10 or higher only.
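In the project file, this all comes down to the TargetFramework property. A sketch of the Windows-specific case (note that WinForms and WPF also require their own opt-in properties):

```xml
<PropertyGroup>
  <OutputType>WinExe</OutputType>
  <!-- Windows-only target; equivalent to net6.0-windows7.0 -->
  <TargetFramework>net6.0-windows</TargetFramework>
  <!-- Opt in to the desktop frameworks as needed -->
  <UseWindowsForms>true</UseWindowsForms>
  <UseWPF>true</UseWPF>
</PropertyGroup>
```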

In practice this means that if you want to reuse your code across platforms, you may need to define a platform-agnostic interface layer with “net6.0”, to abstract the platform differences away. Then you can implement different versions of these interfaces in separate projects, targeting Windows, Linux and macOS.
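As a sketch of that idea (all names here are hypothetical), the interface lives in a “net6.0” project, and each platform-specific project implements it:

```csharp
// In the platform-agnostic "net6.0" project:
public interface IPlatformServices
{
    string GetConfigFolder();
}

// In a "net6.0-windows" project:
public class WindowsPlatformServices : IPlatformServices
{
    public string GetConfigFolder() =>
        System.Environment.GetFolderPath(
            System.Environment.SpecialFolder.ApplicationData);
}
```

The application wires up the right implementation at startup, and the bulk of the code only ever sees IPlatformServices.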

Separate x86 and x64 runtimes

Another difference between .NET 4.8 and .NET 6 is that the runtimes are now separated into two different installers, where .NET 4.8 would just install both the x86 and x64 environment on x64 platforms.

This implies that a 64-bit machine may not have a 32-bit runtime installed, and as such can only run code that specifically targets x64 (or AnyCPU). That may not matter for you if you already had separate builds for 32-bit and 64-bit releases (or had dropped 32-bit already, and target 64-bit exclusively, as we should eventually do). But if you had a mix of 32-bit and 64-bit applications, because you assumed that 64-bit environments could run the 32-bit code anyway, then you may need to go back to the drawing board.

Of course you could just ask the user to install both runtimes, or install both automatically. But I think it’s better to try and keep it clean, and not rely on any x86 code at all for an x64 release.

Use of AnyCPU

While on the subject of CPU architectures, there is another difference with .NET 6, and that relates to the AnyCPU target. In general it still means the same as before: the code is compiled for a neutral target, and can be run on any type of CPU.

There is just one catch, and I’m not sure what the reasoning is behind it. That is, for some reason you cannot run an Exe built for AnyCPU on an x86 installation. The runtime will complain that the binary is not compatible. The same binary will run fine on an x64 installation.

I have found that a simple workaround is to build an Exe that is specifically configured for x86. Any assemblies that you include can be built with AnyCPU, and will work as expected.

It is a small glitch, but easy enough to get around.

Detecting and installing .NET Core

Another problem I ran into, as .NET Core is still quite a fresh platform, is that it may not be supported by the environment that you create your installers with. In my case I had installers built with the WiX toolset. This does not come with out-of-the-box detection and installation of any .NET Core runtimes yet. What’s worse, the installer itself relies on .NET in order to run, and custom code is written against the .NET Framework 4.5.

This means that you would need to install the .NET Framework just to run your installer, while your application needs the .NET 6 runtime, and the .NET Framework is not required at all once the application is installed. So that is somewhat sloppy.

Mind you, Microsoft includes a stub in every .NET Core binary that generates a popup for the user, and directs them to the download page automatically:

So, for a simple interactive desktop application, that may be good enough. However, for a clean, automated installation, you will want to take care of the installation yourself.

I have not found a lot of information on how to detect a .NET Core installation programmatically. What I have found is that Microsoft recommends using the dotnet command-line tool, which has a --list-runtimes switch to report all runtimes installed on the system. Alternatively, they say you can scan the installation folders directly.

As you may know, the .NET Framework could be detected by looking at the relevant registry key. With .NET Core I have not found any relevant registry keys. I suppose Microsoft deliberately chose not to use the registry, in order to have a more platform-agnostic interface. The dotnet tool is available on all platforms.

Also, a quick experiment told me that apparently the dotnet tool also just scans the installation folders. If you rename the folder that lists the version, e.g. changing 6.0.1 to 6.0.2, then the tool will report that version 6.0.2 is installed.

So apparently that is the preferred way to check for an installation. I decided to write a simple routine that executes dotnet --list-runtimes and then just parses the output into the names of the runtimes and their versions. I wrapped that up in a simple statically linked C++ program (compiled to x86), so it can be executed on a bare-bones Windows installation, with no .NET on it at all, neither Framework nor Core. It will then check and install/upgrade the .NET 6 desktop runtime. I also added a simple check via GetNativeSystemInfo() to see if we are on an x86 or x64 system, so it selects the proper runtime for the target OS.
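My routine is in C++, but the same idea can be sketched in C#. Each line of the dotnet --list-runtimes output has the form “Name Version [Path]”, so the parsing is straightforward (this is a sketch, not production code):

```csharp
using System;
using System.Diagnostics;

static class DotNetDetect
{
    // Returns true if a .NET 6 desktop runtime appears to be installed.
    public static bool HasDesktopRuntime6()
    {
        var psi = new ProcessStartInfo("dotnet", "--list-runtimes")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        try
        {
            using var process = Process.Start(psi);
            string output = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            foreach (var line in output.Split('\n'))
            {
                var parts = line.Split(' ');
                if (parts.Length >= 2 &&
                    parts[0] == "Microsoft.WindowsDesktop.App" &&
                    parts[1].StartsWith("6."))
                    return true;
            }
        }
        catch (System.ComponentModel.Win32Exception)
        {
            // dotnet is not on the PATH: no .NET Core installed at all
        }
        return false;
    }
}
```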

Workarounds with DllExport/DllImport

Lastly, I want to get into some more detail on interfacing with legacy .NET assemblies, which are not directly compatible with .NET 6. I ran into one such library, which I believe made use of the System.Net.Http.HttpClient class. At any rate, it was one of the rare cases where the compatibility wrapper failed, because it could not map the calls of the legacy code onto the equivalent .NET 6 code, since it is not available.

This means that this assembly could really only be run in an actual .NET Framework environment. Since the assembly was a closed-source third-party product, there was no way to modify it. But there are ways around this. What you need is some way to run the assembly inside a .NET Framework environment, and have it communicate with your .NET 6 application.

The first idea I had was to create a .NET Framework command-line tool, which I could execute with some command-line arguments, and parse back its output from stdout. It’s a rather crude interface, but it works.

Then I thought about the UnmanagedExports project by Robert Giesecke, that I had used in the past. It allows you to add [DllExport] attributes to static methods in C# to create DLL exports, basically the opposite of using [DllImport] to import methods from native code. You can use this to call C# code from applications written in non-.NET environments such as C++ or Delphi. The result is that when the assembly is loaded, the proper .NET environment is instantiated, regardless of whether the calling environment is native code or .NET code.

Mind you, that project is no longer maintained, and there’s a similar project, known as DllExport, by Denis Kuzmin, which is up-to-date (and also supports .NET Core), so I ended up using that instead.

So I figured that if this works when you call a .NET Framework 4.8 assembly from native C++ code, it may also work if you call it from .NET 6 code. And indeed it does. It’s still a bit messy, because you still need a .NET 4.8 installation on the machine, and you will be instantiating two virtual machines, one for the Core code and one for the Framework code. But the interfacing is slightly less clumsy than with a command-line tool.

So in the .NET 4.8 code you will need to write some static functions to export the functionality you need:

class Test
{
    [DllExport]
    public static int TestExport(int left, int right)
    {
        return left + right;
    }
}

And in the .NET 6 code you will then import these functions, so you can call them directly:

// "Test.dll" stands for the name of the legacy assembly built above
[DllImport("Test.dll")]
public static extern int TestExport(int left, int right);

public static void Main()
{
    int left = 2, right = 3;
    Console.WriteLine($"{left} + {right} = {TestExport(left, right)}");
}


That should be enough to get you off to a good start with .NET 6. Let me know how you get on in the comments. Please share if you find other problems when converting. Or even better, if you find solutions to problems.

Posted in Software development, Software news

Running anything Remedy/Futuremark/MadOnion/UL in 2020

There has always been a tie between Futuremark and the demoscene. It all started with the benchmark Final Reality, released by Remedy Entertainment in 1997.

Remedy Entertainment was a gaming company, founded by demosceners from the legendary Future Crew and other demo groups. Remedy developed an early 3D acceleration benchmark tool for VNU European Labs, known as Final Reality, which showed obvious links to the demoscene: the demo-like design of its parts, the name “Final Reality” referencing the Future Crew demo Second Reality, and the inclusion of a variation of Second Reality’s classic city scene.

After this first benchmark, a separate company focused on benchmarking was founded, which was to become Futuremark. After releasing 3DMark99, they changed their name to MadOnion.com. Then after releasing 3DMark2001, they changed back to Futuremark Corporation. Eventually, after being acquired in 2018 by UL, they changed the name to UL Benchmarks.

With every major milestone of PC hardware and software, generally more or less coinciding with new versions of the DirectX API and/or new generations of hardware, they released a new benchmark to take advantage of the new features, and push it to the extremes.

Traditionally, each benchmark also included a demo mode, which added a soundtrack, and generally had extended versions of the test scenes, and a more demo-like storytelling/pacing/syncing to music. As a demoscener, I always loved these demos. They often had beautiful graphics and effects, and great music to boot.

But, can you still run them? UL Benchmarks was nice enough to offer all the legacy benchmarks on their website, complete with registration keys: Futuremark Legacy Benchmarks – UL Benchmarks

Or well, they put all of them up there, except for Final Reality (perhaps because it was released by Remedy, not by Futuremark/MadOnion). But I already linked that one above.

I got all of them to run on my modern system with Windows 10 Pro x64 on it. I’ll give a quick rundown of how I got them running, starting from the newest.

3DMark11, 3DMark Vantage, 3DMark06, 3DMark05 and 3DMark03 all installed and ran out-of-the-box.

3DMark2001SE installed correctly, but the demo would not play. Looking at the error log revealed that it had problems playing back sound (which explains why regular tests would work, they have no sound). But when you select the Compatibility mode for Windows 8, that fixes the sound, and the whole demo runs fine.

3DMark2000 was a bit more difficult. On my laptop it installed correctly, but on my desktop, the installer hung after unpacking. The trick is to go to Task Manager, find the setup.exe process in the Details tab, right-click it and select “Analyze wait chain”. It will tell you what the process is waiting for. In my case it was “nvcontainer.exe”. So I killed all processes by that name, and the setup continued automatically.

Now 3DMark2000 was installed properly, but it still did not work correctly. There is a check in there, to see if you have at least 4MB video memory. Apparently on a modern video card with multiple GBs of memory, the check overflows, and thinks you have less than 4MB. It then shows a popup, and immediately closes after you click on it. So I disassembled the code, found the check, and simply patched it out. Now it works fine.

If you want to patch it yourself, use a hex editor and change the following bytes in 3DMark2000.exe:

Offset 69962h: patch 7Dh to EBh
Offset 69979h: patch 7Dh to EBh
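If you prefer not to do this by hand, the patches are easy to script. Here is a small C# sketch (make a backup of the executable first; 7Dh is a conditional jump, JGE, and EBh an unconditional JMP, so the patch simply skips the failing check):

```csharp
using System;
using System.IO;

class BytePatcher
{
    static void Patch(string path, long offset, byte expected, byte replacement)
    {
        using var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite);
        fs.Seek(offset, SeekOrigin.Begin);
        int current = fs.ReadByte();
        if (current != expected)
            throw new InvalidOperationException(
                $"Found {current:X2}h at {offset:X}h, expected {expected:X2}h; wrong file version?");
        fs.Seek(offset, SeekOrigin.Begin);
        fs.WriteByte(replacement);
    }

    static void Main()
    {
        Patch("3DMark2000.exe", 0x69962, 0x7D, 0xEB);
        Patch("3DMark2000.exe", 0x69979, 0x7D, 0xEB);
    }
}
```

Run it from the 3DMark2000 installation folder; the expected-byte check guards against patching the wrong version of the file.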

XL-R8R dates from the same era as 3DMark2000, and I ran into the same issue of setup.exe getting stuck, and having to analyze the wait chain to make it continue. It did not appear to have a check for video memory, so it ran fine after installation.

3DMark99Max was more difficult still. The installer is packaged with a 16-bit version of a WinZip self-extractor. You cannot run 16-bit Windows programs on a 64-bit version of Windows. Luckily you can just extract the files with a program like 7-Zip or WinRAR, by right-clicking on 3DMark99Max.exe and selecting the option to extract it to a folder. From there, you can just run setup.exe, and it should install properly.

Like 3DMark2000, there’s also a check in 3DMark99Max that prevents it from running on a modern system. In this case, it tries to check for DirectX 6.1 or higher, and the check somehow mistakenly thinks that the DirectX version is too low on a modern system. Again, a simple case of disassembling, finding the check, and patching it out.

If you want to patch it yourself, use a hex editor and change the following byte in 3dmark.exe:

Offset 562CCh: patch 75h to EBh

Final Reality then, the last one. Like 3DMark99Max, it has a 16-bit installer. However, in this case the trick of extracting it does not help us. You can extract the setup files, but in this case the setup.exe is still 16-bit, so it still cannot run. But not to worry, there are ways around that. Initially I just copied the files from an older installation under a 32-bit Windows XP. But an even better solution is otvdm/winevdm.

In short, x86 CPUs can generally only switch between two modes on-the-fly under Windows. So a 32-bit version of Windows can switch between 32-bit and 16-bit environments, which allows a 32-bit version of Windows to run a 16-bit “NTVDM” (NT Virtual DOS Machine) environment, in which it runs DOS and 16-bit Windows programs. For 64-bit versions of Windows, there’s a similar concept, known as Windows-on-Windows (WOW64). This allows you to run 32-bit Windows programs. The original NTVDM for DOS and Win16 programs is no longer available.

Otvdm works around this by using software emulation for a 16-bit x86 CPU, and then uses part of the Wine codebase to translate the calls from 16-bit to 32-bit. This gives you very similar functionality to the real NTVDM environment on a 32-bit system, and allows you to run DOS and Win16 applications on your 64-bit Windows system, albeit with limited performance, since the CPU emulation is not very fast. It is not a sandbox environment like most emulators, however: it actually integrates with the host OS via 32-bit calls.

In our case, we can simply run the Final Reality installer via otvdm. Just download the latest stable release from the otvdm Github page, and extract it to a local folder. Then start otvdmw.exe, and browse to your fr101.exe installer file. It will then install correctly, directly onto the host system.

Funnily enough, there appear to be no compatibility problems with this oldest benchmark of them all, so that rounds it all off.

Here is a video showing all the 3DMark demos:

And here is the XL-R8R demo:

And finally the Final Reality demo:

Posted in Direct3D, Oldskool/retro programming, Software development

The Trumpians love their children too

After expressing my worries on the development of extreme leftism and Wokeness, I thought I should also express my concerns about the aftermath of the elections.

What worries me is how people responded to Trump’s loss, both in the US and in the rest of the world. I have seen images of people going out on the streets, cheering and chanting, and attacking Trump dolls and such.

There’s also a site “Trump Accountability” that wants to attack Trump supporters.

As I grew up during the Cold War, and I saw the demise of Communism and its dictators, this sort of thing reminds me very much of those days.

The big difference is: the US was not under dictatorship, and although Trump may have lost the elections, a LOT of people voted for him. I suppose this is the result of 4 years of sowing hatred against a president and his politics. And now that Trump is gone, it seems they want to go after his supporters. But for what? It is a democracy, and these people simply cast their democratic vote. That’s how it works. If you start oppressing people with the ‘wrong’ vote, you are actually CREATING a dictatorship, not getting rid of one, oh the irony.

At the time of writing, Trump has received around 71 million votes, and Biden has received around 74 million votes. And that is what troubles me. Are these people serious about persecuting such a large group? There aren’t 71 million fascists, racists, or whatever you think in the US. That just doesn’t make sense at all. Most of these 71 million people are just normal people like you and me. They could be your neighbour, your hairdresser, your plumber, etc.

And that’s where I think things go wrong, badly. As a European, I live in a country that is FAR more leftist than the US. We are at Bernie Sanders level, if that. So theoretically I couldn’t be further removed from Trump/Republican/conservative voters. People who are generally quite religious, pro-life, anti gay-marriage etc. And then they are often patriotic. I’m not even American, let alone a patriot for that country. So in that sense I suppose I have very little if anything in common with these people, and my views are very different.

Nevertheless, I had some interesting talks with some of these people. I recall one discussion where a religious Republican sincerely did not understand how you can value life if you don’t believe in God. That’s interesting, I never even gave that any thought, since I’m not religious, yet I do value life. And I can understand that to them, if God didn’t create life, then they don’t see how life is in any way holy, or however you want to put it. Perhaps it is actually true that non-religious people value life less, who knows?

Thing is, they did make me think about it, and we had a discussion. I suppose my explanation is one of ‘theory of mind’: I know how it feels if I get hurt, and I know that I don’t want to die. So I can understand how that must feel for others as well, so I do not want to do that to them either. Which in some way comes back to what Christians already know: Don’t do unto others what you don’t want done unto you.

But the key here is that we could have this discussion, and we had mutual respect and understanding for our different views.

And I suppose that is also the problem with the people who are now cheering on the Democrat win… or actually Trump’s loss. While as a European, I may be closer to the Democrat political view than the Republican one, this is something that goes COMPLETELY against who I am, and how I want the world to be. I grew up with the values of tolerance and understanding. I suppose political views aren’t everything: even if I share your basic views, I cannot get behind you if I reject the way in which you actually conduct yourself (which I think goes against those very views anyway).

If half of the US cannot tolerate the other half simply for having different ideas on what is best for their country, then that is a recipe for disaster.

Getting back to the Cold War, the song Russians by Sting comes to mind:

Back when this song came out, the Cold War also set up the US against the USSR with lots of propaganda in the media. Not everything you heard or read was true. In this song, Sting makes some very good points. Mostly that Russians are just people like you and me. Their government may have a certain ideology, but most Russians just try to lead their lives and mind their own business, just as we do.

As he says:

“In Europe and America there’s a growing feeling of hysteria”

“There is no monopoly on common sense
On either side of the political fence
We share the same biology, regardless of ideology
Believe me when I say to you
I hope the Russians love their children too”

“There’s no such thing as a winnable war
It’s a lie we don’t believe anymore”

I think these lines still contain a lot of truth. There’s hysteria in the US as well now, fueled by the mainstream media and social media, much like in the Cold War back then.

No monopoly on common sense on either side of the political fence. That’s so true. You can’t say the Democrat voters have all the common sense and the Republican voters have none, just based on who won an election.

And indeed, he says “we share the same biology”, that is of course even more true for Democrats vs Republicans than it was for the US vs USSR situation, as both are Americans. They may even be related.

And the most powerful statement: “I hope the Russians love their children too”. Of course he was referring to nuclear war, and mutually assured destruction. But it is very recognizable: Russians are humans too, of course they love their children, they are just like us. And it’s the same with Democrats and Republicans.

So I hope this also remains only a Cold War between Democrats and Republicans, and both sides will accept the results, and try to find ways to come together again, understand and tolerate each other, and work together for a better world.

Update: Clearly I am not the only one with such concerns. Douglas Murray has also written an article about his concerns over this polarization and division, and its possible outcomes. I suggest you read it.

More update: Bret Weinstein and Heather Heying also comment on some of these anti-Trump sentiments and actions. And they make a good point about what the REAL left is, or is supposed to be (and as I said, that is also more or less my political position, I am by no means right-wing, certainly not by American standards), and how these far left people have lost the plot.

And another update: James Lindsay, one of the authors of the book Cynical Theories, which I mentioned before, has actually decided to vote for Donald Trump, despite being a liberal rather than a conservative/Republican. He explains in the video below how he sees Wokeness as possibly the biggest threat to the country, and how Biden is unlikely to stop its rise. So at least some people who voted Trump, aren’t actually Trump/Republican/conservative supporters, they just thought the alternative was worse.

And yet another update: Here Jordan B Peterson talks about how liberals and conservatives should listen to each other, and keep each other balanced. One side is not necessarily wrong, the other side is not necessarily right. They each have a different focus in life, and they need each other. Ideas may be good or bad depending on the situation in which they are applied. Very much the message I wanted to give. I will probably return to this in more detail in a future post.


The Cult of Wokeness, followup

The previous article was just meant as a quick overview and wake-up call. But I would like to say a few more things on the subject.

I have since read the book Cynical Theories by Helen Pluckrose and James Lindsay. I recommend that everyone reads this book, so that they are up to speed with the current Woke mindset. At the very least, I suggest you read a review of the book, to get a rough idea. The review by Simon Jenkins gives a good quick overview of the topics that the book discusses. I will also repeat my recommendation to read some of the articles and background information on the New Discourses site.

I would like to elaborate on two things. Firstly, there’s the pseudoscientific nature of it, which is what I am most concerned about, as I said before. Secondly, I also want to discuss some forms in which Woke has manifested itself in the real world.

Postmodernist philosophy

As you know I’ve done a write-up about the philosophy of science before. At university, this was taught in a number of courses in the first three years. I always took a liking to it. It is important to know what our current methods of science are exactly, and where they came from, how they evolved.

As you may have noticed, I did not cover postmodernism at all. That was not intentional. Postmodernism simply never crossed my path at the time. But now that it has, I went through my old university books and readers again, and indeed, there was no specific coverage of postmodernism at all. It seems that the only postmodernist philosopher that is referenced at all, is Paul Feyerabend.

Feyerabend is actually a somewhat controversial figure, as he wanted a sort of ‘anarchistic’ version of science, and rejected Popper’s falsification, for example. The university material I have only spends one paragraph on him, merely stating that purely rational science is one extreme view, with Feyerabend representing the other. They nuance it by saying that in practice, science operates somewhere in the gray area between these extremes.

And that brings me to the point I want to make. Postmodernism is extremely critical of society in general, and science specifically. There is some value to the ideas that postmodernism brings forward. At the same time, you should not take these ideas to the extreme. Also, the reason why they were not covered in the philosophy of science, is that they did not actually produce new knowledge or useful methods. So they did not add anything ‘tangible’ to science, they merely brought more focus to possible pitfalls of bias, political interest and other ideologies.

There is some merit to their idea of systems that can be ‘rigged’ by having a sort of bias built-in. A bias that you might be able to uncover by looking at the way that people talk about things, the ‘discourses’. That the system and the bias are ‘socially constructed’.

After all, with ‘politically correct’ language we are basically doing exactly that: we choose to use certain words, and avoid certain other words, to shift the perception (bias) of certain issues. So in that sense it is certainly possible to create certain ‘biases’ socially, and language is indeed the tool to do this.

However, they see everything as systems of power and hierarchy, and the goal of the system is always to maintain the position of power at the cost of the lesser groups (basically a very dystopian view, like in the book 1984 by George Orwell). That is not necessarily always the case. For example, science is not a system of social power. Its goal is to obtain (objective and universal) knowledge, not to benefit certain groups at the cost of others. Heck, if anything proves that beyond a shadow of a doubt, then it must be the main topic I normally cover on this blog: hardware and software. Scientists have developed digital circuits, transistors, computer chips, CPUs etc., and developed many tools, algorithms etc. to put this hardware to use. As a result, digital circuits and/or computers are now embedded in tons of devices all around you in everyday life, making life for everyone easier and better. Many people have jobs that exist solely because of these inventions. Everyone benefits in various ways from all this technology.

And I think that’s where the cynicism comes in. Postmodernists try to find problems of power-play and ‘oppression’ in every situation. That is indeed a ‘critical’ and ‘skeptical’ way of looking at things, but it’s not critical and skeptical in the scientific sense.

Where it goes wrong is when you assume that the possible problems you unearth in your close-reading of discourses are the only possible explanation, and therefore you accept them as the truth. I am not sure if the original postmodern philosophers such as Foucault and Derrida actually meant to take their ‘Theory’ this far. But their successors certainly have.

This is most clear in Critical Race Theory, which introduces the concept of ‘intersectionality’ (a term coined by Kimberlé Crenshaw). The basic assumption here is that the postmodern ‘Theory’ of a racist system is the actual, real state of the world. Therefore all discourses must be a power-play between races. That assumption is certainly not correct in every situation, and most probably not even in the majority of situations.

The concept of intersectionality itself is another idea that may have some merit, but like the ‘social construct theories’, it does not apply as an absolute truth. As I already said in the previous post, in short, intersectionality says that every person is part of any number of groups (such as gender, sex, sexual preference, race, etc). Therefore the prejudice against a person is also a combination of the prejudice against these groups. For example, a black woman is both black and a woman. Therefore she may receive prejudice for being black and for being a woman. But crucially, she will also receive prejudice for being a black woman. So intersectionality claims that prejudice against people is more than just the sum of the prejudices against the groups that they are part of. At the ‘intersections’ between groups, there are ‘unique’ types of prejudice felt only by people who are part of both groups.

So far, the concept of intersectionality makes sense. People can indeed be ‘categorized’ into various groups, and will be members of a collection of groups at a time. And some combinations of groups may lead to specific kinds of prejudice, discrimination and whatnot.

However, the problem with intersectionality and Critical (Race) Theory arises when you start viewing this intersectionality as the absolute truth, the entire reality, the one and only system. That is an oversimplification of reality. The common way of viewing people was as individuals: they may be part of certain groups, and may share commonalities with others, but they are still unique individuals, who have their own thoughts and make their own decisions. But viewing people through an intersectional lens leads to identity politics: people are essentially reduced to the stereotype of their intersectional position, and are all expected to think and act alike. And that obviously is taking things a step too far.

Another very serious problem is that instead of looking for rationality, objectivity, or fact, these concepts are denounced. The focus is put on the ‘lived experiences’ (anecdotal evidence) of these groups instead. In the intersectional hierarchy, the ‘lived experience’ of an oppressed group always takes precedence over that of an oppressing group. Therefore, a woman’s word is always to be believed over a man’s word, and a black person’s word is always to be believed over a white person’s word. If a woman says she experienced sexism, then it is considered a fact that there was sexism. If a black person says she experienced racism, then it is considered a fact that there was racism. Again, it is obvious how this can lead to false positives or exploitation of the system.

This is also where the system shows some of its obvious flaws and inconsistencies. Namely, these ‘lived experiences’ are subjective by definition, and as such, are viewed through the biased lens of the subject. This is exactly what caused people to develop the scientific method, to try and avoid bias, and reach objective views and rational explanations.

Postmodernism itself is supposed to be highly critical of biased discourses, but apparently bias is suddenly perfectly acceptable, and biased anecdotes are actually considered ‘true’ as long as the biased party is the one that is (subjectively) being ‘oppressed’. You just can’t make sense of this in any way. Intersectionality and Critical Race Theory are built on intellectual quicksand. It doesn’t make sense, and you can’t make sense of it, no matter how hard you try.

A nice example of how this ‘Theory’ can go wrong in practice can be found here, on this chart from New Discourses, under point 3:


As you can see, there are only two possible choices to make, and both can be problematized into a racist situation under Critical Race Theory. While these may be *possible* explanations, they aren’t necessarily correct. There are plenty of alternative, non-racist explanations possible. But not under Critical Race Theory.

And that is a huge problem: CRT sees racism everywhere, so you will run into a number of false positives. That does not seem very scientific. The only scientific value that postmodernism approaches could have, is to search for possible hypotheses. But you would still need to actually scientifically research these, in order to find out if they are correct. Instead, they are ‘reified’: assumed to be true. CRT assumes that “the system” is racist, and white people have all the power, by definition. An assumption, not a proven hypothesis. An assumption, that you are unable to prove scientifically, because the evidence simply is not there.

Woke in practice

First of all, perhaps I should define ‘Woke’ as an extreme form of political correctness. A lot of things are ‘whitewashed’ in the media by either not reporting them at all, or reporting them in a very biased way with ‘coded language’. On the other hand, some things are ‘blackwashed’ (is that even a term?) by grossly overstating things, or downright nefarious framing of things.

Now, one thing that really rubs me the wrong way, to say the least, is the way World War II, Hitler, Nazis, fascism etc. are being used in today’s discourse. And it only strengthens the view that we in Europe already had of the US: these people seem to have little or no clue about history or the rest of the world.

And I say “Europe” because that’s how they look at us. As if we’re just one country, like the US, and the actual countries in Europe are more like different ‘states’. In this Woke-era, it’s important to note that Europe is nothing like that. For starters, nearly every country has its own language. So as soon as I cross a border, it immediately becomes difficult to even talk to other people. And there are far more differences. Countries in Europe still have their own unique national identities, ethnicities if you like. And Europe is a very old continent, like Africa. So long before there were ‘countries’ and ‘borders’, there were different tribes, that each had their own unique languages and identities, ethnicities. There’s even a Wikipedia page on the subject (and also for Africa).

Of course, this also leads to people having stereotypes of these different countries, and making fun of them, or there being some kind of rivalry between them. Things that the Woke would probably call ‘racism’. Except, to the Woke, they’re all ‘white’ and ‘European’, or ‘black’ and ‘African’. So apparently there is a complexity to the real world that they just don’t understand. Probably because their country is only a few hundred years old, and only has a single language, and (aside from Native Americans) never had any tribes to speak of. All ethnicities just mostly blended together as they came from Europe and Africa, and settled in America, taking on the new American identity.

Speaking of getting things completely wrong… Apparently Americans refer to white as ‘Caucasian’. The first time I heard that was on some TV show, I suppose a description of a suspect or such: “Middle-aged male, Caucasian…” So I was surprised. What did they mean by ‘Caucasian’? I thought they meant he was probably of Russian descent or such, because it referred to the Caucasus, a mountain region in Russia. But when I looked it up, apparently it was a name used for ALL white people. Which NOBODY else uses.

If you look into the history of the term ‘Caucasian’, things get interesting. Apparently somewhere in the 18th century, anthropologists thought that there were 3 main races: ‘Caucasian’, ‘Mongoloid’ and ‘Negroid’. This theory has long been considered outdated, but apparently that didn’t stop Americans from using the term. And in fact, aside from wrongly using the term ‘Caucasian’ to denote ‘white skin colour’, there is some connotation attached to the term as well. Caucasians, or more specifically the ‘Circassian’ subtype of Caucasian people, were seen as the ‘most beautiful humans’ in some pseudoscientific racial theory. Well, from that sort of crazy stereotype, it’s only a small step towards ‘white supremacy’, I suppose.

Because, let me get this clear… To me, the only race that exists is the ‘human race’. As someone with a background in science/academia, I clearly support Darwin’s theory of evolution as the most plausible explanation we have (as does a large part of the Western world; the US is perhaps an exception, because it’s still quite religious, and people still believe in creationism, making evolution controversial there. It is not even remotely controversial in Western European countries). Combining archaeological findings of human fossils and evolution, the history of human life goes back to Africa, where humans evolved from apes.

Over time, these humans spread across the entire globe, and groups of humans in different parts of the world continued to evolve independently. This led them to adapt to their local environment, which explains why humans in the north developed lighter skin. In the north, there was less sun, and therefore less UV exposure and less vitamin D production, which meant that less melanin was advantageous. So in Africa, evolution prevented genetic variations with less melanin from being successful, but in other areas of the world, this constraint no longer held. Variation in eye and hair colour can be explained in a similar way, as these also depend on genetic variations and melanin levels.

So, this means that we are all descended from African people. It also means that skin colour variations are purely an adaptation to the environment, which can in no way be linked to any kind of perceived ‘superiority’, in terms of intelligence, behaviour or anything else. Skin colour is just that: skin colour.

What’s more, as humans developed better ways to travel, different groups that had evolved independently for many years would interact with each other again, so these separate evolutionary gene pools were mixed together again. So aside from any kind of ‘race’ based on skin colour being just some arbitrary point in evolution, even if you were to take such an arbitrary point in history, in practice most people would be a blend of these various arbitrary race definitions. For example, although the Neanderthal people are extinct, they have mixed with ‘modern’ humans, so various groups of people, mainly in Europe and Asia today, still carry certain Neanderthal-specific genes. It is believed that a genetic risk factor for Covid-19 can be traced back to these Neanderthal genes, for example.

The Neanderthals were a more primitive species of humans. It is not even clear whether they were capable of speech at all. Modern man belongs to the species Homo sapiens. And since Neanderthals never lived in Africa, they never mixed with African Homo sapiens. So African (‘black’) people are genetically the most ‘pure’ modern humans. European (‘white’), Asian and even Native American people carry the more primitive Neanderthal genes. So if you want to make any kind of ‘racial argument’, then based on the gene pool, ‘white superiority’ is a strange argument to make. After all, white people carry genes from a more primitive, archaic, extinct human species. Being extinct is hardly ‘superior’.

But there’s also a lot more recent mixing of genes. Because what some people call ‘white’ is basically everyone with a light skin colour. But that includes people with all different sorts of eye colours, hair colours, and also hair styles (straight, curly, frizzy etc). Which indicates that various gene pools, presumably from groups of people that evolved independently, have been mixed. To give a recent example, take the recently deceased guitar legend Eddie van Halen. People may judge him as ‘white’, based on his appearance. But actually his mother was from Indonesia, so Asian. You see how quickly this whole ‘race’ thing goes bad. If you can’t even tell from the appearance of a ‘white’ person that one of his parents was of a different so-called ‘race’, then imagine how hard it is to tell whether a ‘white’ person had any ancestry of a different ‘race’ two or more generations back.

So this whole idea of ‘race’ is just pseudoscience. It’s a social construct. Which is quite ironic, given that currently the Woke ‘antiracists’ are pushing a racial ideology. Which brings me closer to what I wanted to discuss. Because who were the last major group to push a pseudoscientific racial ideology? That’s right, the Nazis. They somehow believed that the “Aryan race” was superior to all others, and the Jews were the worst. Their interpretation of what ‘Aryan’ was, was basically white European people, ideally with blue eyes and blond hair. So in other words, it was basically a form of ‘white supremacy’. The Nazi Germans thought they were the ‘chosen people’, and since they considered themselves superior, obviously they had to take over the world.

Now what the Americans need to understand is that although most of Europe was white, and a large part of the population could pass for their idea of ‘Aryan’, they certainly were not interested in these ideas. The Germans went along because of years of propaganda and indoctrination by the Nazis. And even then, many Germans only went along because they were under a totalitarian regime, and they had little choice. It is unclear how many Germans outside the Nazi party itself actually subscribed to the Nazi ideology. Germany also didn’t have a lot of allies in WWII (and even though Italy was also fascist, and an ally, the Italians were actually reluctant to adopt the racist ideology. Racism was not originally part of fascism. It was Hitler who added the racist element, and pressured Mussolini into adopting it).

Which explains why WWII was a war: Germany actually had to invade most countries, in order to push their Nazi ideology and get on with the Holocaust. Even then, there was an active resistance in many occupied countries, who tried to hide Jews and sabotage the Germans.

My country was one of those, and it still bears the scars of the war. Various cities had parts bombed. My mother lived in a relatively large house, which led to a German soldier being stationed there for a while (presumably to make sure they were not trying to hide Jews in the house). Concentration camps were built here, some of which are still preserved today, lest we forget.

And obviously WWII was not won by the Nazis. The Allies, who were again mostly white Western nations, clearly did not approve of the Nazis and their genocide.

So, given this short European perspective on WWII-related history, hopefully you might understand that terms like ‘Nazi’, ‘fascist’, ‘white supremacy’ and antisemitism resonate deeply with us, in a bad way.

And these days, a lot of people just use these terms gratuitously, mainly to insult people they don’t agree with, and dehumanize them (which is rather ironic, as this is exactly what the Nazis did to the Jews). Hopefully you understand that we take considerable offense at this.

And if you think that’s just extreme, activist people, guess again. It even includes people who should know better, and should be capable of balanced, rational thought. Such as Alec Watson of Technology Connections.

I give you this Twitter discussion:

This was related to the ‘mostly peaceful protests’ in Portland, as you can see. Clearly I did not agree with the quoted tweet, because it presented a false dichotomy: yes, government should be serving the people, but there are certain cases where it may be justified to beat people up on city streets (in order to serve the people). Namely, to stop rioters/domestic terrorists or otherwise violent groups. In Europe we are very familiar with this sort of thing, mostly with the removal of squatters from occupied buildings (who tend to put up quite violent protests) or when groups of fans from different sports teams attack each other before, during or after a game.

After all, that is the concept of the ‘monopoly on violence’ that the government has, through organizations such as the police and the army. We have very strict laws on guns and other arms, so we actually NEED the government to protect law-abiding citizens from violent/criminal people. Therefore, beating people up on the streets is perfectly fine, if that is what it takes to stop and arrest these people, in order to protect the rest.

So what I saw happening in Portland was a perfectly obvious situation where the government should stop these riots with force. Nothing wrong with beating up people who were trying to set a police station on fire, and throwing fireworks at the police etc. They were being violent and destroying property.

But debate ensued about that as well. Apparently Alec and other people did not consider destruction of property to be ‘violence’. That is funny, since you can find dictionary definitions that do. Apparently the meaning of words is being redefined here. Postmodernism/Wokeism at play. Aside from that, there are laws that state that the government needs to protect the people AND their property.

They were in denial about the destruction anyway, so I had to link to some Twitter feeds from people who reported on it, such as Andy Ngo and Elijah Schaffer. But as you can see, even then they were reluctant.

The conversation turned to Antifa and how they were fighting ‘fascists’. This is perhaps a good place for the second episode of Western European history. The history of Marxism and communism.

Because as you might know, Marxism was developed by Karl Marx and Friedrich Engels in Germany in the 19th century, most notably by publishing The Communist Manifesto and the book Das Kapital. Various communist parties in various European countries were formed, who aimed to introduce communism by means of a revolution. The first successful revolution occurred in 1917 in Russia by the Bolsheviks, led by Vladimir Lenin. In 1922 they formed the Soviet Union, which expanded communism to other countries gradually, most notably after WWII. Namely, after Germany tried to invade the Soviet Union, Stalin pushed back hard, and eventually moved all the way up to Berlin, causing Hitler to commit suicide and forcing the Nazis to capitulate, before the Allies arrived.

Effectively, Soviet forces now occupied large parts of Eastern Europe, including a large part of Germany itself. Stalin converted these parts to communism and made them into satellite states of the Soviet Union. This also led to Germany being split up into the Western Bundesrepublik Deutschland and the Eastern Deutsche Demokratische Republik (the communist satellite state).

This lasted up to the early 90s. Which means that a considerable amount of European people either lived under communism, or lived near countries under communism. These communist countries were sealed off from the outside world, with the most notable example being the Berlin Wall. They were totalitarian states.

After this short introduction, now to get back to Antifa, which originally started in the 1920s in Germany. Which was around the same time that fascism arose in Europe.

Fascism started in Italy, under Mussolini, and was later adopted by Hitler. They had political parties with their own mobs/paramilitary groups, like a sort of ‘private army’, to intimidate political opponents and eventually get into power. Also of note is that they initially identified themselves as leftist/socialist (Nazi is short for Nationalsozialismus, the political identity of the NSDAP, the Nationalsozialistische Deutsche Arbeiterpartei). They were later classified as far-right, mainly because of their extreme nationalism, not because of their economic policies.

Communist parties used similar mob/paramilitary tactics, in order to organize their revolution and overthrow the government. Essentially both are domestic terrorists. This more or less made communists and fascists ‘natural enemies’. They also bear a remarkable resemblance to each other in many ways. Not only the mob tactics, but also the use of propaganda, and eventually the establishment of a totalitarian state, without much room for individuals and their opinions. Everything had to be regulated, including the media, arts, music etc.

Cynically one could say that communists and fascists are two sides of the same coin. Their tactics and goals are mostly the same, they only apply a slightly different ideology, either Marxism or Nazism. Both types of regimes caused millions of deaths. Communism even far more than Nazism, because it was more widespread and lasted longer. And not just in Russia either. The same happened in China or Cambodia for example. Dissidents had to be eliminated, which led to genocide.

The original German Antifa was disbanded in 1933 when the Nazis rose to power. Nazism ended in 1945, when WWII ended. Interestingly enough, the totalitarian regimes in the communist states kept the idea alive that fascism was still alive in the Western states. And while the actual goal of the Berlin Wall was to keep people from escaping the dreadful DDR and reaching the free BRD, they fed the people propaganda that the wall was put up in order to keep the fascists out (who, as already stated, didn’t exist anymore. But since the state controlled the media, their citizens had no idea about that, and only ‘knew’ the propaganda they were fed by the state).

And that brings me back to the current Antifa, which started in Portland. Ever since Trump started running for president, his opponents tried to frame him as far-right, racist, white supremacist, fascist and whatnot. Technically, he is none of these things. The only thing that is somewhat accurate is that he is clearly a right-wing politician. Both economically, and he also has a nationalist focus. To what extent that is actually ‘far-right’, is debatable.

But everything else just seems to be propaganda and gaslighting. He neither says nor does racist things, there are no signs of white supremacy either, and clearly he’s not a fascist. Mussolini and Hitler were ‘technically’ elected democratically, but actually used mobs to intimidate political opponents (and in Hitler’s case, there were also a number of assassinations, in the Night of the Long Knives). Trump did none of these things. He was democratically elected by the people, without any kind of intimidation; he hasn’t had anyone assassinated in order to get to power, or expand his power, or anything. He merely tries to implement his policies on healthcare, the economy, the environment and such. That is what presidents do.

He may be a lot of things (a populist, narcissistic, rude, anti-scientific etc), but he is not 'the new Hitler' or anything. He certainly hasn't pushed any kind of racist ideology, let alone changed laws to that effect. He also has not made major changes to the law to create a totalitarian regime or anything (if he had, Antifa would have been eliminated quickly. Instead, most rioters are not even arrested at all, and the ones that are tend to get little or no sentence. Fascism is far more deadly than that, idiots. You wouldn't live to tell the tale). So it looks nothing like fascism. What fascists is Antifa fighting? None; they're gaslighting you.

Getting back to the discussion with Alec… I tried to make the point that Antifa (based on communism and fascism being two sides of the same coin) was acting far more fascist than any other group in the US at this time. They are the ones going out on the streets in large mobs, intimidating people with ‘the wrong opinion’, destroying property, looting, arson etc. Look up what fascists did in Italy and Germany, or what communist revolutionaries did in Russia, China etc. That looks nothing like what the Trump administration is doing, and everything like what Antifa is doing.

You’d have to be really stupid to not be able to look beyond the obvious ploy of calling an organization “Anti-Fascism”. It’s called Anti-Fascism, so it can’t be fascism, right? Wrong. It can, and it is. This is domestic terrorism, by the book. And like many terrorist organizations, they aren’t officially organized, but operate more in individual ‘cells’, making them harder to track.

But apparently Alec was so gaslit that he claimed that fascism didn't mean what I think it means (as in: the proper definition found in many history books, encyclopedias etc). Because 'words can change meaning over time'. There we are, postmodernist/Wokeist word games again. Words have meaning; you can't just change them. Fascism clearly describes a movement that historically started with Mussolini, and pretty much ended after WWII. The term 'fascism' has since mainly been used politically/strategically, to undermine political opponents. Basically applying Godwin's law. 'Fascist' has now come to mean "anyone that Antifa disagrees with", or even "anyone that left-wing oriented people disagree with".

Nobody has referred to themselves as 'fascist' since, and no regime or political movement has officially been labeled 'fascist' by anyone. We certainly don't label the Trump administration a fascist government in Europe (or totalitarian, dictatorial, racist, or whatever else). But such labels are apparently used in the US itself by the left (even including prominent Democrats, all the way up to Biden), in order to take down the Trump administration. I think we are in a better position to judge that from the outside than the people who've been under the influence of the propaganda machine for years.

And of course, no actual debate was possible, so when I didn’t fall for the superficial word games, he just blocked me. Possibly because the ideas of Critical Race Theory and intersectionality have become mainstream, it appears that nuance has disappeared from debate. Instead, everything is very polarized. It is all black-and-white, nothing in between. It is all or nothing. Debates rarely go into actual substance and arguments. Messengers are shot and people are labeled as horrible persons for simply having a different opinion.

This exchange is what originally got me to write the previous blog. I wasn't expecting even people from 'my neck of the woods' (techy/nerdy/science-minded people) to buy into this nonsense. In fact, at some point during the exchange I actually said that I thought he would be more rational about this, as his videos show a very rational guy. He actually tried to deny that the videos he makes require rationality, as you can see.

At the time I thought that was rather strange, but now I think I may understand why. Critical Race Theory places things such as ‘rationality’, ‘objectivity’, and science in general under ‘whiteness’. So perhaps that’s why he was trying to deny it. He may have actually believed that he would be a ‘white supremacist’ or ‘racist’ or whatever if he were to admit that he is generally a rational person.

And he wasn't the only one who 'went Woke'. There's someone else in 'my neck of the woods'. I will not say who it is, because it was a private conversation, whereas the exchange with Alec was public, on Twitter, and is still available for everyone to read. But I can say that it is someone that most people who read this blog will be familiar with.

I can only say: you people are on the wrong side of history. This Woke nonsense is destroying our freedom and our communities. The Woke will force their opinions on you, as a totalitarian system, and if you do not comply, they will shut you out. There is no debate possible, your arguments will not be heard, there is no room for any kind of nuance or anything. Not even with people who you’ve known for years, and who should know better than to think you’re anywhere near a racist, fascist, sexist, homophobe, transphobe or whatever other superficial label they use to deflect any other opinions and shut people out. We are ‘dissidents’, and we must be ‘eliminated’.

Communism failed because it was based on an overly simplified view of the world, that mainly saw the world as a struggle between classes. It ignored the fact that humans are individuals, and individuals have their flaws and weaknesses. People aren’t all equal, and you can’t force them to be.

The Woke are making a very similar mistake, where Critical Race Theory/Intersectionality is again a very simplified view of the world, only marginally different from the communist one. This time it is seen as a power struggle between various ‘characteristics’ on the intersectional grid (such as gender, race, sexual preference and whatnot). And they again want to make all people equal, this time by forcing equity between groups. Again, this can only be done by force, and will fail, because the view of people is oversimplified, and the intersectional grid is a flawed view of society and humanity.

And I hope I explained why things like 'white supremacy' are completely foreign to us Europeans, and how totalitarian regimes, both fascist and communist, are far closer to home for us. So labels like 'fascist', 'white supremacist', 'racist', 'Nazi' etc. are deeply insulting to us. They are also highly disrespectful to the millions of victims of those regimes. In Europe there are still many people who lost a lot of family to the Nazis or the communists. If you really were empathetic, as you claim to be, and really cared about respect and tolerance, I wouldn't even have to tell you, because your common sense would have already made you understand how terrible that kind of behaviour would be. But you aren't. You're insensitive, ignorant, intolerant excuses for human beings.

Sargon of Akkad (who is also European) also did a similar video on that by the way:

Posted in Science or pseudoscience? | 8 Comments

The Cult of Wokeness

As you may know, I do not normally want to engage in any kind of political talk. I'm not entirely sure if you can even call this topic 'political', because free speech, science, rationality and objectivity are cornerstones of the Western world, and form the basis of the constitutions of most Western countries.

And as you may know, I have spoken out against pseudoscience before. And I have also been critical of deceptive marketing claims and hypes from hardware and software vendors, somewhat closer to home for me, as a software engineer. I value honesty, objectivity, rationality and science, because they have brought us so much over the course of history, and they can bring us so much more in the future (and with ‘us’ I mean all of humanity, because I am a humanist).

However, in current times it seems that these values have come under pressure, from a thing known as cancel culture. To make a very long and complicated story short, currently there is a "Woke" cult, which bases itself on identity politics and Critical Race Theory. In short, they think within a hierarchy of oppressors and oppressed identity groups. Any 'oppressing group' is not allowed to have any say or opinion on any 'oppressed group'. That is their simplistic view of 'social justice', 'racism', 'sexism' and related topics.

It is somewhat of a combination of postmodernist thinking and neo-Marxism. It is rather difficult to explain it all in just a few sentences, but the basic concept is that they see everything as a ‘social construct’. So man-made. Which also means that they can ‘deconstruct’ these things. They see language as a way to construct and deconstruct things. Basically, society works a certain way because of human behaviour, and language is a big part of that. By redefining language, you can ‘deconstruct’ certain behaviour, if that makes sense. It is pseudoscience of course.

A common example is the redefinition of ‘racism’, into something that is defined by what the ‘victim’ experiences. By turning this definition around, they can now argue that you can be racist even if you didn’t intend to, because that no longer matters. If someone claims they have ‘experienced racism’, then it is true, and you must be a racist. They extend this to a concept of ‘institutional racism’, where just as with racism, it’s never entirely clear what an ‘institution’ is, but again it does not matter, because as long as a ‘victim’ has ‘experienced institutional racism’, then it must be true, and therefore institutional racism must exist, even if it can’t or won’t be defined.

In general that is the modus operandi of this Woke cult: they favour feelings and emotions over facts. In other words, they value subjectivity over objectivity. I suppose you understand how that affects the world as we know it, especially science and technology. This can go as far as them not accepting facts, because since objectivity does not exist, facts are always subjective; they are a 'social construct' as well. They claim that other people can have 'other ways of knowing' (which is basically a way of saying 'magic'). Recently, there even was a discussion of how "2+2=4" is not always true. For some people it could be "2+2=5".

This is just a short introduction, but I urge you to dig into this more. There are various online sources. A good starting point is the site New Discourses. Another good source is Dr. Jordan B. Peterson. He has put up a short page on postmodernism and Marxism on his website. You can also find several of his talks on the subject on YouTube and such.

Online there are many Social Justice Warriors who will attack anyone with a wrong ‘opinion’. They don’t do this by using free speech, as in engaging in a conversation and exchanging viewpoints. They do this by basically drowning out these people. A mob mentality. They try to ‘cancel’ these people, to deplatform them.

This also leads to virtue signalling, where people post certain opinions for no apparent reason, to show they’re ‘on the good side’ (probably because they’re afraid to get cancelled themselves).

I started noticing that last thing on Twitter over the past year or so. I mostly follow tech-related people. And it occurred to me that quite a few people would post rainbow flags, and discuss trans rights and things. So I started wondering “why are they doing this? Are there so many gay/trans people in tech? I have been following this person for quite a while, and afaik they’re neither gay nor trans or anything, so what gives?”

Apparently this Woke-cult has been growing in the liberal arts colleges in the USA for many years, and it is now coming out, and trying to take over the world (some ‘academics’ are part of this, they have the credentials, but their work does not meet scientific standards, such as Robin DiAngelo and her book “White Fragility”). The Black Lives Matter movement and Antifa are the most visible manifestations of this cult at the moment. And they are trying to deconstruct many parts of society.

They want to ‘decolonize’ society, and are even attacking things like mathematics now. They claim it is a ‘social construct’ to manifest white supremacy. They want to remove the objectivity and ‘rehumanize’ mathematics. Does that sound crazy? Yes, it does. But I’m not making this up, as you can see.

Mathematics is perhaps the most abstract phenomenon you can think of, and is completely unbiased towards any human. It is just pure logic and facts. It led to computers, which use mathematics to perform all sorts of tasks, again purely with logic (arithmetic) and facts (data), entirely unbiased towards any human. And now you are proposing to look at the race and/or (ethnic) background of children to somehow teach them different kinds of mathematics? Firstly, that's a racist thing to do. Secondly, it destroys mathematics, because it will no longer be a universal, unbiased language. The paper claims that it is merely a myth that mathematics is objective and culture-free. Yet it gives no explanation whatsoever, let alone a proof, that this would be a myth.

If anything, I'd say there's plenty of proof around. So much of our technology works on the basis of mathematical principles. And that same technology works all over the world. There are people all over the world who understand these same mathematical principles, regardless of their race, background, culture or anything.

The issue with these things is that from a distance, they sound noble, but when you dig deeper, things are not quite what they seem. Eventually, most people will (hopefully) reach their Woke breaking point. Make sure you know your boundaries, and know when those lines are crossed, and act accordingly.

Anyway, there are many different instances of this Woke-cult, and we have to stop it. We have to prevent it from taking over our world and destroying everything we've worked so hard to create. So, if you were not aware yet, then hopefully you are now, and hopefully you understand that you need to get to grips with what this Woke-cult is, so that you can recognize it. Note that it is also very much in the mainstream media these days. Look out for terms like 'diversity' and 'inclusivity'.

The New York Times, for example, is a very clear example of a media outlet that has been taken over entirely by the Woke-cult. Bari Weiss resigned there recently, and she published her resignation letter, which speaks volumes. You can see the same at the Washington Post and many other papers, and at CNN, for example. Once you get a feeling for what to look for, you should pick up on Woke media quickly. They basically all have a single viewpoint, and their articles are completely interchangeable. There are no real opinion pieces anymore, just propaganda.

It’s gone so far that some media, most notably the Australian Spectator, are actually promoting themselves as “Woke-free” media:

So, let us fight the good fight, for all of humanity!

Posted in Science or pseudoscience? | 17 Comments

The strong ARM

I’ve done some posts on x86 vs ARM over the years, most recently on the new Microsoft Surface Pro X, which runs a ‘normal’ desktop version of Windows 10 on an ARM CPU, while also supporting x86 applications through emulation. This basically means that Microsoft is making ARM a ‘regular’ desktop solution that can be a full desktop replacement.

Rumours of similar activity in the Apple camp have been going around for a while as well. Ars Technica has run a story on it now, as it seems that Apple is about to make an official announcement.

In short, Apple is planning to do the same as Microsoft: instead of having their ARM devices as ‘second class citizens’, Apple will make a more conventional laptop based on a high-end ARM SoC, and will run a ‘normal’ version of macOS on it. So again a ‘regular’ desktop solution, rather than the iOS that current ARM devices run, which cannot run regular Mac applications. At this point it is not entirely clear whether these ARM devices can also run x86 applications. However, in the past, Apple did exactly that, to make 68k applications run on the PowerPC Macs, for a seamless transition. And they offered the Rosetta environment for the move from PowerPC to x86.

Aside from using emulation/translation to run applications as-is, they also offered a different solution however: they provided developers with a compiler that would generate code for multiple CPU architectures into a single binary (so both 68k and PPC, or both PPC and x86), a so-called Fat binary or Universal binary. The downside of this solution is of course that it requires applications to be compiled with this compiler, which rules out any x86 applications currently on the market.

In this sense it does not help that Intel is still struggling to complete their move from 14nm to 10nm and beyond. Apple can have its ARM SoCs made on 7nm, which should help to close the performance gap between ARM and high-end x86. I suppose that means that Intel will have to earn its right to be in Macs from now on. If Intel can maintain a performance benefit, then x86 and ARM can co-exist in the Mac ecosystem. But as soon as x86 and ARM approach performance parity, then Apple would have little reason to continue supporting x86.

Interesting times ahead.

Posted in Hardware news | 3 Comments

Batch, batch, batch: Respect the classics!

Today I randomly stumbled upon some discussions about DirectX 12, Mantle and whatnot. It seems a lot of people somehow think that the whole idea of reducing draw call overhead was new for Mantle and DirectX 12. While some commenters managed to point out that even in the days of DirectX 11, there were examples of presentations from various vendors talking about reducing draw call overhead, that seemed to be as far back as they could go.

I on the other hand have witnessed the evolution of OpenGL and DirectX from an early stage. And I know that the issue of draw call overhead has always been around. In fact, it really came to the forefront when the first T&L hardware arrived. One example was the Quake renderer, which used a BSP tree, to effectively depth-sort the triangles. This was a very poor case for hardware T&L, because it created a draw call for every individual triangle. Hardware T&L was fast if it could process large batches of triangles in a single go. But the overhead of setting the GPU up for hardware T&L was quite large, given that you had to initialize the whole pipeline with the correct state. So sending triangles one at a time in individual draw calls was very inefficient on that type of hardware. This was not an issue when all T&L was done on the CPU, since all the state was CPU-side anyway, and CPUs are efficient at branching, random memory access etc.

This led to the development of ‘leafy BSP trees’, where triangles would not be sorted down to the individual triangle level. Instead, batches of triangles were grouped together into a single node, so that you could easily send larger batches of triangles to the GPU in a single draw call, and make the hardware T&L do its thing more efficiently. To give an idea of how old this concept is, a quick Google drew up a discussion on BSP trees and their efficiency with T&L hardware on Gamedev.net from 2001.
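To make the leafy-BSP idea concrete, here is a toy Python sketch (my own illustration, not the actual Quake code, and with an idealized evenly-splitting tree): a classic BSP that sorts down to individual triangles produces one draw call per triangle, while a leafy BSP that stops subdividing at a threshold turns each leaf into one batch.

```python
# Toy illustration of why a 'leafy' BSP helps hardware T&L: a classic
# BSP sorts down to single triangles, so every triangle becomes its own
# draw call; a leafy BSP stops subdividing once a node holds at most
# `leaf_size` triangles, so each leaf can be sent to the GPU as one batch.

def count_draw_calls(num_triangles: int, leaf_size: int) -> int:
    """Each leaf of the (idealized, evenly-split) BSP is one draw call."""
    if num_triangles <= leaf_size:
        return 1
    half = num_triangles // 2
    # A splitting plane divides the triangles over two child nodes.
    return (count_draw_calls(half, leaf_size)
            + count_draw_calls(num_triangles - half, leaf_size))

triangles = 100_000
print(count_draw_calls(triangles, 1))    # classic BSP: 100000 draw calls
print(count_draw_calls(triangles, 256))  # leafy BSP: 512 draw calls
```

The leaf threshold is a tunable trade-off: larger leaves mean fewer, fatter draw calls, at the cost of coarser depth-sorting.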

But one classic presentation from NVIDIA that has always stuck in my mind is their Batch Batch Batch presentation from the Game Developers Conference in 2003. This presentation was meant to 'educate' developers on the true cost of draw calls on hardware T&L and early programmable shader hardware. To put it in perspective: they use an Athlon XP 2700+ CPU and a GeForce FX5800 as their high-end system in that presentation, which would have been cutting-edge at the time.

What they explain is that even in those days, the CPU was a huge bottleneck for the GPU. So much time was spent on processing a single draw call and setting up the GPU that you basically got thousands of triangles 'for free' if you just added them to that single call. At 130 triangles or less per batch, you are completely CPU-bound, even with the fastest CPU of the day.

So they explain that the key is not how many triangles you can draw per frame, but how many batches per frame. There is quite a hard limit to the number of batches you can render per frame, at a given framerate. They measured about 170k batches per second on their high-end system (and that was a synthetic test doing only the bare draw calls, nothing fancy). So if you would assume 60 fps, you’d get 170k/60 = 2833 batches per frame. At one extreme of the spectrum, that means that if you only send one triangle per batch, you could not render more than 2833 triangles per frame at 60 fps. And in practical situations, with complex materials, geometry, and the rest of the game logic running on the CPU as well, the number of batches will be a lot smaller.

At the other extreme however, you can take these 2833 batches per frame, and chuck each of them full of triangles ‘for free’. As they say, if you make a single batch 500 triangles, or even 1000 triangles large, it makes absolutely no difference. So with larger batches, you could easily get 2.83 million triangles on screen at the same 60 fps.
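The arithmetic of the two extremes is worth spelling out, using the figures quoted above from the presentation:

```python
# Back-of-the-envelope math: ~170k batches per second on the 2003
# high-end system, at a 60 fps target.

batches_per_second = 170_000
fps = 60
batch_budget = batches_per_second // fps  # batches available per frame
print(batch_budget)  # 2833

# One triangle per batch: throughput collapses.
worst_case_triangles = batch_budget * 1       # 2833 triangles per frame
# 1000 triangles per batch: same number of draw calls, 1000x the geometry.
best_case_triangles = batch_budget * 1000     # 2833000 triangles per frame
print(worst_case_triangles, best_case_triangles)
```

Note that the batch budget depends only on the CPU, which is exactly why the presentation frames it as "batches per frame" rather than "triangles per frame".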

And even in 2003 they already warned that this situation was only going to get worse, since the trend was, and still is, that GPU performance scales much more quickly than CPU performance over time. So basically, ever since the early days of hardware T&L, the whole CPU overhead problem has been a thing. Not just since DirectX 11 or 12. These were the days of DirectX 7, 8 and 9 (they included numbers for GeForce2 and GeForce4 MX cards, which are DX7-level, and they all suffer from the same issue; even a GeForce2 MX can do nearly 20 million triangles per second if fed efficiently by the CPU).

So as you can imagine, a lot of effort has been put into both hardware and software to try and make draw calls more efficient. Like the use of instancing, rendering to vertex buffers, supporting texture fetches from vertex shaders, redesigned state management, deferred contexts and whatnot. The current generation of APIs (DirectX 12, Vulkan, Mantle and Metal) are another step in reducing the bottlenecks surrounding draw calls. But although they reduce the cost of draw calls, they do not solve the problem altogether. It is still expensive to send a batch of triangles to the GPU, so you still need to feed the data efficiently. These APIs certainly don't make draw calls free, and we're nowhere near the ideal situation where you can fire off draw calls for single triangles and expect decent performance.
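A simple cost model illustrates that last point. The microsecond costs below are made up purely for the sake of argument, not measured numbers: even if a modern API makes each draw call ten times cheaper, single-triangle batches still cap triangle throughput far below what proper batching achieves.

```python
# Illustrative (made-up) cost model: cheaper draw calls help, but
# batching still does the heavy lifting.

FRAME_BUDGET_US = 16_667  # CPU time per frame at 60 fps, in microseconds

def max_triangles_per_frame(call_cost_us: int, tris_per_call: int) -> int:
    """How many triangles fit in one frame of CPU time."""
    calls = FRAME_BUDGET_US // call_cost_us
    return calls * tris_per_call

old_api = max_triangles_per_frame(call_cost_us=10, tris_per_call=1)
new_api = max_triangles_per_frame(call_cost_us=1, tris_per_call=1)
batched = max_triangles_per_frame(call_cost_us=1, tris_per_call=1000)

print(old_api)  # 1666 triangles: hopeless
print(new_api)  # 16667: ten times better, still hopeless
print(batched)  # 16667000: batching remains essential
```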

I hope you liked this bit of historical perspective. The numbers in the Batch Batch Batch presentation are very interesting.

Posted in Direct3D, Oldskool/retro programming, OpenGL, Software development, Vulkan | 5 Comments

Politicians vs entrepreneurs

Recently the discussion of a newly published book caught my attention. The book investigated some of the ramifications of the financial crisis of 2007-2008. Specifically, it investigated how a bank received government support. This was done in the form of the government buying a large, controlling stake in the bank, and also installing a CEO. This CEO was a politician.

In short, the story goes that initially he did a good job, carefully controlling the bank's spending and nursing the bank back to health. However, as time went on, the bank was ready to grow again and invest in new projects. This became a problem, partly because of the government being a large shareholder, and partly because of the CEO being a politician. They were reluctant to take risks, which resulted in various missed opportunities. Ironically, there was an opportune moment at which the government could have sold its shares, recouped its full investment and even made a profit. But because the government was reluctant to do so at the time, it is unlikely to get another opportunity soon, as the current Corona-crisis made the shares drop significantly, and the government would now be selling at a huge loss.

This in turn led to an internal struggle between the CEO and other members of the board, who wanted ‘real’ bankers, more willing to take risks, and expand on opportunities. Eventually it led to the ousting of the CEO.

What struck me with this story was that I recognize these different management styles in software as well. I’d like to name “Delphi” as a key word here. Back in my days at university, I once did an internship with two other students, at a small company. As a result, Delphi has been on my resume for ages, and I ended up doing projects at various different Delphi-shops. This caused me to realize at some point that you should not put skills on your resume that you don’t want to use.

Why Delphi?

Delphi is just an example I’m giving here, because I have first-hand experience with this situation. There are various other, similar situations though. But what is the issue with Delphi? Well, for that, we have to go back to the days of MS-DOS. A company by the name of Borland offered various development tools. Turbo Pascal was one of them, and it was very popular with hobbyists (and also demosceners). It had a very nice IDE for the time, which allowed you to build and debug from the same environment. Its compile-speed was revolutionary. And in those days, that mattered. Computers were not very fast, and it could take minutes to build even a very simple program, before you could run, test and debug it.

Turbo Pascal was designed to make building and linking as fast and efficient as possible (see also here). Today you may be used to just hitting "build and run in debugger" in your IDE, because it takes only a few seconds, and it's an easy way to see if your new addition/fix will compile and work as expected. But in those days, that was not an option in most environments. Turbo Pascal was one of the first environments that made this feasible. It led to an entirely different way of developing: instead of meticulously preparing and writing your code to avoid any errors, you could use the compiler as a tool to check for errors.

When the move was made from MS-DOS to Windows, in the 90s, a new generation of Turbo Pascal was developed by Borland. This version of Turbo Pascal was called Delphi. Windows was an entirely different beast from MS-DOS though. DOS itself was written in assembly, and interacting with DOS or the BIOS required low-level programming (API calls were done via interrupts). This, combined with the fact that machines in the early days of DOS were limited in terms of CPU and memory, meant that quite a lot of assembly code was used. Windows however was written in a combination of assembly and C, and its APIs had a C interface.

As a result, not everyone who used Turbo Pascal for DOS, would automatically move to Delphi. Many developers, especially the professional ones, would use C/C++. And for the less experienced developers, there now was a big competitor in the form of Visual Basic. Where Delphi was supposed to promote its IDE and its RAD development as key selling points, Visual Basic now offered similar functionality for fast application development, but combined it with a BASIC dialect, which was easier to use than Pascal, for less experienced developers.

This meant that Delphi quickly became somewhat of a niche product. It was mainly used by semi-professionals. People who couldn't or wouldn't make the switch to C/C++, but who were too advanced to be using something like Visual Basic. The interesting thing is that even though I already felt during my internship in the early 2000s that Delphi was a niche product on its way out, it still survives to this day.

Delphi as a product has changed hands a few times. Borland no longer exists. Today, a company by the name of Embarcadero maintains Delphi and various other products originating from Borland, and they still aim to release a new major version every year.

While I don’t want to take away from their efforts (Delphi is a good product for what it is: a Pascal-based programming environment for Windows and other platforms), fact of the matter is that Embarcadero is a relatively small company, and they are basically the only ones aiming for Pascal solutions. Compare that to the C/C++ world, where there are various different vendors of compilers and other tools, and most major operating systems and many major applications are actually developed with this language and these tools. The result is that interfacing your code with an OS or third-party libraries, devices, services and whatnot is generally trivial and well-supported in C/C++, while you are generally on your own in Delphi.

And that’s just comparing Delphi with C/C++. Modern languages have since arrived, most notably C#, and these modern languages make development easier and faster than Delphi with its Pascal underpinnings. Which is not entirely a coincidence, given that Anders Hejlsberg, the original developer of Turbo Pascal and the lead architect of Delphi, left Borland for Microsoft in 1996, and became the lead architect of C#.

Back to the point

As I said, the use of Delphi can generally be traced back to semi-professional developers who started using Turbo Pascal in DOS. For the small company of my internship that was certainly the case. Clearly, being dependent on Delphi is quite a risk as a business. Firstly because there is only one supplier of your development tools. And development tools need maintenance. It has always been common for Delphi (and other development tools) to require updates when new versions of Windows were released. Since development tools tend to interact with the OS at a relatively low level, to make debugging and profiling code possible, they also tend to be more vulnerable to small changes in the OS than regular applications. So if Embarcadero cannot deliver an update in time, or even at all, you may find yourself in the situation that your application cannot be made to work on the latest version of Windows.

Another risk stems from the fact that Delphi/Pascal is such a niche language. Not many developers will know the language. Most developers today will know C#, Java, C/C++. They can find plenty of jobs with the skills they already have, so they are not likely to want to learn Delphi just to work for you. The developers that remain, are generally older, more experienced developers, and their skills are a scarce resource which will be in demand, so they will be more expensive to hire.

This particular company was so small that it was not realistic to expect them to migrate their codebase to another language. The migration itself would be too risky and have too much of an impact. With the amount of development resources they have, it would take years to migrate the codebase (even so, I would still recommend developing new things in C/C++ or C# modules which integrate into the existing codebase, and, whenever there is time, also converting relevant existing code to such modules, so that eventually a Delphi-free version of the application may be within reach).

However, over time I also worked at other companies that mainly used Delphi. And I’ve come to see Delphi as a red flag. The pattern always appeared to be that just a few semi-professionals with a Turbo Pascal background developed some core technology that the company was built on, and moving to Delphi was the logical/only next step.

Some of these companies 'accidentally' grew to considerable size (think 100+ employees), yet they never shook their Delphi roots, even though at that size the risk factor of limited development resources would no longer apply. All the other risks do, of course. So it should be quite obvious that action is required to get away from Delphi as quickly as possible.

Politician or entrepreneur?

That brings me to the original question. Because it seems that even though these companies have grown over time, their semi-professionalism from their early Turbo Pascal/Delphi days remains, and is locked into their company culture.

So the people who should be calling the shots don’t want to take any risks, and just want to try and please everyone. The easiest way of doing that is to maintain the status quo. And that sounds an awful lot like a politician. Especially if you factor in that these people are semi-professionals, not true professionals. They may not actually have a proper grasp of the technology their company works with. They merely work based on opinions and second-hand information. They are reactive, not proactive.

Ironically it tends to perpetuate itself, because when that is the company culture, the people they tend to hire, will also be the same type of semi-professionals (less skilled developers, project managers without a technical background etc). Should they ‘accidentally’ hire a true professional/entrepreneur, then this person is not likely to function well in this environment. Those people would want to improve the company, update the culture, and be all they can be. But that may rub too many people the wrong way.

With a true entrepreneur it’s much easier to explain risks and possibilities, and plot a path to a better future. They will be more likely to try new things, and understand that not every idea may lead to success, so they may be more forgiving of experimentation as well (I don’t want to use the word ‘failure’ here, because I think taking risks should not be done blindly. You should experiment and monitor, and try to determine as early as possible whether an idea will be a success or not, so that you minimize the cost of failed ideas).

I think it’s the difference between looking at the past, and trying to hold on to what you’ve got, versus looking to the future and trying to gauge what you can do better, using creativity and innovation. A politician may be good in times of crisis, to try and minimize losses. But they will never bring a company to new heights.

And my experience in such companies is that they still use outdated/archaic tools, and tend to have a very outdated software portfolio. Still selling products based on source code that hasn’t had any proper maintenance in over 10 years. Constantly running into issues like moving to Windows 10 or moving to 64-bit, which are not even issues in the first place for other organizations, because they had already updated their tools and codebase before these ever became issues (for example, C# is mostly architecture-agnostic, so most C# code will compile just fine for 32-bit and 64-bit, x86 or ARM. And since the .NET Framework is part of Windows itself, your C# code will generally work fine out-of-the-box on a new version of Windows).

Being reactive only is a recipe for a technical debt disaster. I have experienced that they would not do ANY maintenance on their codebase whatsoever, outside of their projects. So there was no development or maintenance on their products, unless they had a paying customer who specifically wanted a solution. Which also meant that the customer had to pay for all maintenance. This approach obviously was not sustainable, since you could not charge the customers for what it would cost to do proper maintenance and solve all the technical debt. It would make your product way too expensive. The company of course wanted to have competitive pricing, even trying to undercut competitors. And project managers would also want to keep things as cheap as possible, so the situation only got worse over time.

I think Microsoft shows a very decent strategy for product development with Windows. Or at least, they did in the past 20+ years. For example, they made sure that Windows XP was a stable version. They could then move to a more experimental Windows, in the form of Vista, where they could address technical debt, and also add new functionality (such as Media Foundation and DirectX 10). Vista may not have been a huge success, but there was always XP for customers to fall back on. The same pattern repeated with Windows 7 and Windows 8-10. Windows 7 continued what Vista started, but made it stable and reliable for years to come. This again gave Microsoft the freedom to experiment with new things (touch interfaces, integrating with embedded devices, phones, tablets etc, and the Universal Windows Platform). Windows 8 and 8.1 were again not that successful, but Windows 10 is again a stable version of this technology.

So in general, you want to create a stable version of your current platform, for your customers to fall back on. The more stable you make this version, the more freedom you have to experiment with new and innovative ideas, and get rid of your technical debt.

I just mentioned Delphi as an obvious red flag that I encountered over the years, but I’m sure there are plenty of other red flags. I suppose Visual Basic would be another one. Please share your experiences in the comments.

Posted in Software development | 2 Comments

Some thoughts on emulators

Recently I watched Trixter’s latest Q&A video on YouTube, and at 26:15 there was a question regarding PC emulators:

That got me thinking, I have some things I’d like to share on that subject as well.

First of all, I share Trixter’s views in general. Although I am a perfectionist, I am not sure that perfectionism is my underlying reason in this case. I think emulators should strive for accuracy, which is not necessarily “perfection”. It is more of a pragmatic thing: you want the emulator to be able to run all software correctly.

However, that leads us into a sort of chicken-and-egg problem, which revolves around definitions as well. What is “all software”? What does “correctly” mean? And in the case of a PC emulator, there’s even the question of what a “PC” is exactly. There are tons of different hardware specs for PCs, even if you only look at the ones created by IBM themselves. Let alone if you factor in all the clones and third-party addons. I will just give some thoughts on the three subjects here: What hardware? What software? What accuracy/correctness?

What hardware?

While the PC is arguably the most chaotic platform out there, in terms of different specs, we do see that emulators for other platforms also factor in hardware differences.

For example, if you look at the C64, at face value it’s just a single machine. However, if you look closer, then Commodore has always had both an NTSC and a PAL version of the machine. Their hardware was similar, but due to the different requirements of the TV signals, the NTSC and PAL machines were timed differently. This also led to software having to be developed specifically for an NTSC or PAL machine.

As a result, most emulators offer both configurations, so that you can run software targeted at either machine. Likewise, there are some differences between revisions of the chips, most notably the SID sound chip: while they are compatible at the software level, the 6581 version sounds quite different from the later 8580 version. Most emulators therefore let you select from various chip revisions, so that the sound most closely matches that specific revision of the machine.
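To make the NTSC/PAL timing difference concrete, here is a minimal sketch (using the commonly documented crystal frequencies and divider values; treat the exact constants as illustrative assumptions rather than a definitive reference) of how the two machines derive their CPU clocks:

```python
# Approximate C64 CPU clock derivation. The constants and dividers below
# are the commonly documented values; treat them as illustrative.

NTSC_CRYSTAL_HZ = 14_318_180   # 4x the NTSC colorburst frequency
PAL_CRYSTAL_HZ = 17_734_475    # ~4x the PAL colour carrier frequency

def ntsc_cpu_clock_hz():
    # NTSC machines divide the crystal by 14
    return NTSC_CRYSTAL_HZ / 14

def pal_cpu_clock_hz():
    # PAL machines divide their (different) crystal by 18
    return PAL_CRYSTAL_HZ / 18
```

So an NTSC C64 runs its CPU roughly 4% faster than a PAL machine, which is why cycle-counted code written for one often breaks on the other.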

The PC world is not like that, however. There were so many different clone makers around, and most of these clones were so far from perfect, that the number of different possible configurations would be impossible to catalogue and emulate. At the same time, the fact that basically no two machines were exactly alike also makes it less relevant to emulate every single derivation. As long as you can emulate one machine ‘in the ballpark’, it gives you essentially the same experience as real hardware did back in the day.

So the question is more about defining which ‘ballparks’ you have. I would say that the original IBM PC 5150 would make a lot of sense to emulate correctly, as a starting point. This is the machine that the earliest software was targeted at, and also the machine that early clones were cloning.

The PC/XT 5160 and 5155 are just slight derivations of the 5150, and the differences generally do not affect software; they only matter for physical machines. For example, they no longer have the cassette port, and they have 8 expansion slots with a slightly narrower spacing, where the 5150 had 5.

Likewise, because most clones of that era are generally imperfect, and could not run all software correctly, they are less interesting as an emulator target.

Another two machines that make an interesting ballpark are the IBM PCjr and the Tandy 1000. They are related to the original PC, but offer extended audio and video capabilities. The Tandy 1000 was more or less a clone of the PCjr, but the PCjr was a commercial flop, while the Tandy 1000 was a huge success. In practice, this means a lot more software targets the Tandy 1000 specifically, rather than the PCjr original.

From then on, the PC standard became more ‘blurred’. Clones took over from IBM, and software adapted to this situation, by being more forgiving about different speeds, or slight incompatibilities between chipsets and such. So perhaps a last ‘exact’ target could be the IBM AT 5170, but after that, just “generic” configurations for the different CPU types (386, 486, Pentium etc) would be good enough, because that’s basically what the machines were at that point.

What software?

For me the answer to this one is simple: One should strive to be able to run all software. I have seen various emulator devs dismiss 8088 MPH, because it is the only software of its kind, in how it uses the CGA hardware to generate extra colours and special video modes. I don’t agree with that argument.

The argument also seems to be somewhat unique to the PC emulator scene. If you look at C64 or Amiga emulators, they do try to run all software correctly. Even when demos or games find new hardware tricks, emulators are quickly modified to support this.

What accuracy/correctness?

I think this is especially relevant for people who want to use the emulator as a development tool. In the PC scene, it is quite common that demos are developed exclusively on DOSBox, and they turn out not to run on real hardware at all. Being able to run as much software as possible is one thing. But emulators should not be more forgiving than real hardware. Code that fails on real hardware, should also fail on an emulator.

An interesting guideline for accuracy/correctness is to emulate “any externally observable effects”. In other words: you can emulate things as a black box, as long as there is no way that you can tell the difference from the outside. At the extreme, it means you won’t have to emulate a machine down to the physical level of modeling all gates and actually emulating the electrons passing through the circuit. Which makes sense in some way, because the integrated circuits that make up the actual hardware are also black boxes to a certain extent. Only the input and output signals can be observed from the outside.

However, that is difficult to put into practice, because what exactly are these “externally observable effects”? It seems that this is somewhat of a chicken-and-egg problem: a definition that may shift as new tricks are discovered. I already mentioned 8088 MPH, which used NTSC artifacting in a new way. Up to that point, emulators had always assumed that you could basically only observe 16 different artifact colours. It was known that there was more to NTSC artifacting than just these fixed 16 colours, but because nobody ever wrote any software that did anything with it, it was ignored in emulation, because it was not ‘externally observable’.

Another example is the C64 demo Vicious Sid. It has a “NO SID” part:

It exploits the fact that there is a considerable amount of crosstalk between video and audio in the C64’s circuit. So by carefully controlling the video signal, you can effectively play back controlled audio by means of this crosstalk.

So although it was known that this crosstalk exists, it was ignored by emulators, as it was just considered ‘noise’. However, Vicious Sid now does something ‘useful’ with this externally observable effect, so it should be emulated in order to run this demo correctly. And indeed, emulators were modified to make the video signal ‘bleed’ into the audio, like on a real machine.
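As a sketch of how an emulator might model such crosstalk, here is a toy linear ‘bleed’ model: the emulated audio output is the SID output plus an attenuated copy of the instantaneous video signal level. The bleed factor is a made-up constant for illustration; a real emulator would calibrate it against recordings from actual hardware.

```python
# Hypothetical linear crosstalk model: video signal bleeds into audio.

BLEED_FACTOR = 0.05  # illustrative attenuation, not a measured value

def mix_output_sample(sid_sample, video_level):
    """Combine one SID audio sample with the video level at that instant."""
    return sid_sample + BLEED_FACTOR * video_level
```

With video held at a constant level, nothing audible changes; but modulating `video_level` at audio rates produces a tone, which is essentially what the “NO SID” part does.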

This also indicates that there may be various other externally observable effects that are already known, but ignored in emulators so far, just waiting to be exploited by software in the future.

Getting back to 8088 MPH, aside from the 1024 colours, it also has some cycle-exact effects. These too cause a lot of problems with emulators. One reason is the waitstates that can be inserted on the data bus by hardware. CGA uses single-ported memory, so it cannot have both the video output circuit and the CPU access the video RAM at the same time. Therefore, it inserts waitstates on the bus, to block the CPU whenever the output circuit needs to access the video RAM.

This was a known externally observable effect, but no PC emulator ever bothered to emulate the hardware to this level, as far as I know. PC emulators tend to just emulate the different components in their own vacuum. In reality all components share the same bus, and therefore the components can influence each other. It is relevant that waitstates are actually inserted on the bus, and are actually adhered to by other components.
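As a sketch of what bus-level emulation could look like, here is a toy model (the cycle counts are illustrative, not measured CGA timings) where memory accesses go through a shared bus object, so that the CGA output circuit can actually stall the CPU instead of each component running in its own vacuum:

```python
# Toy shared-bus model: the CGA output circuit can claim the video RAM,
# and CPU accesses to that RAM incur waitstates until it is released.
# All cycle counts are illustrative, not real hardware values.

class SharedBus:
    def __init__(self):
        self.cycle = 0
        self.video_busy_until = 0  # CGA owns video RAM until this cycle

    def cga_fetch(self, duration):
        # the CGA output circuit claims the bus for 'duration' cycles
        self.video_busy_until = max(self.video_busy_until,
                                    self.cycle + duration)

    def cpu_access_vram(self):
        # the CPU is stalled (waitstates) until the CGA releases the RAM,
        # then spends one cycle performing the actual access
        waitstates = max(0, self.video_busy_until - self.cycle)
        self.cycle += waitstates + 1
        return waitstates
```

The point is not the specific numbers, but the structure: because both parties go through one object, a CGA fetch has a visible effect on CPU timing, which is exactly the interaction most PC emulators skip.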

It is also relevant that although the different components may run on the same clock generator, they tend to have their own clock dividers internally, and this means that the relative phase of components to each other should also be taken into account. That is, there is a base clock of 14.31818 MHz on the motherboard. The CPU clock of 4.77 MHz is derived from that by dividing it by 3. Various other components run at other speeds, derived from that same base clock, such as 1.19 MHz for the PIT and 3.58 MHz for the NTSC colorburst and related timings.
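These derived clocks follow from simple integer dividers on the base clock; a quick sketch of the arithmetic (divider values as commonly documented for the IBM PC):

```python
# Derived clocks on the IBM PC motherboard, all from one 14.31818 MHz crystal.

BASE_CLOCK_HZ = 14_318_180

CPU_HZ = BASE_CLOCK_HZ / 3         # ~4.77 MHz 8088 CPU clock
PIT_HZ = BASE_CLOCK_HZ / 12        # ~1.19 MHz timer (PIT) clock
COLORBURST_HZ = BASE_CLOCK_HZ / 4  # ~3.58 MHz NTSC colorburst
```

Note that the PIT clock is exactly a quarter of the CPU clock, and the colorburst is exactly three quarters of it; these fixed ratios are what make cycle-exact CGA tricks possible in the first place.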

During development of 8088 MPH, we found that the IBM PC is not designed to always start in the exact same state. In other words, the dividers do not necessarily all start at the same cycle on the base clock, which means that they can be out of phase in various ways. The relative phase of mainly the CPU, PIT and CGA circuit may change between power-cycles. In 8088 MPH this leads to the externally observable effect that snow may not always be hidden in the border during the plasma effect. You can see this during the party capture:

The effect was designed to hide the snow in the border. And during development it did exactly that. However, when we made this capture at the party, the machine had apparently powered on in one of the states where the waitstates were shifted to the right somewhat. Normally, two ‘columns of snow’ are hidden in the border. But because of this phase shift, the second column of snow was now clearly visible on the left of the screen.

We did not change the code for the final version. But since we were now aware of the problem, we just power-cycled the machine until it was in one of the ‘good’ phase states, before we did the capture (it is possible to detect which state the system is in, via software. As far as we know it is not possible to modify this state in any way though, through software, so only a power-cycle can change it):
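A toy illustration of why the power-on phase matters (the offsets here are hypothetical, and as noted above, real hardware does not let software choose them): each divider can start at an arbitrary offset within its division period, so the tick pattern of a derived clock shifts relative to the base clock between power cycles.

```python
def divided_clock_edges(divisor, start_offset, n_base_cycles):
    """Base-clock cycle numbers at which a divided clock ticks,
    for a divider that powered up at a given phase offset."""
    return [c for c in range(n_base_cycles)
            if (c + start_offset) % divisor == 0]
```

Two power-ups with different offsets produce shifted tick patterns, e.g. a divide-by-4 clock ticks on base cycles 0, 4, 8, … in one state, and on 3, 7, 11, … in another: exactly the kind of shift that moved the second column of snow out of the border.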

So in general I think this really is a thing that emulators MUST do: components should really interact with each other, and the state of the bus really is an externally observable effect. As is the clock phase.

For most other emulators this is apparent, because software on a C64, Amiga, Atari ST or various other platforms tends to have very strict requirements for timing anyway. More often than not, software will not work as intended if emulation is even a single cycle off. For PCs it is not that crucial, but I think that at least for the PC/XT platforms, this exact timing should be an option. Not just for 8088 MPH, but for all the cool games and demos people could write in the future, if they have an emulator that enables them to experiment with developing this type of code.

Related to that is the emulation of the video interface. Many PC emulators opt for just emulating the screen one frame at a time, or per-scanline at best. While this generally ‘works’, because most software tries to avoid changing the video state mid-frame or mid-scanline, it is not how the hardware works. If you write software that changes the palette in the middle of a scanline, then that is exactly what you will see on real hardware.
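A sketch of the difference, assuming a toy per-pixel renderer where palette writes are applied at the exact pixel position at which they occur (the data structures are illustrative, not any particular emulator’s API):

```python
def render_scanline(pixel_indices, palette, palette_writes):
    """Render one scanline pixel by pixel.
    pixel_indices: colour index for each pixel position.
    palette: {index: rgb} as it stands at the start of the scanline.
    palette_writes: {pixel_pos: (index, rgb)} writes landing mid-scanline."""
    palette = dict(palette)  # copy: writes affect this scanline onward only
    out = []
    for x, idx in enumerate(pixel_indices):
        if x in palette_writes:          # apply the write at this exact pixel
            widx, rgb = palette_writes[x]
            palette[widx] = rgb
        out.append(palette[idx])
    return out
```

A per-frame or per-scanline emulator would paint the whole line in one colour; in this per-pixel model, as on real hardware, the colour changes exactly at the position of the write.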

Because at the end of the day, let’s face it: that is how these machines work. You should emulate how the machine works. And this means it is more than the sum of its parts. Emulating only the individual components, while ignoring any interaction, is an insufficient approximation of the real machine.

Posted in Oldskool/retro programming | Tagged , , , , , , , , , , , , | Leave a comment

Windows and ARM: not over yet

As you may recall, I was quite fond of the idea of ARM and x86 getting closer together, where on the one hand, Windows could run on ARM devices, and on the other hand, Intel was developing smaller x86-based SOCs in their Atom line, aimed at embedded use and mobile devices such as phones and tablets.

It has been somewhat quiet on that front in recent years. On the one hand because Windows Phones never managed to gain significant marketshare, and ultimately were abandoned by Microsoft. On the other hand because Intel never managed to make significant inroads into the phone and tablet market with their x86 chips either.

However, Windows on ARM is not dead yet. Microsoft recently announced the Surface Pro X. It is a tablet, which can also be used as a lightweight laptop when you connect a keyboard. There are two interesting features here though. Firstly the hardware: unlike previous Surface Pro models, this one does not run on an x86 CPU, but on an ARM SoC. And one that Microsoft developed in partnership with Qualcomm: the Microsoft SQ1. It is quite a high-end ARM CPU.

Secondly, there is the OS. Unlike earlier ARM-based devices, the Surface Pro X does not get a stripped-down version of Windows (previously known as Windows RT), where the desktop is very limited. No, this gets a full desktop. What’s more, Microsoft integrated an x86 emulator in the OS. Which means that it can not only run native ARM applications on the desktop, but also legacy x86 applications. So it should have the same level of compatibility as a regular x86-based Windows machine.

I suppose we can interpret this as a sign that Microsoft is still very serious about supporting the ARM architecture. I think that is interesting, because I’ve always liked the idea of having competition in terms of CPU architectures and instruction sets.

There are also other areas where Windows targets ARM. There is Windows 10 IoT Core. Microsoft supports a range of ARM-based devices here, including the Raspberry Pi and the DragonBoard. I have tried IoT Core on a Raspberry Pi 3B+, but was not very impressed. I wanted to use it as a cheap rendering device connected to a display, but the RPi’s GPU is not supported by the drivers, so you get software rendering only. The DragonBoard however does have hardware rendering support, so I will be trying that out soon.

I ported my D3D11 engine to my Windows Phone (a Lumia 640) in the past, and that ran quite well. Developing for Windows 10 IoT is very similar, as it supports UWP applications. I dusted off my Windows Phone recently (I no longer use it, since support has been abandoned, and I switched to an Android phone for everyday use), and did some quick tests. Sadly Visual Studio 2019 does not appear to support Windows Phones for development anymore. But I reinstalled Visual Studio 2017, and that still worked. I can just connect the phone with a USB cable, and deploy debug builds directly from the IDE, and have remote debugging directly on the ARM device.

I expect the DragonBoard to be about the same in terms of usage and performance. Which should be interesting.

Posted in Direct3D, Hardware news, Software development, Software news | Tagged , , , , , , , | 5 Comments