The Cult of Wokeness, follow-up

The previous article was just meant as a quick overview and wake-up call. But I would like to say a few more things on the subject.

I have since read the book Cynical Theories by Helen Pluckrose and James Lindsay. I recommend that everyone read this book, so that they are up to speed with the current Woke mindset. At the very least, I suggest you read a review of the book, to get a rough idea. The review by Simon Jenkins gives a good quick overview of the topics that the book discusses. I will also repeat my recommendation to read some of the articles and background information on the New Discourses site.

I would like to elaborate on two things. Firstly, there is the pseudoscientific nature of it, which is what I am most concerned about, as I said before. Secondly, I also want to discuss some forms in which Woke has manifested itself in the real world.

Postmodernist philosophy

As you know, I’ve done a write-up about the philosophy of science before. At university, this was taught in a number of courses in the first three years, and I always took a liking to it. It is important to know what our current methods of science are exactly, where they came from, and how they evolved.

As you may have noticed, I did not cover postmodernism at all. That was not intentional. Postmodernism simply never crossed my path at the time. But now that it has, I went through my old university books and readers again, and indeed, there was no specific coverage of postmodernism at all. It seems that the only postmodernist philosopher who is referenced at all is Paul Feyerabend.

Feyerabend is actually a somewhat controversial figure, as he wanted a sort of ‘anarchistic’ version of science, and rejected Popper’s falsification, for example. The university material I have only spends one paragraph on him, merely to state that purely rational science is one extreme view, with Feyerabend representing the other extreme. It adds the nuance that in practice, science operates somewhere in the gray area between these extremes.

And that brings me to the point I want to make. Postmodernism is extremely critical of society in general, and of science specifically. There is some value to the ideas that postmodernism brings forward. At the same time, you should not take these ideas to the extreme. Also, the reason why they were not covered in the philosophy of science courses is that they did not actually produce new knowledge or useful methods. So they did not add anything ‘tangible’ to science; they merely brought more focus to possible pitfalls of bias, political interest and other ideologies.

There is some merit to their idea of systems that can be ‘rigged’ by having a sort of bias built in: a bias that you might be able to uncover by looking at the way that people talk about things, the ‘discourses’, and the idea that the system and the bias are ‘socially constructed’.

After all, with ‘politically correct’ language we are basically doing exactly that: we choose to use certain words, and avoid certain other words, to shift the perception (bias) of certain issues. So in that sense it is certainly possible to create certain ‘biases’ socially, and language is indeed the tool to do this.

However, they see everything as systems of power and hierarchy, and the goal of the system is always to maintain the position of power at the cost of the lesser groups (basically a very dystopian view, like in the book 1984 by George Orwell). That is not necessarily always the case. For example, science is not a system of social power. Its goal is to obtain (objective and universal) knowledge, not to benefit certain groups at the cost of others. Heck, if anything proves that beyond a shadow of a doubt, then it must be the main topic I normally cover on this blog: hardware and software. Scientists have developed digital circuits, transistors, computer chips, CPUs etc., and developed many tools, algorithms etc. to put this hardware to use. As a result, digital circuits and/or computers are now embedded in tons of devices all around you in everyday life, making life for everyone easier and better. Many people have jobs that exist solely because of these inventions. Everyone benefits in various ways from all this technology.

And I think that’s where the cynicism comes in. Postmodernists try to find problems of power-play and ‘oppression’ in every situation. That is indeed a ‘critical’ and ‘skeptical’ way of looking at things, but it’s not critical and skeptical in the scientific sense.

Where it goes wrong is when you assume that the possible problems you unearth in your close reading of discourses are the only possible explanation, and therefore accept them as the truth. I am not sure if the original postmodern philosophers such as Foucault and Derrida actually meant to take their ‘Theory’ this far. But their successors certainly have.

This is most clear in Critical Race Theory, which builds on the concept of ‘intersectionality’ (coined by Kimberlé Crenshaw). The basic assumption here is that the postmodern ‘Theory’ of a racist system is the actual, real state of the world. Therefore all discourses must be a power-play between races. That assumption is certainly not correct in every situation, and most probably not even in the majority of situations.

The concept of intersectionality itself is another idea that may have some merit, but like the ‘social construct theories’, it does not apply as an absolute truth. As I already said in the previous post, in short, intersectionality says that every person is part of any number of groups (such as gender, sex, sexual preference, race, etc). Therefore the prejudice against a person is also a combination of the prejudice against these groups. For example, a black woman is both black and a woman. Therefore she may receive prejudice for being black and for being a woman. But crucially, she will also receive prejudice for being a black woman. So intersectionality claims that prejudice against people is more than just the sum of the parts of the groups that they are part of. At the ‘intersections’ between groups, there are ‘unique’ types of prejudice felt only by people who are part of both groups.

So far, the concept of intersectionality makes sense. People can indeed be ‘categorized’ into various groups, and will be members of a collection of groups at a time. And some combinations of groups may lead to specific kinds of prejudice, discrimination and whatnot.

However, the problem with intersectionality and Critical (Race) Theory arises when you start viewing this intersectionality as the absolute truth, the entire reality, the one and only system. That is an oversimplification of reality. The common way of viewing people was as individuals: they may be part of certain groups, and may share commonalities with others, but they are still unique individuals, who have their own thoughts and make their own decisions. But viewing people through an intersectional lens turns into identity politics: people are essentially reduced to the stereotype of their intersectional position, and are all expected to think and act alike. And that obviously is taking things a step too far.

Another very serious problem is that instead of looking for rationality, objectivity, or fact, these concepts are denounced. The focus is put on the ‘lived experiences’ (anecdotal evidence) of these groups instead. In the intersectional hierarchy, the ‘lived experience’ of an oppressed group always takes precedence over that of an oppressing group. Therefore, a woman’s word is always to be believed over a man’s word, and a black person’s word is always to be believed over a white person’s word. If a woman says she experienced sexism, then it is considered a fact that there was sexism. If a black person says she experienced racism, then it is considered a fact that there was racism. Again, it is obvious how this can lead to false positives or exploitation of the system.

This is also where the system shows some of its obvious flaws and inconsistencies. Namely, these ‘lived experiences’ are subjective by definition, and as such, are viewed through the biased lens of the subject. This is exactly what caused people to develop the scientific method, to try and avoid bias, and reach objective views and rational explanations.

Postmodernism itself is supposed to be highly critical of biased discourses, but apparently bias is suddenly perfectly acceptable, and biased anecdotes are actually considered ‘true’ as long as the biased party is the one that is (subjectively) being ‘oppressed’. You just can’t make sense of this in any way. Intersectionality and Critical Race Theory are built on intellectual quicksand. It doesn’t make sense, and you can’t make sense of it, no matter how hard you try.

A nice example of how this ‘Theory’ can go wrong in practice can be found here, on this chart from New Discourses, under point 3:

[Image: chart from New Discourses]

As you can see, there are only two possible choices to make, and both can be problematized into a racist situation under Critical Race Theory. While these may be *possible* explanations, they aren’t necessarily correct. There are plenty of alternative, non-racist explanations possible. But not under Critical Race Theory.

And that is a huge problem: CRT sees racism everywhere, so you will run into a number of false positives. That does not seem very scientific. The only scientific value that postmodernist approaches could have is to search for possible hypotheses. But you would still need to actually research these scientifically, in order to find out if they are correct. Instead, they are ‘reified’: assumed to be true. CRT assumes that “the system” is racist, and that white people have all the power, by definition. An assumption, not a proven hypothesis. An assumption that you are unable to prove scientifically, because the evidence simply is not there.

Woke in practice

First of all, perhaps I should define ‘Woke’ as an extreme form of political correctness. A lot of things are ‘whitewashed’ in the media by either not reporting them at all, or reporting them in a very biased way with ‘coded language’. On the other hand, some things are ‘blackwashed’ (is that even a term?) by grossly overstating things, or downright nefarious framing of things.

Now, one thing that really rubs me the wrong way, to say the least, is the way World War II, Hitler, Nazis, fascism etc. are being used in today’s discourse. And it only strengthens the view that we in Europe already had of the US: these people seem to have little or no clue about history or the rest of the world.

And I say “Europe” because that’s how they look at us. As if we’re just one country, like the US, and the actual countries in Europe are more like different ‘states’. In this Woke-era, it’s important to note that Europe is nothing like that. For starters, nearly every country has its own language. So as soon as I cross a border, it immediately becomes difficult to even talk to other people. And there are far more differences. Countries in Europe still have their own unique national identities, ethnicities if you like. And Europe is a very old continent, like Africa. So long before there were ‘countries’ and ‘borders’, there were different tribes, that each had their own unique languages and identities, ethnicities. There’s even a Wikipedia page on the subject (and also for Africa).

Of course, this also leads to people having stereotypes of these different countries, and making fun of them, or there being some kind of rivalry between them. Things that the Woke would probably call ‘racism’. Except, to the Woke, they’re all ‘white’ and ‘European’, or ‘black’ and ‘African’. So apparently there is a complexity to the real world that they just don’t understand. Probably because their country is only a few hundred years old, and only has a single language, and (aside from Native Americans) never had any tribes to speak of. All ethnicities just mostly blended together as they came from Europe and Africa, and settled in America, taking on the new American identity.

Speaking of getting things completely wrong… Apparently Americans refer to white people as ‘Caucasian’. The first time I heard that was on some TV show, I suppose in a description of a suspect or such: “Middle-aged male, Caucasian…” So I was surprised. What did they mean by ‘Caucasian’? I thought they meant he was probably of Russian descent or such, because it referred to the Caucasus, a mountain region between Russia and Georgia. But when I looked it up, apparently it was a name used for ALL white people. Which NOBODY else uses.

If you look into the history of the term ‘Caucasian’, things get interesting. Apparently somewhere in the 18th century, anthropologists thought that there were three main races: ‘Caucasian’, ‘Mongoloid’ and ‘Negroid’. This theory has long been considered outdated, but apparently that didn’t stop Americans from using the term. And in fact, aside from wrongly using the term ‘Caucasian’ to denote ‘white skin colour’, there is some connotation attached to the term as well. Caucasians, or more specifically the ‘Circassian’ subtype of Caucasian people, were seen as the ‘most beautiful humans’ in some pseudoscientific racial theory. Well, from that sort of crazy stereotype, it’s only a small step towards ‘white supremacy’ I suppose.

Because, let me make this clear… To me, the only race that exists is the ‘human race’. As someone with a background in science/academia, I clearly support Darwin’s theory of evolution as the most plausible explanation we have (as does a large part of the Western world; the US is perhaps an exception, because it’s still quite religious, and people still believe in creationism, making evolution controversial there. It is not even remotely controversial in Western European countries). Combining archaeological findings of human fossils with evolution, the history of human life goes back to Africa, where humans evolved from apes.

Over time, these humans spread across the entire globe, and groups of humans in different parts of the world would continue to evolve independently. This led them to adapt to their local environment, which explains why humans in the north developed lighter skin. In the north there was less sun, therefore less UV exposure and less vitamin D production, which meant that less melanin was required. So while evolution in Africa prevented genetic variations with less melanin from being successful, in other areas of the world this constraint no longer held. Variation in eye and hair colour can be explained in a similar way, as these are also dependent on genetic variations and melanin levels.

So, this means that we are all descended from African people. It also means that skin colour variations are purely an adaptation to the environment, which can in no way be linked to any kind of perceived ‘superiority’ in terms of intelligence, behaviour or anything else. Skin colour is just that: skin colour.

What’s more, as humans developed better ways to travel, different groups that had evolved independently for many years would interact with each other again, so these separate evolutionary gene pools were mixed together again. So aside from any kind of ‘race’ based on skin colour being just some arbitrary point in evolution, even if you were to take such an arbitrary point in history, in practice most people would be a blend of these various arbitrary race definitions. For example, although the Neanderthal people are extinct, they have mixed with ‘modern’ humans, so various groups of people, mainly in Europe and Asia today, still carry certain Neanderthal-specific genes. It is believed that a genetic risk factor for Covid-19 can be traced back to these Neanderthal genes, for example.

The Neanderthals were a more primitive species of humans. It is not even clear whether they were capable of speech at all. Modern man is of the species Homo sapiens. And since Neanderthals never lived in Africa, they never mixed with African Homo sapiens. So African (‘black’) people are genetically the most ‘pure’ modern humans. European (‘white’), Asian and even Native American people carry the more primitive Neanderthal genes. So if you want to make any kind of ‘racial argument’, then based on the gene pool, ‘white superiority’ is a strange argument to make. After all, white people carry genes from a more primitive, archaic, extinct human species. Being extinct is hardly ‘superior’.

But there’s also a lot of more recent mixing of genes. Because what some people call ‘white’ is basically everyone with a light skin colour. But that includes people with all sorts of different eye colours, hair colours, and also hair styles (straight, curly, frizzy etc). Which indicates that various gene pools, presumably from groups of people that evolved independently, have been mixed. To give a recent example, take the recently deceased guitar legend Eddie Van Halen. People may judge him as ‘white’, based on his appearance. But actually his mother was from Indonesia, so Asian. You see how quickly this whole ‘race’ thing goes bad. If you can’t even tell from the appearance of a ‘white’ person that one of his parents was of a different so-called ‘race’, then imagine how hard it is to tell whether a ‘white’ person had any ancestry of a different ‘race’ two or more generations back.

So this whole idea of ‘race’ is just pseudoscience. It’s a social construct. Which is quite ironic, given that currently the Woke ‘antiracists’ are pushing a racial ideology. Which brings me closer to what I wanted to discuss. Because who were the last major group to push a pseudoscientific racial ideology? That’s right, the Nazis. They somehow believed that the “Aryan race” was superior to all others, and the Jews were the worst. Their interpretation of what ‘Aryan’ was, was basically white European people, ideally with blue eyes and blond hair. So in other words, it was basically a form of ‘white supremacy’. The Nazi Germans thought they were the ‘chosen people’, and since they considered themselves superior, obviously they had to take over the world.

Now, what the Americans need to understand is that although most of Europe was white, and a large part of the population could pass for their idea of ‘Aryan’, they certainly were not interested in these ideas. The Germans went along because of years of propaganda and indoctrination by the Nazis. And even then, many Germans only went along because they were under a totalitarian regime, and they had little choice. It is unclear how many Germans outside the Nazi party itself actually subscribed to the Nazi ideology. Germany also didn’t have a lot of allies in WWII (and even though Italy was also fascist, and was an ally, they were actually reluctant to adopt the racist ideology. Racism was not originally part of fascism. It was Hitler who added the racist element, and pressured Mussolini into adopting it).

Which explains why WWII was a war: Germany actually had to invade most countries in order to push their Nazi ideology and get on with the Holocaust. Even then, there was an active resistance in many occupied countries, which tried to hide Jews and sabotage the Germans.

My country was one of those, and it still bears the scars of the war. Various cities had parts bombed. My mother lived in a relatively large house, which led to a German soldier being stationed there for a while (presumably to make sure they were not trying to hide Jews in the house). Concentration camps were built here, some of which are still preserved today, lest we forget.

And obviously WWII was not won by the Nazis. The Allies, who were again mostly white Western nations, clearly did not approve of the Nazis and their genocide.

So, given this short European perspective on WWII-related history, hopefully you might understand that terms like ‘Nazi’, ‘fascist’, ‘white supremacy’ and antisemitism resonate deeply with us, in a bad way.

And these days, a lot of people just use these terms gratuitously, mainly to insult people they don’t agree with, and dehumanize them (which is rather ironic, as this is exactly what the Nazis did to the Jews). Hopefully you understand that we take considerable offense at this.

And if you think it’s just extreme, activist people, guess again. It even includes people who should know better, and should be capable of balanced, rational thought. Such as Alec Watson of Technology Connections.

I give you this Twitter discussion:

This was related to the ‘mostly peaceful protests’ in Portland, as you can see. Clearly I did not agree with the quoted tweet, because it presented a false dichotomy: yes, government should be serving the people, but there are certain cases where it may be justified to beat people up on city streets (in order to serve the people). Namely, to stop rioters/domestic terrorists or otherwise violent groups. In Europe we are very familiar with this sort of thing, mostly with the removal of squatters from occupied buildings (who tend to put up quite violent protests) or when groups of fans from different sports teams attack each other before, during or after a game.

After all, that is the concept of the ‘monopoly on violence’ that the government has, through organizations such as the police and the army. We have very strict laws on guns and other arms, so we actually NEED the government to protect law-abiding citizens from violent/criminal people. Therefore, beating people up on the streets is perfectly fine, if that is what it takes to stop and arrest these people, in order to protect the rest.

So what I saw happening in Portland was a perfectly obvious situation where the government should stop these riots with force. Nothing wrong with beating up people who were trying to set a police station on fire, and throwing fireworks at the police etc. They were being violent and destroying property.

But debate ensued about that as well. Apparently Alec and other people did not consider destruction of property to be ‘violence’. That is funny, since you can find dictionary definitions that do. Apparently the meaning of words is being redefined here. Postmodernism/Wokeism at play. Aside from that, there are laws that state that the government needs to protect the people AND their property.

They were in denial about the destruction anyway, so I had to link to some Twitter feeds from people who reported on it, such as Andy Ngo and Elijah Schaffer. But as you can see, even then they were reluctant.

The conversation turned to Antifa and how they were fighting ‘fascists’. This is perhaps a good place for the second episode of Western European history. The history of Marxism and communism.

Because as you might know, Marxism was developed by the Germans Karl Marx and Friedrich Engels in the 19th century, most notably through the publication of The Communist Manifesto and the book Das Kapital. Various communist parties were formed in various European countries, which aimed to introduce communism by means of a revolution. The first successful revolution occurred in 1917 in Russia by the Bolsheviks, led by Vladimir Lenin. In 1922 they formed the Soviet Union, which gradually expanded communism to other countries, most notably after WWII. Namely, after Germany tried to invade the Soviet Union, Stalin pushed back hard, and eventually moved all the way up to Berlin, causing Hitler to commit suicide and forcing the Nazis to capitulate, before the Western Allies arrived.

Effectively, Soviet forces now occupied large parts of Eastern Europe, including a large part of Germany itself. Stalin converted these parts to communism and made them into satellite states of the Soviet Union. This also led to Germany being split up into the Western Bundesrepublik Deutschland and the Eastern Deutsche Demokratische Republik (the communist satellite state).

This lasted up to the early 90s. Which means that a considerable amount of European people either lived under communism, or lived near countries under communism. These communist countries were sealed off from the outside world, with the most notable example being the Berlin Wall. They were totalitarian states.

After this short introduction, now to get back to Antifa, which originally started in the 1920s in Germany. Which was around the same time that fascism arose in Europe.

Fascism started in Italy, under Mussolini, and was later adopted by Hitler. They had political parties with their own mobs/paramilitary groups, like a sort of ‘private army’ to intimidate political opponents, and eventually get into power. Also of note is that they initially identified themselves as leftist/socialist (Nazi is short for Nationalsozialismus, the political identity of the NSDAP, the Nationalsozialistische Deutsche Arbeiterpartei). They were later classified as far-right, mainly because of their extreme nationalism, and not because of their economic policies.

Communist parties used similar mob/paramilitary tactics, in order to organize their revolution and overthrow the government. Essentially both are domestic terrorists. This more or less made communists and fascists ‘natural enemies’. They also bear remarkable resemblance in many ways. Not only the mob tactics, but also the use of propaganda, and eventually establishing a totalitarian state, without much room for individuals and their opinions. Everything had to be regulated, including the media, arts, music etc.

Cynically, one could say that communists and fascists are two sides of the same coin. Their tactics and goals are mostly the same; they only apply a slightly different ideology, either Marxism or Nazism. Both types of regimes caused millions of deaths. Communism even far more than Nazism, because it was more widespread and lasted longer. And not just in Russia either. The same happened in China and Cambodia, for example. Dissidents had to be eliminated, which led to genocide.

The original German Antifa was ended in 1933 when the Nazis rose to power. Nazism ended in 1945, when WWII ended. Interestingly enough, the totalitarian regime in the communist states kept the idea alive that fascism was still alive in the Western states. And while the actual goal of the Berlin Wall was to keep people from escaping the dreadful DDR and reach the free BRD, they fed the people propaganda that the wall was put up in order to keep the fascists out (who, as already stated, didn’t exist anymore. But since the state controlled the media, their citizens had no idea about that, and only ‘knew’ what propaganda they were fed by the state).

And that brings me back to the current Antifa, which started in Portland. Ever since Trump started running for president, his opponents have tried to frame him as far-right, racist, white supremacist, fascist and whatnot. Technically, he is none of these things. The only thing that is somewhat accurate is that he is clearly a right-wing politician, both economically and in his nationalist focus. To what extent that is actually ‘far-right’ is debatable.

But everything else just seems to be propaganda and gaslighting. He neither says nor does racist things, no signs of white supremacy either, and clearly he’s not a fascist. Mussolini and Hitler were ‘technically’ chosen democratically, but actually used mobs to intimidate political opponents (and in Hitler’s case, there were also a number of assassinations, in the Night of the Long Knives). Trump did none of these things. He was democratically chosen by the people, without any kind of intimidation, he hasn’t had anyone assassinated in order to get to power, or expand his power, or anything. He merely tries to implement his policies on healthcare, the economy, the environment and such. That is what presidents do.

He may be a lot of things (a populist, narcissistic, rude, anti-scientific etc), but he is not ‘the new Hitler’ or anything. He certainly hasn’t pushed any kind of racist ideology, let alone changed laws to that effect. He also has not made major changes to the law to create a totalitarian regime or anything (if he had, Antifa would have been eliminated quickly. Instead, most rioters are not even arrested at all, and the ones that are tend to get little or no sentence. Fascism is far more deadly than that, idiots. You wouldn’t live to tell). So in no way does it look anything like fascism. What fascists is Antifa fighting? None, they’re gaslighting you.

Getting back to the discussion with Alec… I tried to make the point that Antifa (based on communism and fascism being two sides of the same coin) was acting far more fascist than any other group in the US at this time. They are the ones going out on the streets in large mobs, intimidating people with ‘the wrong opinion’, destroying property, looting, arson etc. Look up what fascists did in Italy and Germany, or what communist revolutionaries did in Russia, China etc. That looks nothing like what the Trump administration is doing, and everything like what Antifa is doing.

You’d have to be really stupid to not be able to look beyond the obvious ploy of calling an organization “Anti-Fascism”. It’s called Anti-Fascism, so it can’t be fascism, right? Wrong. It can, and it is. This is domestic terrorism, by the book. And like many terrorist organizations, they aren’t officially organized, but operate more in individual ‘cells’, making them harder to track.

But apparently Alec was so gaslit that he claimed that fascism didn’t mean what I think it means (as in: the proper definition found in many history books, encyclopedias etc). Because ‘words can change meaning over time’. There we are, postmodernist/Wokeist word games again. Words have meaning; you can’t just change them. Fascism clearly describes a movement that historically started with Mussolini, and pretty much ended after WWII. The term ‘fascism’ has since mainly been used politically/strategically, to undermine political opponents. Basically applying a Godwin. ‘Fascist’ has now come to mean “anyone that Antifa disagrees with”, or even “anyone that left-wing oriented people disagree with”.

Nobody has referred to themselves as ‘fascist’ since, and no regime or political movement has officially been labeled ‘fascist’ by anyone. We certainly don’t label the Trump administration a fascist government in Europe (or totalitarian, dictatorial, racist, or whatever else). But such labels are apparently used freely in the US itself by the left (even including prominent Democrats, all the way up to Biden), in order to take down the Trump administration. I think we are in a better position to judge that from the outside than the people who’ve been under the influence of the propaganda machine for years.

And of course, no actual debate was possible, so when I didn’t fall for the superficial word games, he just blocked me. Possibly because the ideas of Critical Race Theory and intersectionality have become mainstream, it appears that nuance has disappeared from debate. Instead, everything is very polarized. It is all black-and-white, nothing in between. It is all or nothing. Debates rarely go into actual substance and arguments. Messengers are shot and people are labeled as horrible persons for simply having a different opinion.

This exchange is what originally got me to write the previous blog post. I wasn’t expecting even people from ‘my neck of the woods’ (techy/nerdy/science-minded people) to buy into this nonsense. In fact, at some point during the exchange I actually said that I thought he would be more rational about this, as his videos show a very rational guy. He actually tried to deny that the videos he makes require rationality, as you can see.

At the time I thought that was rather strange, but now I think I may understand why. Critical Race Theory places things such as ‘rationality’, ‘objectivity’, and science in general under ‘whiteness’. So perhaps that’s why he was trying to deny it. He may have actually believed that he would be a ‘white supremacist’ or ‘racist’ or whatever if he were to admit that he is generally a rational person.

And he wasn’t the only one who ‘went Woke’. There’s someone else in ‘my neck of the woods’. I will not say who it is, because it was a private conversation, whereas the exchange with Alec was public, on Twitter, and is still available for everyone to read. But I can say that it is someone that most people who read this blog will be familiar with.

I can only say: you people are on the wrong side of history. This Woke nonsense is destroying our freedom and our communities. The Woke will force their opinions on you, as a totalitarian system, and if you do not comply, they will shut you out. There is no debate possible, your arguments will not be heard, there is no room for any kind of nuance or anything. Not even with people who you’ve known for years, and who should know better than to think you’re anywhere near a racist, fascist, sexist, homophobe, transphobe or whatever other superficial label they use to deflect any other opinions and shut people out. We are ‘dissidents’, and we must be ‘eliminated’.

Communism failed because it was based on an overly simplified view of the world, one that mainly saw the world as a struggle between classes. It ignored the fact that humans are individuals, and individuals have their flaws and weaknesses. People aren’t all equal, and you can’t force them to be.

The Woke are making a very similar mistake, where Critical Race Theory/Intersectionality is again a very simplified view of the world, only marginally different from the communist one. This time it is seen as a power struggle between various ‘characteristics’ on the intersectional grid (such as gender, race, sexual preference and whatnot). And they again want to make all people equal, this time by forcing equity between groups. Again, this can only be done by force, and will fail, because the view of people is oversimplified, and the intersectional grid is a flawed view of society and humanity.

And I hope I explained why things like ‘white supremacy’ are completely foreign to us Europeans, and how totalitarian regimes, both fascist and communist, hit far closer to home for us. So terms like ‘fascist’, ‘white supremacist’, ‘racist’, ‘Nazi’ etc. are deeply insulting to us. They are also highly disrespectful to the millions of victims of those regimes. In Europe there are still many people who lost a lot of family to the Nazis or the communists. If you really were empathic, as you claim to be, and really were about respect and tolerance, I wouldn’t even have to tell you, because your common sense would have already made you understand how terrible that kind of behaviour would be. But you aren’t. You’re insensitive, ignorant, intolerant excuses for human beings.

Sargon of Akkad (who is also European) also did a similar video on that by the way:


The Cult of Wokeness

As you may know, I do not normally want to engage in any kind of political talk. I’m not entirely sure if you can even call this topic ‘political’, because free speech, science, rationality and objectivity are cornerstones of the Western world, and form the basis of the constitution of most Western countries.

And as you may know, I have spoken out against pseudoscience before. And I have also been critical of deceptive marketing claims and hypes from hardware and software vendors, somewhat closer to home for me, as a software engineer. I value honesty, objectivity, rationality and science, because they have brought us so much over the course of history, and they can bring us so much more in the future (and with ‘us’ I mean all of humanity, because I am a humanist).

However, in current times it seems that these values have come under pressure, from a thing known as cancel culture. To make a very long and complicated story short, there is currently a “Woke” cult, which bases itself on identity politics and Critical Race Theory. In short, they think within a hierarchy of oppressors and oppressed identity groups. Any ‘oppressing group’ is not allowed to have any say or opinion on any ‘oppressed group’. That is their simplistic view of ‘social justice’, ‘racism’, ‘sexism’ and related topics.

It is somewhat of a combination of postmodernist thinking and neo-Marxism. It is rather difficult to explain it all in just a few sentences, but the basic concept is that they see everything as a ‘social construct’. So man-made. Which also means that they can ‘deconstruct’ these things. They see language as a way to construct and deconstruct things. Basically, society works a certain way because of human behaviour, and language is a big part of that. By redefining language, you can ‘deconstruct’ certain behaviour, if that makes sense. It is pseudoscience of course.

A common example is the redefinition of ‘racism’, into something that is defined by what the ‘victim’ experiences. By turning this definition around, they can now argue that you can be racist even if you didn’t intend to, because that no longer matters. If someone claims they have ‘experienced racism’, then it is true, and you must be a racist. They extend this to a concept of ‘institutional racism’, where just as with racism, it’s never entirely clear what an ‘institution’ is, but again it does not matter, because as long as a ‘victim’ has ‘experienced institutional racism’, then it must be true, and therefore institutional racism must exist, even if it can’t or won’t be defined.

In general that is the modus operandi of this Woke cult: they favour feelings and emotions over facts. In other words, they value subjectivity over objectivity. I suppose you understand how that affects the world as we know it, especially science and technology. This can go as far as not accepting facts, because since objectivity does not exist, facts are always subjective; they are a ‘social construct’ as well. They claim that other people can have ‘other ways of knowing’ (which is basically a way of saying ‘magic’). Recently, there was even a discussion of how “2+2=4” is not always true. For some people it could be “2+2=5”.

This is just a short introduction, but I urge you to dig into this more. There are various online sources. A good starting point is the site New Discourses. Another good source is Dr. Jordan B. Peterson. He has put up a short page on postmodernism and Marxism on his website. You can also find various of his talks on the subject on YouTube and such.

Online there are many Social Justice Warriors who will attack anyone with a wrong ‘opinion’. They don’t do this by using free speech, as in engaging in a conversation and exchanging viewpoints. They do this by basically drowning out these people. A mob mentality. They try to ‘cancel’ these people, to deplatform them.

This also leads to virtue signalling, where people post certain opinions for no apparent reason, to show they’re ‘on the good side’ (probably because they’re afraid to get cancelled themselves).

I started noticing that last thing on Twitter over the past year or so. I mostly follow tech-related people. And it occurred to me that quite a few people would post rainbow flags, and discuss trans rights and things. So I started wondering “why are they doing this? Are there so many gay/trans people in tech? I have been following this person for quite a while, and afaik they’re neither gay nor trans or anything, so what gives?”

Apparently this Woke cult has been growing in the liberal arts colleges in the USA for many years, and it is now coming out, and trying to take over the world (some ‘academics’ are part of this: they have the credentials, but their work does not meet scientific standards, for example Robin DiAngelo and her book “White Fragility”). The Black Lives Matter movement and Antifa are the most visible manifestations of this cult at the moment. And they are trying to deconstruct many parts of society.

They want to ‘decolonize’ society, and are even attacking things like mathematics now. They claim it is a ‘social construct’ to manifest white supremacy. They want to remove the objectivity and ‘rehumanize’ mathematics. Does that sound crazy? Yes, it does. But I’m not making this up, as you can see.

Mathematics is perhaps the most abstract phenomenon you can think of, and is completely unbiased towards any human. It is just pure logic and facts. It led to computers, which use mathematics to perform all sorts of tasks, again purely with logic (arithmetic) and facts (data). Entirely unbiased towards any human. And now you are proposing to look at the race and/or (ethnic) background of children to somehow teach them different kinds of mathematics? Firstly, that’s a racist thing to do. Secondly, it destroys mathematics, because it will no longer be a universal, unbiased language. The paper claims that it is merely a myth that mathematics is objective and culture-free. Yet it gives no explanation whatsoever, let alone a proof that this would be a myth.

If anything, I’d say there’s plenty of proof around. So much of our technology works on the basis of mathematical principles. And that same technology works all over the world. There are people all over the world who understand these same mathematical principles, regardless of their race, background, culture or anything else.

The issue with these things is that from a distance, they sound noble, but when you dig deeper, things are not quite what they seem. Eventually, most people will (hopefully) reach their Woke breaking point. Make sure you know your boundaries, and know when those lines are crossed, and act accordingly.

Anyway, there are many different instances of this Woke cult, and we have to stop it. We have to prevent it from taking over our world and destroying everything we’ve worked so hard to create. So, if you were not aware yet, then hopefully you are now, and hopefully you understand that you need to get to grips with what this Woke cult is, so that you can recognize it. Note that it is also very much in the mainstream media these days. Look out for things like ‘diversity’ and ‘inclusivity’.

The New York Times, for example, is a very clear example of a media outlet that has been taken over entirely by the Woke cult. Bari Weiss resigned there recently, and she published her resignation letter, which speaks volumes. You can also see it at the Washington Post and many other papers. It’s also at CNN, for example. Once you get a feeling for what to look for, you should pick up on Woke media quickly. They basically all have a single viewpoint, and their articles are completely interchangeable. There are no real opinion pieces anymore, just propaganda.

It’s gone so far that some media, most notably the Australian Spectator, are actually promoting themselves as “Woke-free” media:

So, let us fight the good fight, for all of humanity!


The strong ARM

I’ve done some posts on x86 vs ARM over the years, most recently on the new Microsoft Surface Pro X, which runs a ‘normal’ desktop version of Windows 10 on an ARM CPU, while also supporting x86 applications through emulation. This basically means that Microsoft is making ARM a ‘regular’ desktop solution that can be a full desktop replacement.

Rumours of similar activity in the Apple camp have been going around for a while as well. Ars Technica has run a story on it now, as it seems that Apple is about to make an official announcement.

In short, Apple is planning to do the same as Microsoft: instead of having their ARM devices as ‘second class citizens’, Apple will make a more conventional laptop based on a high-end ARM SoC, and will run a ‘normal’ version of macOS on it. So again a ‘regular’ desktop solution, rather than the iOS that current ARM devices run, which cannot run regular Mac applications. At this point it is not entirely clear whether these ARM devices can also run x86 applications. However, in the past, Apple did exactly that, to make 68k applications run on the PowerPC Macs, for a seamless transition. And they offered the Rosetta environment for the move from PowerPC to x86.

Aside from using emulation/translation to run applications as-is, however, they also offered a different solution: they provided developers with a compiler that would generate code for multiple CPU architectures in a single binary (so both 68k and PPC, or both PPC and x86), a so-called Fat binary or Universal binary. The downside of this solution is of course that it requires applications to be compiled with this compiler, which rules out any x86 applications currently on the market.
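To make the fat/universal binary idea a bit more concrete, here is a minimal C++ sketch. The clang++ invocation in the comment is an assumption based on how current macOS toolchains accept multiple -arch flags, not something taken from Apple's announcement; the program itself just reports which architecture slice it was compiled for.

```cpp
// Minimal sketch of what code inside a fat/universal binary can do: the same
// source is compiled once per architecture, and the slices are packed into one
// executable. Assumed build command on a modern macOS toolchain (not from the
// article):
//   clang++ -arch x86_64 -arch arm64 main.cpp -o universal_app
// The OS loader then picks the slice that matches the CPU it is running on.
#include <cstdio>

int main() {
#if defined(__x86_64__)
    std::printf("Running the x86_64 slice of this binary.\n");
#elif defined(__aarch64__) || defined(__arm64__)
    std::printf("Running the ARM64 slice of this binary.\n");
#else
    std::printf("Running on some other architecture.\n");
#endif
    return 0;
}
```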

In this sense it does not help that Intel is still struggling to complete their move from 14nm to 10nm and beyond. Apple can have its ARM SoCs made on 7nm, which should help to close the performance gap between ARM and high-end x86. I suppose that means that Intel will have to earn its right to be in Macs from now on. If Intel can maintain a performance benefit, then x86 and ARM can co-exist in the Mac ecosystem. But as soon as x86 and ARM approach performance parity, then Apple would have little reason to continue supporting x86.

Interesting times ahead.


Batch, batch, batch: Respect the classics!

Today I randomly stumbled upon some discussions about DirectX 12, Mantle and whatnot. It seems a lot of people somehow think that the whole idea of reducing draw call overhead was new for Mantle and DirectX 12. While some commenters managed to point out that even in the days of DirectX 11, there were examples of presentations from various vendors talking about reducing draw call overhead, that seemed to be as far back as they could go.

I, on the other hand, have witnessed the evolution of OpenGL and DirectX from an early stage. And I know that the issue of draw call overhead has always been around. In fact, it really came to the forefront when the first T&L hardware arrived. One example was the Quake renderer, which used a BSP tree to effectively depth-sort the triangles. This was a very poor case for hardware T&L, because it created a draw call for every individual triangle. Hardware T&L was fast if it could process large batches of triangles in a single go. But the overhead of setting the GPU up for hardware T&L was quite large, given that you had to initialize the whole pipeline with the correct state. So sending triangles one at a time in individual draw calls was very inefficient on that type of hardware. This was not an issue when all T&L was done on the CPU, since all the state was CPU-side anyway, and CPUs are efficient at branching, random memory access etc.

This led to the development of ‘leafy BSP trees’, where triangles would not be sorted down to the individual triangle level. Instead, batches of triangles were grouped together into a single node, so that you could easily send larger batches of triangles to the GPU in a single draw call, and make the hardware T&L do its thing more efficiently. To give an idea of how old this concept is, a quick Google drew up a discussion on BSP trees and their efficiency with T&L hardware on Gamedev.net from 2001.
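To make the difference concrete, here is a minimal C++ sketch with hypothetical types; it is not taken from Quake or any real engine, it just shows the structural difference between a per-triangle BSP node and a 'leafy' node whose leaf holds a whole batch that can be submitted in a single draw call.

```cpp
// Hypothetical sketch: classic per-triangle BSP nodes versus 'leafy' BSP nodes
// whose leaves hold whole batches of triangles. With the leafy variant, each
// visible leaf can be sent to the GPU as one draw call instead of hundreds.
#include <cstdint>
#include <vector>

struct Plane    { float a, b, c, d; };        // splitting plane
struct Triangle { uint32_t i0, i1, i2; };     // indices into a shared vertex buffer

// Classic BSP: one triangle per node -> traversal produces one tiny draw per triangle.
struct ClassicBspNode {
    Plane split;
    Triangle tri;
    ClassicBspNode* front = nullptr;
    ClassicBspNode* back  = nullptr;
};

// Leafy BSP: interior nodes only partition space; leaves store a batch of
// triangles that is submitted in a single draw call (e.g. one indexed draw).
struct LeafyBspNode {
    Plane split;
    std::vector<Triangle> batch;              // filled only in leaf nodes
    LeafyBspNode* front = nullptr;
    LeafyBspNode* back  = nullptr;
    bool isLeaf() const { return front == nullptr && back == nullptr; }
};

int main() { return 0; }  // structures only; actual rendering code is out of scope here
```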

But one classic presentation from NVIDIA that has always stuck in my mind is their Batch Batch Batch presentation from the Game Developers Conference in 2003. This presentation was meant to ‘educate’ developers on the true cost of draw calls on hardware T&L and early programmable shader hardware. To put it in perspective, they use an Athlon XP 2700+ CPU and a GeForce FX5800 as their high-end system in that presentation, which would have been cutting-edge at the time.

What they explain is that even in those days, the CPU was a huge bottleneck for the GPU. There was so much time spent on processing a single call and setting up the GPU, that you basically got thousands of triangles ‘for free’ if you would just add them to that single call. At 130 triangles or less, you are completely CPU-bound, even with the fastest CPU of the day.

So they explain that the key is not how many triangles you can draw per frame, but how many batches per frame. There is quite a hard limit to the number of batches you can render per frame at a given framerate. They measured about 170k batches per second on their high-end system (and that was a synthetic test doing only the bare draw calls, nothing fancy). So if you assume 60 fps, you’d get 170k/60 = 2833 batches per frame. At one extreme of the spectrum, that means that if you only send one triangle per batch, you could not render more than 2833 triangles per frame at 60 fps. And in practical situations, with complex materials, geometry, and the rest of the game logic running on the CPU as well, the number of batches will be a lot smaller.

At the other extreme however, you can take these 2833 batches per frame, and chuck each of them full of triangles ‘for free’. As they say, if you make a single batch 500 triangles, or even 1000 triangles large, it makes absolutely no difference. So with larger batches, you could easily get 2.83 million triangles on screen at the same 60 fps.
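To make the arithmetic explicit, here is a small, self-contained C++ sketch. The 170k batches/second figure is the number quoted above from the presentation; the batch sizes are just illustrative values.

```cpp
// Batch-budget arithmetic from the 'Batch Batch Batch' numbers quoted above:
// the CPU-side draw-call rate caps how many batches fit in a frame, and the
// batch size then determines how many triangles you get for that fixed cost.
#include <cstdio>

int main() {
    const double batchesPerSecond = 170000.0;  // measured synthetic draw-call rate
    const double targetFps        = 60.0;

    // Hard per-frame budget of draw calls at the target framerate (~2833).
    const double batchesPerFrame = batchesPerSecond / targetFps;

    // Same CPU cost per frame, very different triangle counts.
    const int batchSizes[] = { 1, 130, 500, 1000 };
    for (int size : batchSizes) {
        std::printf("batch size %4d -> %9.0f triangles per frame at %.0f fps\n",
                    size, batchesPerFrame * size, targetFps);
    }
    return 0;
}
```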

And even in 2003 they already warned that this situation was only going to get worse, since the trend was, and still is, that GPU performance scales much more quickly than CPU performance over time. So basically, ever since the early days of hardware T&L, the whole CPU overhead problem has been a thing. Not just since DirectX 11 or 12. These were the days of DirectX 7, 8 and 9 (they included numbers for GeForce2 and GeForce4MX cards, which are DX7-level; they all suffer the same issue. Even a GeForce2MX can do nearly 20 million triangles per second if fed efficiently by the CPU).

So as you can imagine, a lot of effort has been put into both hardware and software to try and make draw calls more efficient. Like the use of instancing, rendering to vertex buffers, supporting texture fetches from vertex shaders, redesigned state management, deferred contexts and whatnot. The current generation of APIs (DirectX 12, Vulkan, Mantle and Metal) are another step in reducing the bottlenecks surrounding draw calls. But although they reduce the cost of draw calls, they do not solve the problem altogether. It is still expensive to send a batch of triangles to the GPU, so you still need to feed the data efficiently. These APIs certainly don’t make draw calls free, and we’re nowhere near the ideal situation where you can fire off draw calls for single triangles and expect decent performance.

I hope you liked this bit of historical perspective. The numbers in the Batch Batch Batch presentation are very interesting.


Politicians vs entrepreneurs

Recently the discussion of a newly published book caught my attention. The book investigated some of the ramifications of the financial crisis of 2007-2008. Specifically, it investigated how a bank received government support. This was done in the form of the government buying a controlling stake in the bank, and also installing a CEO. This CEO was a politician.

In short, the story goes that initially he did a good job, by carefully controlling the bank’s spending and nursing the bank back to health. However, as time went on, the bank was ready to grow again, and invest in new projects. This became a problem, partly because of the government as a large shareholder, and partly because of the CEO being a politician. They were reluctant to take risks, which resulted in various missed opportunities. Ironically enough, it also meant that the government could have cashed in their shares at an opportune moment, and they would have had their full investment back, and even a profit. But because the government was reluctant to do so at the time, it is unlikely that they will get another opportunity soon, as the current Corona crisis made the shares drop significantly, and the government would again be at a huge loss.

This in turn led to an internal struggle between the CEO and other members of the board, who wanted ‘real’ bankers, more willing to take risks, and expand on opportunities. Eventually it led to the ousting of the CEO.

What struck me with this story was that I recognize these different management styles in software as well. I’d like to name “Delphi” as a key word here. Back in my days at university, I once did an internship with two other students, at a small company. As a result, Delphi has been on my resume for ages, and I ended up doing projects at various different Delphi-shops. This caused me to realize at some point that you should not put skills on your resume that you don’t want to use.

Why Delphi?

Delphi is just an example I’m giving here, because I have first-hand experience with this situation. There are various other, similar situations though. But what is the issue with Delphi? Well, for that, we have to go back to the days of MS-DOS. A company by the name of Borland offered various development tools. Turbo Pascal was one of them, and it was very popular with hobbyists (and also demosceners). It had a very nice IDE for the time, which allowed you to build and debug from the same environment. Its compile-speed was revolutionary. And in those days, that mattered. Computers were not very fast, and it could take minutes to build even a very simple program, before you could run, test and debug it.

Turbo Pascal was designed to make building and linking as fast and efficient as possible (see also here). Today you may be used to just hitting “build and run in debugger” in your IDE, because it just takes a few seconds, and it’s an easy way to see if your new addition/fix will compile and work as expected. But in those days, that was not an option in most environments. Turbo Pascal was one of the first environments that made this feasible. It led to an entirely different way of developing: instead of meticulously preparing and writing your code to avoid any errors, the compiler became a tool to check for errors.

When the move was made from MS-DOS to Windows, in the 90s, a new generation of Turbo Pascal was developed by Borland. This version of Turbo Pascal was called Delphi. Windows was an entirely different beast from MS-DOS though. DOS itself was written in assembly, and interacting with DOS or the BIOS required low-level programming (API calls were done via interrupts). This, combined with the fact that machines in the early days of DOS were limited in terms of CPU and memory, meant that quite a lot of assembly code was used. Windows however was written in a combination of assembly and C, and its APIs had a C interface.

As a result, not everyone who used Turbo Pascal for DOS, would automatically move to Delphi. Many developers, especially the professional ones, would use C/C++. And for the less experienced developers, there now was a big competitor in the form of Visual Basic. Where Delphi was supposed to promote its IDE and its RAD development as key selling points, Visual Basic now offered similar functionality for fast application development, but combined it with a BASIC dialect, which was easier to use than Pascal, for less experienced developers.

This meant that Delphi quickly became somewhat of a niche product. It was mainly used by semi-professionals: people who couldn’t or wouldn’t make the switch to C/C++, but who were too advanced to be using something like Visual Basic. The interesting thing is that even though I already felt during my internship in the early 2000s that Delphi was a niche product on its way out, it still survives to this day.

Delphi as a product has changed hands a few times. Borland no longer exists. Today, a company by the name of Embarcadero maintains Delphi and various other products originating from Borland, and they still aim to release a new major version every year.

While I don’t want to take away from their efforts (Delphi is a good product for what it is: a Pascal-based programming environment for Windows and other platforms), the fact of the matter is that Embarcadero is a relatively small company, and they are basically the only ones aiming for Pascal solutions. Compare that to the C/C++ world, where there are various different vendors of compilers and other tools, and most major operating systems and many major applications are actually developed with this language and these tools. The result is that interfacing your code with an OS or third-party libraries, devices, services and whatnot is generally trivial and well-supported in C/C++, while you are generally on your own in Delphi.

And that’s just comparing Delphi with C/C++. Modern languages have since arrived, most notably C#, and these modern languages make development easier and faster than Delphi with its Pascal underpinnings. Which is not entirely a coincidence, given that Anders Hejlsberg, the original developer of Turbo Pascal and the lead architect of Delphi, left Borland for Microsoft in 1996, and became the lead architect of C#.

Back to the point

As I said, the use of Delphi can generally be traced back to semi-professional developers who started using Turbo Pascal in DOS. For the small company of my internship that was certainly the case. Clearly, being dependent on Delphi is quite a risk as a business. Firstly because there is only one supplier of your development tools. And development tools need maintenance. It has always been common for Delphi (and other development tools) to require updates when new versions of Windows were released. Since development tools tend to interact with the OS at a relatively low level, to make debugging and profiling code possible, they also tend to be more vulnerable to small changes in the OS than regular applications. So if Embarcadero cannot deliver an update in time, or even at all, you may find yourself in the situation that your application cannot be made to work on the latest version of Windows.

Another risk stems from the fact that Delphi/Pascal is such a niche language. Not many developers will know the language. Most developers today will know C#, Java or C/C++. They can find plenty of jobs with the skills they already have, so they are not likely to want to learn Delphi just to work for you. The developers that remain are generally older, more experienced developers, and their skills are a scarce resource that will be in demand, so they will be more expensive to hire.

This particular company was so small that it was not realistic to expect them to migrate their codebase to another language. The migration itself would be too risky and have too much of an impact. With the amount of development resources they had, it would take years to migrate the codebase (even so, I would still recommend developing new things in C/C++ or C# modules that integrate into the existing codebase, and, whenever there is time, converting relevant existing code to such modules as well, so that eventually a Delphi-free version of the application may be within reach).

However, over time I also worked at other companies that mainly used Delphi. And I’ve come to see Delphi as a red flag. The pattern always appeared to be that just a few semi-professionals with a Turbo Pascal background developed some core technology that the company was built on, and moving to Delphi was the logical/only next step.

Some of these companies ‘accidentally’ grew to considerable size (think 100+ employees), yet they never shook their Delphi roots, even though at that size the risk factor of limited developer and other resources no longer applies. All the other risks do, of course. So it should be quite obvious that action is required to get away from Delphi as quickly as possible.

Politician or entrepreneur?

That brings me to the original question. Because it seems that even though these companies have grown over time, their semi-professionalism from their early Turbo Pascal/Delphi days remains, and is locked into their company culture.

So the people who should be calling the shots don’t want to take any risks, and just want to try and please everyone. The easiest way of doing that is to maintain the status quo. And that sounds an awful lot like a politician. Especially if you factor in that these people are semi-professionals, not true professionals. They may not actually have a proper grasp of the technology their company works with. They merely work based on opinions and second-hand information. They are reactive, not proactive.

Ironically it tends to perpetuate itself, because when that is the company culture, the people they tend to hire will also be the same type of semi-professionals (less skilled developers, project managers without a technical background, etc). Should they ‘accidentally’ hire a true professional/entrepreneur, then this person is not likely to function well in this environment. Those people would want to improve the company, update the culture, and be all they can be. But that may rub too many people the wrong way.

With a true entrepreneur it’s much easier to explain risks and possibilities, and plot a path to a better future. They will be more likely to try new things, and understand that not every idea may lead to success, so they may be more forgiving of experimentation as well (I don’t want to use the word ‘failure’ here, because I think taking risks should not be done blindly. You should experiment and monitor, and try to determine as early as possible whether an idea will be a success or not, so that you minimize the cost of failed ideas).

I think it’s the difference between looking at the past, and trying to hold on to what you’ve got, versus looking to the future and trying to gauge what you can do better, using creativity and innovation. A politician may be good in times of crisis, to try and minimize losses. But they will never bring a company to new heights.

And my experience in such companies is that they still use outdated/archaic tools, and tend to have a very outdated software portfolio: still selling products based on source code that hasn’t had any proper maintenance in over 10 years, and constantly running into issues like moving to Windows 10 or moving to 64-bit, which are not even issues in the first place for other organizations, because they had already updated their tools and codebase long before (for example, C# is mostly architecture-agnostic, so most C# code will compile just fine for 32-bit and 64-bit, x86 or ARM. And since the .NET framework is part of Windows itself, your C# code will generally work fine out-of-the-box on a new version of Windows).

Being reactive only is a recipe for a technical debt disaster. I have experienced that they would not do ANY maintenance on their codebase whatsoever, outside of customer projects. So there was no development or maintenance at all, unless they had a paying customer who specifically wanted a solution. Which also meant that the customer had to pay for all maintenance. That approach obviously was not sustainable, since you could not charge the customers for what it would cost to do proper maintenance and pay off all the technical debt; it would make your product way too expensive. The company actually wanted to have competitive pricing of course, even trying to undercut competitors. And project managers would also want to keep things as cheap as possible, so the situation only got worse over time.

I think Microsoft shows a very decent strategy for product development with Windows. Or at least, they did in the past 20+ years. For example, they made sure that Windows XP was a stable version. They could then move to a more experimental Windows, in the form of Vista, where they could address technical debt, and also add new functionality (such as Media Foundation and DirectX 10). Vista may not have been a huge success, but there was always XP for customers to fall back on. The same pattern repeated with Windows 7 and Windows 8-10. Windows 7 continued what Vista started, but made it stable and reliable for years to come. This again gave Microsoft the freedom to experiment with new things (touch interfaces, integrating with embedded devices, phones, tablets etc, and the Universal Windows Platform). Windows 8 and 8.1 were again not that successful, but Windows 10 is again a stable version of this technology.

So in general, you want to create a stable version of your current platform, for your customers to fall back on. The more stable you make this version, the more freedom you have to experiment with new and innovative ideas, and get rid of your technical debt.

I just mentioned Delphi as an obvious red flag that I encountered over the years, but I’m sure there are plenty of other red flags. I suppose Visual Basic would be another one. Please share your experiences in the comments.

Posted in Software development | 2 Comments

Some thoughts on emulators

Recently I watched Trixter’s latest Q&A video on YouTube, and at 26:15 there was a question regarding PC emulators:

That got me thinking, I have some things I’d like to share on that subject as well.

First of all, I share Trixter’s views in general. Although I am a perfectionist, I am not sure that perfectionism is my underlying reason in this case. I think emulators should strive for accuracy, which is not necessarily “perfection”. It is more of a pragmatic thing: you want the emulator to be able to run all software correctly.

However, that leads us into a sort of chicken-and-egg problem, which revolves around definitions as well. What is “all software”? What does “correctly” mean? And in the case of a PC emulator, there’s even the question of what a “PC” is exactly. There are tons of different hardware specs for PCs, even if you only look at the ones created by IBM themselves. Let alone if you factor in all the clones and third-party addons. I will just give some thoughts on the three subjects here: What hardware? What software? What accuracy/correctness?

What hardware?

While the PC is arguably the most chaotic platform out there, in terms of different specs, we do see that emulators for other platforms also factor in hardware differences.

For example, if you look at the C64, at face value it’s just a single machine. However, if you look closer, then Commodore has always had both an NTSC and a PAL version of the machine. Their hardware was similar, but due to the different requirements of the TV signals, the NTSC and PAL machines were timed differently. This also led to software having to be developed specifically for an NTSC or PAL machine.

As a result, most emulators offer both configurations, so that you can run software targeted at either machine. Likewise, there are some differences between revisions of the chips, most notably the SID sound chip. While they are compatible at the software level, the 6581 version sounds quite different from the later 8580 version. Most emulators therefore let you select from the various chip revisions, so that the sound most closely matches that specific revision of the machine.

The PC world is not like that however. There were so many different clone makers around, and most of their clones were far from perfect, so the number of possible configurations would be impossible to catalogue and emulate. At the same time, the fact that basically no two machines were exactly alike also makes it less relevant to emulate every single derivation. As long as you can emulate one machine ‘in the ballpark’, it gives you essentially the same experience as real hardware did back in the day.

So the question is more about defining which ‘ballparks’ you have. I would say that the original IBM PC 5150 would make a lot of sense to emulate correctly, as a starting point. This is the machine that the earliest software was targeted at, and also the machine that early clones were cloning.

The PC/XT 5160 and 5155 are just slight derivations of the 5150, and the differences generally do not affect software; they only matter for the physical machines. For example, they no longer have the cassette port, and they have 8 expansion slots with a slightly narrower form factor than the 5150 did.

Likewise, because most clones of that era were generally imperfect and could not run all software correctly, they are less interesting as emulator targets.

Another two machines that make an interesting ballpark are the IBM PCjr and the Tandy 1000. They are related to the original PC, but offer extended audio and video capabilities. The Tandy 1000 was more or less a clone of the PCjr, but the PCjr was a commercial flop, while the Tandy 1000 was a huge success. In practice, this means a lot more software targets the Tandy 1000 specifically, rather than the PCjr original.

From then on, the PC standard became more ‘blurred’. Clones took over from IBM, and software adapted to this situation, by being more forgiving about different speeds, or slight incompatibilities between chipsets and such. So perhaps a last ‘exact’ target could be the IBM AT 5170, but after that, just “generic” configurations for the different CPU types (386, 486, Pentium etc) would be good enough, because that’s basically what the machines were at that point.

What software?

For me the answer to this one is simple: One should strive to be able to run all software. I have seen various emulator devs dismiss 8088 MPH, because it is the only software of its kind, in how it uses the CGA hardware to generate extra colours and special video modes. I don’t agree with that argument.

The argument also seems to be somewhat unique to the PC emulator scene. If you look at C64 or Amiga emulators, they do try to run all software correctly. Even when demos or games find new hardware tricks, emulators are quickly modified to support this.

What accuracy/correctness?

I think this is especially relevant for people who want to use the emulator as a development tool. In the PC scene, it is quite common that demos are developed exclusively on DOSBox, and they turn out not to run on real hardware at all. Being able to run as much software as possible is one thing. But emulators should not be more forgiving than real hardware. Code that fails on real hardware, should also fail on an emulator.

An interesting guideline for accuracy/correctness is to emulate “any externally observable effects”. In other words: you can emulate things as a black box, as long as there is no way that you can tell the difference from the outside. At the extreme, it means you won’t have to emulate a machine down to the physical level of modeling all gates and actually emulating the electrons passing through the circuit. Which makes sense in some way, because the integrated circuits that make up the actual hardware are also black boxes to a certain extent. Only the input and output signals can be observed from the outside.

However, that is difficult to put in practice, because what exactly are these “externally observable effects”? It seems that this is somewhat of a chicken-and-egg problem. A definition that may shift as new tricks are discovered. I already mentioned 8088 MPH, which was the first to use the NTSC artifacting in a new way. Up to that point, emulators had always assumed that you could basically only observe 16 different artifact colours. It was known that there was more to NTSC artifacts than just these fixed 16 colours, but because nobody ever wrote any software that did anything with it, it was ignored in emulation, because it was not ‘externally observable’.

Another example is the C64 demo Vicious Sid. It has a “NO SID” part:

It exploits the fact that there is a considerable amount of crosstalk between video and audio in the C64’s circuit. So by carefully controlling the video signal, you can effectively play back controlled audio by means of this crosstalk.

So although it was known that this crosstalk exists, it was ignored by emulators, as it was just considered ‘noise’. However, Vicious Sid now does something ‘useful’ with this externally observable effect, so it should be emulated in order to run this demo correctly. And indeed, emulators were modified to make the video signal ‘bleed’ into the audio, like on a real machine.

This also indicates that there may be various other externally observable effects that are already known, but ignored in emulators so far, just waiting to be exploited by software in the future.

Getting back to 8088 MPH, aside from the 1024 colours, it also has some cycle-exact effects. These too cause a lot of problems with emulators. One reason is the waitstates that can be inserted on the data bus by hardware. CGA uses single-ported memory, so it cannot have both the video output circuit and the CPU access the video RAM at the same time. Therefore, it inserts waitstates on the bus, to block the CPU whenever the output circuit needs to access the video RAM.

This was a known externally observable effect, but no PC emulator ever bothered to emulate the hardware to this level, as far as I know. PC emulators tend to just emulate the different components in their own vacuum. In reality all components share the same bus, and therefore the components can influence each other. It is relevant that waitstates are actually inserted on the bus, and are actually adhered to by other components.
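To make that a bit more concrete, here is a minimal sketch in C of what ‘sharing the bus’ could look like in an emulator. This is purely illustrative and not taken from any actual emulator; all names and numbers are hypothetical. The point is simply that every CPU memory access goes through one shared bus function, and the CGA device gets the chance to insert waitstates that advance the same cycle counter every other component sees.

#include <stdint.h>

/* Hypothetical shared machine state: one cycle counter for the whole system. */
typedef struct {
    uint64_t cycles;           /* ticks of the shared base clock */
    uint8_t  ram[1 << 20];     /* 1 MB address space */
} Machine;

static int is_cga_ram(uint32_t addr)
{
    return addr >= 0xB8000 && addr < 0xC0000;
}

/* Hypothetical CGA model: how many waitstates must the CPU eat for a video RAM
   access right now? A real model would derive this from the position of the
   output circuit in its fetch schedule at m->cycles; this is just a placeholder. */
static int cga_waitstates(const Machine *m)
{
    return (m->cycles % 16) < 3 ? 3 : 0;
}

/* Every CPU memory access goes through the shared bus, so a blocked CGA access
   stalls the CPU by advancing the same cycle counter that everything else uses. */
uint8_t bus_read(Machine *m, uint32_t addr)
{
    if (is_cga_ram(addr))
        m->cycles += cga_waitstates(m);
    m->cycles += 4;            /* nominal cost of a bus cycle (illustrative) */
    return m->ram[addr];
}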

It is also relevant that although the different components may run on the same clock generator, they tend to have their own clock dividers internally, and this means that the relative phase of components to each other should also be taken into account. That is, there is a base clock of 14.31818 MHz on the motherboard. The CPU clock of 4.77 MHz is derived from that by dividing it by 3. Various other components run at other speeds, derived from that same base clock, such as 1.19 MHz for the PIT and 3.58 MHz for the NTSC colorburst and related timings.
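For reference, this is what those divider relationships boil down to; the divisors (3, 12 and 4) are the standard ones for the PC/XT, and the little C program itself is of course just illustrative:

#include <stdio.h>

int main(void)
{
    const double base = 14318180.0;   /* 14.31818 MHz motherboard crystal */

    printf("CPU:        %.6f MHz (base / 3)\n",  base / 3.0  / 1e6);   /* ~4.77 MHz 8088 clock      */
    printf("PIT:        %.6f MHz (base / 12)\n", base / 12.0 / 1e6);   /* ~1.19 MHz timer clock     */
    printf("Colorburst: %.6f MHz (base / 4)\n",  base / 4.0  / 1e6);   /* ~3.58 MHz NTSC colorburst */
    return 0;
}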

We have found during development of 8088 MPH that the IBM PC is not designed to always start in the exact same state. In other words, the dividers do not necessarily all start at the same cycle on the base clock, which means that they can be out of phase in various ways. The relative phase of mainly CPU, PIT and CGA circuit may change between different power-cycles. In 8088 MPH this leads to the externally observable effect that snow may not always be hidden in the border during the plasma effect. You can see this during the party capture:

The effect was designed to hide the snow in the border. And during development it did exactly that. However, when we made this capture at the party, the machine had apparently been powered on in one of the states where the waitstates were shifted to the right somewhat. There are normally two ‘columns of snow’ hidden in the border. But because of this phase shift, the second column of snow was now clearly visible on the left of the screen.

We did not change the code for the final version. But since we were now aware of the problem, we just power-cycled the machine until it was in one of the ‘good’ phase states before we did the capture (it is possible to detect which state the system is in via software. As far as we know it is not possible to modify this state through software though, so only a power-cycle can change it):

So in general I think this really is a thing that emulators MUST do: components should really interact with each other, and the state of the bus really is an externally observable effect. As is the clock phase.

For most other emulators this is a given, because software on a C64, Amiga, Atari ST or various other platforms tends to have very strict requirements for timing anyway. More often than not, software will not work as intended if the emulation is even a single cycle off. For PCs it is not that crucial, but I think that at least for the PC/XT platforms, this exact timing should be an option. Not just for 8088 MPH, but for all the cool games and demos people could write in the future, if they have an emulator that enables them to experiment with developing this type of code.

Related to that is the emulation of the video interface. Many PC emulators opt for just emulating the screen one frame at a time, or per-scanline at best. While this generally ‘works’, because most software tries to avoid changing the video state mid-frame or mid-scanline, it is not how the hardware works. If you write software that changes the palette in the middle of a scanline, then that is exactly what you will see on real hardware.

Because at the end of the day, let’s face it: that is how these machines work. You should emulate how the machine works. And this means it is more than the sum of its parts. Emulating only the individual components, while ignoring any interaction, is an insufficient approximation of the real machine.

Posted in Oldskool/retro programming | Leave a comment

Windows and ARM: not over yet

As you may recall, I was quite fond of the idea of ARM and x86 getting closer together, where on the one hand, Windows could run on ARM devices, and on the other hand, Intel was developing smaller x86-based SOCs in their Atom line, aimed at embedded use and mobile devices such as phones and tablets.

It has been somewhat quiet on that front in recent years. On the one hand because Windows Phones never managed to gain significant marketshare, and ultimately were abandoned by Microsoft. On the other hand because Intel never managed to make significant inroads into the phone and tablet market with their x86 chips either.

However, Windows on ARM is not dead yet. Microsoft recently announced the Surface Pro X. It is a tablet, which can also be used as a lightweight laptop when you connect a keyboard. There are two interesting features here though. Firstly the hardware, which is not an x86 CPU, as in previous Surface Pro models. This one runs on an ARM SOC. And one that Microsoft developed in partnership with Qualcomm: the Microsoft SQ1. It is quite a high-end ARM CPU.

Secondly, there is the OS. Unlike earlier ARM-based devices, the Surface Pro X does not get a stripped-down version of Windows (previously known as Windows RT), where the desktop is very limited. No, this gets a full desktop. What’s more, Microsoft integrated an x86 emulator in the OS. Which means that it can not only run native ARM applications on the desktop, but also legacy x86 applications. So it should have the same level of compatibility as a regular x86-based Windows machine.

I suppose we can interpret this as a sign that Microsoft is still very serious about supporting the ARM architecture. I think that is interesting, because I’ve always liked the idea of having competition in terms of CPU architectures and instruction sets.

There are also other areas where Windows targets ARM. There is Windows 10 IoT Core. Microsoft supports a range of ARM-based devices here, including the Raspberry Pi and the DragonBoard. I have tried IoT Core on a Raspberry Pi 3B+, but was not very impressed. I wanted to use it as a cheap rendering device connected to a display, but the RPi’s GPU is not supported by the drivers, so you get software rendering only. The DragonBoard however does have hardware rendering support, so I will be trying this out soon.

I ported my D3D11 engine to my Windows Phone (a Lumia 640) in the past, and that ran quite well. Developing for Windows 10 IoT is very similar, as it supports UWP applications. I dusted off my Windows Phone recently (I no longer use it, since support has been abandoned, and I switched to an Android phone for everyday use), and did some quick tests. Sadly Visual Studio 2019 does not appear to support Windows Phones for development anymore. But I reinstalled Visual Studio 2017, and that still worked. I can just connect the phone with a USB cable, and deploy debug builds directly from the IDE, and have remote debugging directly on the ARM device.

I expect the DragonBoard to be about the same in terms of usage and performance. Which should be interesting.

Posted in Direct3D, Hardware news, Software development, Software news | 5 Comments

Bitbucket ends support for Mercurial (Hg), a quick guide

I have been a long-time user of Bitbucket for my personal projects, over 10 years now, I believe. My preferred source control system has always been Mercurial (Hg), especially in those early days, when I found the tools for Git on Windows to be quite unstable and plagued with compatibility issues. Using TortoiseHg on Windows was very straightforward and reliable.

As a result, all of my repositories that I have created on Bitbucket over the years, have been Mercurial ones. However, Git appears to have won the battle in the end, and this has triggered Bitbucket to stop supporting Mercurial repositories. They will no longer allow you to create new Mercurial repositories starting February 1st 2020, and by June 2020, they will shut down Mercurial access altogether, and what’s worse: they will *delete* all your existing Mercurial repositories.

So basically you *have* to migrate your Mercurial repositories before June 1st 2020, or else you lose your code and history forever.

Now, given such a decision, it would have been nice if Bitbucket had offered an automatic migration service, but alas, there is no such thing. You need to manually convert your repositories. I suppose the most obvious choice is to migrate them to Git repositories on Bitbucket. That is what I have done. There are various ways to do it, and various sources that give you half-baked solutions using half-baked tools. So I thought I’d explain how I did it, and point out the issues I ran into.

Tools

First of all, we have to choose the tools we want to use for this migration. I believe the best tool for the job is the hg-git Mercurial plugin. It allows the hg commandline tool to access Git repositories, which means you can push your existing Hg repository directly to Git.

As I said, I use TortoiseHg, and they already include the hg-git plugin. You need to enable it though, by ticking its checkbox. Go to File->Settings, and enable it in the dialog:

[Screenshot: enabling the hg-git extension in the TortoiseHg settings dialog]

Sadly, that turned out not to work very well. There is a problem with the distributed plugins, which causes the hg-git extension to crash with a strange ‘No module named selectors!’ message. This issue here discusses it, although there is no official fix yet. But if you scroll down, you do find a zip file with a fixed distribution of the hg-git plugin and its dependencies. Download that file, unzip it into the TortoiseHg folder (replacing the existing contents in the lib folder), and hg-git is ready for action!

On the Git-side, I use TortoiseGit. Git is a bit ‘special’ though. That is, most Git-tools do not include Git itself, but expect that you have a binary Git distribution (git.exe and supporting tools/libs) installed already. For Windows, there is Git for Windows to solve that dependency. Install that first, and then install TortoiseGit (or whatever tool you want to use. Or just use Git directly from the commandline). But first I want to mention an important ‘snag’ with Git on Windows (or any platform for that matter).

Git wants to convert line-endings when committing or checking out code. On UNIX-like systems, you generally don’t notice, because Git defaults to LF as the line-ending for all text files in a repository. LF happens to be the default on UNIX-like systems, so effectively there is not usually any conversion going on. On Windows, CRLF is the default line-ending, so that would mean that text files on a Windows system are converted to LF when you commit them, and converted back to CRLF when you check out.

Git argues that this is better for cross-platform compatibility. Personally I think this is a bad idea. There are two reasons why:

  1. Just because you are using Git from a Windows system does not necessarily mean you are using CRLF line endings (you may be checking out code for a different platform, or you are using tools that do not use CRLF). Most Windows software is quite resilient to both types of line-endings, and various tools use LF endings even on Windows (Doxygen, for example, generates HTML files with LF endings, which is not a problem, because browsers can handle HTML with either line-ending). Some tools even require LF endings, else they do not work.
  2. Git cannot reliably detect whether a file is text or binary. This means that you have to add exceptions to a .gitattributes file to take care of any special cases. Which you usually only find out AFTER you’ve committed them, when they turn out to break something for someone else checking them out.

So I would personally suggest to not use the automatic conversion of line-endings. I prefer committing and checking out as-is. Just make sure you get the file right on the initial commit, and it will check out correctly on all systems.

When you install Git for Windows, make sure you select as-is on the following dialog during installation:

[Screenshot: the line-ending options in the Git for Windows installer]

As it says, it is the core.autocrlf setting, in case you want to change it later.
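For reference, setting (or checking) it afterwards can also be done from the command line; core.autocrlf set to false corresponds to the ‘as-is’ option in the installer:

git config --global core.autocrlf false

And if you ever do want explicit control for specific file types, that is what the .gitattributes exceptions mentioned above are for; for this migration I simply left everything as-is.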

I would argue that the whole CRLF/LF thing is a non-issue anyway. Most version control systems do not perform these conversions. In fact, oftentimes code is distributed as a tarball or a zip file. When you extract those, you don’t get any conversion either. But still you can compile that same code on various platforms without doing any conversion at all.

Using hg-git

The process I used to convert each repository is a simple 8-step process. I based it on the article you can find here, but modified it somewhat for this specific use-case.

1) Rename your existing repository in Bitbucket. I do this by adding ‘Hg’ to the end. So for example if I have “My Repository”, I change its name to “My Repository Hg”. This means that the URL will also change, so you can no longer accidentally clone from or commit to this repository from any working directories/tools. It also means that you can create the new Git repository under the same name as your original Hg repository.

2) Do a clean hg clone into a new directory. This makes sure you don’t run into any problems with your working directory having uncommitted changes, or perhaps a corrupted local history or such. You can just use TortoiseHg to create this new clone, or run hg from the commandline.
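As a hypothetical example, assuming the renamed repository from step 1 ended up at a URL like the one below (check the actual URL in the Bitbucket web interface), the clone would look something like this:

hg clone https://(username)@bitbucket.org/(username)/my-repository-hg my-repository-clean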

3) Create the new Git repository on Bitbucket, and grab its URL (eg https://(username)@bitbucket.org/(username)/my-repository.git)

Note that you can use either https or ssh access for Git. However, I have found it to be quite troublesome to get ssh set up and working under Windows, so I would recommend using https here.

4) Open a command prompt, go to the directory of your clean hg clone from step 2, and run the following command:

hg bookmark -r default master

This command is very important, because it links the ‘default’ branch of your Hg repository to the ‘master’ branch of your Git repository. In both environments they have a special status.

5) Now push your local Hg repository to the new Git repository using hg-git:

hg push https://(username)@bitbucket.org/(username)/my-repository.git

At this point your new Git repository should be filled with a complete copy of your Hg repository, including all the history. You can now delete this local clone. If you have any working directories you want to change to Git, the final 3 steps will explain how to do that.

(Note: I am not entirely sure, but I believe hg-git will always do an as-is push to Git, so no changing of line-endings. I don’t think it even uses git.exe at all, so it probably will not respond to the Git for Windows configuration anyway. At any rate, it did an as-is push with my repositories, and I could not find any setting or documentation on hg-git for changing line-endings).

6) Analogous to step 2, perform a new git clone from the repository into a new directory, to make sure there can be no existing files or other repository state that may corrupt things.
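Again as an example, using the URL from step 3 and a hypothetical target directory:

git clone https://(username)@bitbucket.org/(username)/my-repository.git my-repository-git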

7) Make sure your working directory is at the latest commit of the default branch of your Hg repository (you may need to modify the repository’s hgrc file, inside the hidden .hg directory, because it will still point to the old URL of the Hg repository, which we renamed in step 1, so you need to set it to the correct URL). Since this step and the next one are somewhat risky, you might want to create a copy of your working directory first, so you can always go back to Hg if the transition to Git didn’t go right.
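The relevant part of that hgrc file is the [paths] section. Assuming the renamed URL from step 1 (again, check the actual URL in the web interface), it would look something like this:

[paths]
default = https://(username)@bitbucket.org/(username)/my-repository-hg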

8) Remove the (hidden) .hg directory from the existing working directory of your old Hg repository, and copy the (hidden) .git directory from the directory in step 6. This will effectively unlink your working directory from Hg altogether, and link it to Git (also at the latest commit).
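From a command prompt in the old working directory, that boils down to something like the following (purely illustrative: adjust the path to wherever you made the clone in step 6, and note that both directories are hidden, hence the /h switch):

rmdir /s /q .hg
xcopy /e /i /h ..\my-repository-git\.git .git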

You can now delete the local clone from step 6.

Note that if you chose a different line-ending option than ‘as-is’, you may find that Git now shows a lot of changes in your files. And when you do a diff, you don’t see any changes. That’s because Git considers it a change when it has to modify the line-endings, but diff does not show line-endings as changes.

Posted in Software development | Leave a comment

Am I a software architect?

In the previous article, I tried to describe how I see a software architect, and how he or she would operate within a team or organization. One topic I also wanted to discuss is the type of work a software architect would actually do. However, I decided to save that for a later article, so that is where we are today.

As you could read in the previous article, ‘software architecture’ is quite vague and ‘meta’. Software architecture happens at a high level of abstraction by nature, so trying to describe it will always remain vague. And that’s not just me. I followed a Software Architecture course at university, and it was just as vague. They made you aware of this however. The course concentrated only on a case study where requirements and constraints had to be extracted from the input of various parties, and had to be translated into an architecture document (various diagrams and explanations at various levels, some aimed at the customer, some at end-users, some at developers etc. Also various usage scenarios and a risk analysis/evaluation of the architecture). No actual programming was involved.

That is somewhat artificial of course. The architect’s work is rarely that cut-and-dried in practice. Firstly, not all software can just be designed ‘on paper’ like that. Or at least, the chances of getting it right the first time would be slim. I find that often when I need to design new software, I am introduced to various technologies that I have not used before, and therefore I do not know beforehand how they would behave in an architecture. To give a simple example: if you were to design a C# application, you might make a lot of use of multithreading and Tasks (thread pooling). But if you were to design a browser-based JavaScript application, the usage of threading is very limited, so you would choose a different route for your design.

Secondly, in practice, especially these days with Agile, DevOps, cross-functional teams and all that, the architect is generally also a team member, and will also participate in development, as I’ve already mentioned in the previous article.

So let’s make all this a little more practical. The guideline I use is that the architect “solves problems”. And I mean that quite literally. That is, the architect takes the high-level problem, and translates it to a working solution. My criterion here is that the problem is “solved”, as in: once the code has been written, the problem is no longer a problem. There is now a library, framework or toolset that handles the problem in a straightforward way (I’m sure most developers have found those ‘headache’ products that just keep generating new bugs, no matter how much you try to fix them, and never performed satisfactorily to begin with. A real architect delivers problem-free products).

That does not necessarily mean that the architect writes the actual code. It means that the architect reduces the high-level abstraction and complexity, and translates it to the level required for the development team to build the solution. What level that is exactly, depends on the capabilities of the team. Worst-case, the architect will actually have to write some code entirely by himself.

But if you think about it that way, as a developer you are using such libraries, frameworks and toolsets all the time. You don’t write your own OS, compiler, database, web browser etc. Other people have solved these problems for you. Problems that may be too difficult for the average developer to solve by themselves anyway. Each of these topics requires highly skilled developers. And even then, these developers may be skilled in only one or perhaps a few of these topics at best. That is, it’s unlikely for an OS expert to also be an expert at database, compiler, browser etc technology. People tend to specialize in a few topics, because there’s just not enough time to master everything.

So that is how I see the role of architects in practice: these are the experts who create the ‘building blocks’ for the topics your company specializes in.

Perhaps it is interesting to take some topics that I have discussed earlier on this blog. If you look at my retro programming, you will find that quite a bit of it has the characteristics of research and development. That is, a lot of it is ‘off the beaten path’, and is about things that are not done often, or perhaps have not been done at all before.

The goal is to study these things, and get them under control. Beat them into submission, so to say. Take my music player for example. It started as the following problem:

“Play back VGM files for an SN76489 sound chip on a PC”

You can start with the low-hanging fruit, by just taking simple VGM files with 50 Hz or 60 Hz updates, and focus on reasonably powerful PCs. Another developer had actually developed a VGM player like that, but ran into problems with VGM files taken from certain SEGA games, which used samples:

The problem with VGM files is that each sample is preceded by a delay value, so it doesn’t appear to be a fixed rate. And in fact, the sample could be optimized, so that the rate is indeed not fixed at all. That is, the SN76489 will play a given sample indefinitely. So if your sample data contains two or more samples of the same value, it can be optimized by just playing the first one, and adding the delays. Various VGM tools will optimize the data in this way.
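In other words, the optimization is simply: merge consecutive sample writes with the same value by summing their delays. A minimal sketch in C of that idea (the in-memory layout here is made up for illustration; real VGM data encodes the values and delays with specific command bytes):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical in-memory representation: one sample value plus the delay
   (in 44.1 kHz ticks) until the next write. */
typedef struct {
    uint8_t  value;
    uint32_t delay;
} SampleEvent;

/* Merge runs of identical values by accumulating their delays.
   Returns the new number of events. */
size_t merge_repeated_samples(SampleEvent *ev, size_t count)
{
    if (count == 0)
        return 0;

    size_t out = 0;
    for (size_t i = 1; i < count; i++) {
        if (ev[i].value == ev[out].value)
            ev[out].delay += ev[i].delay;   /* same value: just extend the delay */
        else
            ev[++out] = ev[i];              /* new value: keep it */
    }
    return out + 1;
}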

The other guy had trouble getting the samples to play properly at all, even on a faster system. VGM allows for 44.1 kHz data maximum, so the straightforward way would be to just fire a timer interrupt at 44.1 kHz, and perform some internal bookkeeping to handle the VGM data. If your machine is fast enough (fast 386/486 at least), it can work, but it is very bruteforce. Especially when compared to the Sega Master System the game came from. That one only has a Z80 CPU at 4 MHz.

So I thought there was a nice challenge: if a Z80 at 4 MHz can play this, then so should an 8088 at 4.77 MHz. And the machine shouldn’t just be able to play the data by hogging the CPU. It should actually be able to play the data in the ‘background’, so using interrupts.

That is the problem I tried to solve, and as you see, I also drew some diagrams to try and explain the concept at a higher level. Once I had the concept worked out, I could just write the preprocessor and player, and the problem was solved.

I later re-used the same idea for a MIDI player. Since the basic problem of timing and interrupts was already solved, it was relatively easy to just write an alternative preprocessor that interpreted MIDI data instead of VGM data. Likewise, modifying the player for MIDI data instead of SN76489 was also trivial.

And these players also built on two other problems that had been solved previously. The first is the auto-end-of-interrupt feature of the 8259 chip, which I discussed earlier. Because I solved that problem by creating easy-to-use library functions and macros, it was trivial to add them to any program, such as this player.

The second one is the disk streaming, which I also discussed earlier. Again, since the main logic of using a 64k ringbuffer and firing off 32k reads with DMA in the background was solved, it was relatively easy to add it to a VGM or MIDI player. Which in turn worked around the memory limitations (neither VGM nor MIDI data is very compact, and more complex files can easily go beyond 640k).
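Reduced to its essence (at least as I would sketch it here; the real implementation with all the DMA and DOS specifics is in the original article), the streaming logic is a 64k ring buffer split into two 32k halves: the player consumes from one half while a background read fills the other, and whenever playback crosses into the freshly filled half, the half that was just finished is queued for the next read. A very rough sketch in C, with start_background_read() as a hypothetical stand-in for the actual DMA disk read:

#include <stdint.h>

#define RING_SIZE  0x10000u        /* 64k ring buffer */
#define HALF_SIZE  0x8000u         /* two 32k halves  */

extern uint8_t ring[RING_SIZE];
extern void start_background_read(uint8_t *dest, uint32_t bytes);  /* hypothetical DMA disk read */

static int reading_half = 1;       /* half currently being filled; playback starts in half 0 */

/* Call this whenever the player advances its position in the ring. */
void update_streaming(uint32_t play_pos)
{
    int play_half = (play_pos % RING_SIZE) < HALF_SIZE ? 0 : 1;

    /* When playback enters the half that was being filled, the other half has
       just been fully consumed, so start refilling that one in the background. */
    if (play_half == reading_half) {
        reading_half ^= 1;
        start_background_read(&ring[(uint32_t)reading_half * HALF_SIZE], HALF_SIZE);
    }
}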

So, I think to sum up, here are a few characteristics that apply to a software architect:

  • Able to extract requirements and constraints from the stakeholders’ wishes/descriptions
  • Able to determine risks and limitations
  • Able to translate the problem into a working solution
  • Able to find the best/fastest/most optimal/elegant solution
  • Able to explain the problem and solution to the various stakeholders at the various levels of abstraction/knowledge required
  • Able to explain the solution to other developers so that they can build it
  • Able to actually ‘solve’ the problem by creating a library/framework/toolset/etc that is easy to (re)use for other developers

Politics

Here’s another thing I’d like to add to the previous article. I talked about how a software architect would work together with project managers, developers and other departments within the organization. I would like to stress how important it is that you can actually work together. I have been in situations where I may have had the title of architect on paper, and I was expected to be responsible for various things. But the company had a culture where managers decided everything, often even without informing, let alone consulting me.

If you ever find yourself in such a situation, then RUN. The title of architect is completely meaningless if you do not actually have a say in anything. How can you be responsible for the development of software when you have no control over the terms under which that software is to be developed? And of course, managers never understand (or at least won’t admit) that it is often their decisions that lead to projects not meeting their targets.

Take, for example, some of the things described above, such as gathering requirements/constraints and doing a risk assessment. These things take time, and time has to be allocated in the life cycle of the project to perform these tasks. If a project manager can just decide that you do not get any time to prepare, and you have to start developing right away because they already set a deadline with the client, there’s not much you can do. Except RUN.

Posted in Software development | Leave a comment

Who is a software architect? What is software architecture?

After the series of articles I did on software development a while ago, I figured that the term ‘Software Architect’ is worth some additional discussion. I have argued that Software Engineering may mean different things to different people, and the same goes for Software Architecture. There does not appear to be a single definition of what Software Architecture is, or what a software architect does and does not do, what kind of abilities, experience and responsibilities a software architect should have.

In my earlier article, I described it as follows:

Software Engineering seemed like a good idea at the time, and the analogy was further extended to Software Architecture around the 1990s, by first designing a high-level abstraction of the complex system, trying to reduce the complexity and risks, and improving quality by dealing with the most important design choices and requirements first.

I stand by that description, but it already shows that it’s quite difficult to determine where Software Engineering ends and Software Architecture starts, as I described architecture as an extension of engineering. So where exactly does engineering stop, and where does architecture start?

I think that in practice, it is impossible to tell. Also, I think it is completely irrelevant. I will try to explain why.

Two types of architects

If you take my above description literally, you might think that software architects and software engineers are different people, and their workload is mutually exclusive. That is not the way I meant it though. I would argue that the architecture is indeed an abstraction of the system, so it is not the actual code. Instead it is a set of documents, diagrams etc, describing the various aspects of the system, and the philosophies and choices behind it (and importantly, also an evaluation of possible alternatives and motivation why these were not chosen).

So I would say there is a clear separation between architecture and engineering: architecture stops where the implementation starts. In the ideal case at least. In practice you may need to re-think the architecture at a certain phase during implementation, or even afterwards (refactoring).

That however does not necessarily imply that there is a clear separation between personnel. That is, the people specifying the architecture are not necessarily mutually exclusive with the people implementing it.

The way I see it, there are two types of architects:

  1. The architect who designs the system upfront, documents it, and then passes on the designs to the team of engineers that build it.
  2. The architect who works as an active member of the team of engineers building the system

Architect type 1

The first type of architect is a big risk in various ways, in my opinion. When you design a system upfront, can you really think of everything, and get it right the first time? I don’t think you can. Inevitably, you will find things during the implementation phase that may have to be done differently.

When the architect is not part of the actual team, it is more difficult to give feedback on the architecture. There is the risk of the architect appearing to be in an ‘ivory tower’.

Another thing is that when the architect only creates designs, and never actually writes code, how good can that architect actually be at writing code? It could well be that his or her knowledge of actual software engineering is quite superficial, and not based on a lot of hands-on experience. This might result in the architect reading about the latest-and-greatest technologies, but only having superficial understanding, and wanting to apply these technologies without fully understanding the implications. This is especially dangerous since usually new technologies are launched with a ‘marketing campaign’, mainly focusing on all the new possibilities, and not looking at any risks or drawbacks.

Therefore it is important for an architect to be critical of any technology, and to be able to take any information with a healthy helping of salt, cutting through all the overly positive and biased blurb, and understanding what this new technology really means in the real world, and more importantly, what it doesn’t.

The superficial knowledge in general might also lead to inferior designs, because the architect cannot think through problems down to the implementation detail. They may have superficial knowledge of architectural and design patterns, and they may be able to ‘connect the dots’ to create an abstraction of a complete system, but it may not necessarily be very good.

Architect type 2

The second type is the one I have always associated with the term ‘Software Architect’. I more or less see an architect as a level above ‘Senior Software Engineer’. So not just a very good, very experienced engineer, but one of those engineers with ‘guru’ status: the ones you always go to when you have difficult questions, and they always come up with the answers.

I don’t think just any experienced software engineer can be a good architect. You do need to be able to think at an abstract level, which is not something you can just develop by experience. I think it is more the other way around: if you have that capability of abstract thought, you will develop yourself as a software engineer to that ‘guru’ level automatically, because you see the abstractions, generalizations, connections, patterns and such, in the code you write, and the documentation you study.

I think it is important that the architect works with the team on implementing the system. This makes it easier for team members to approach him with questions or doubts about the design. It also makes it easier for the architect to see how the design is working in practice, and new insights might arise during development, to modify and improve the design.

Once an architect stops working hands-on with the team, he will get out of touch with the real world and eventually turn into architect type 1.

I suppose this type of architect brings us back to the ’10x programmers’ I mentioned in an earlier blog post. I think good architects are reasonably rare, just like 10x programmers. I am not entirely sure to what extent 10x programmers are also architects, but I do think there’s somewhat of an overlap. I don’t think that overlap is 100% though. That is, as mentioned earlier, there is more to being a software architect than just being a good software engineer. There are also various other skills involved.

The role of an architect

If it is not even clear what an architect really is or isn’t, then it is often even less clear what his role is in an organization. I believe this is something that many organizations struggle with, especially ones that try to work according to the Scrum methodology. Allow me to give my view on how an architect can function best within an organization.

In my view, an architect should not work on a project-basis. He should have a more ‘holistic’ view. On the technical side, most organizations will have various projects that share similar technology. An architect can function as the linking pin between these projects, and make sure that knowledge is shared throughout the organization, and that code and designs can be re-used where possible.

At the management level, it is also beneficial to have a ‘linking pin’ from the technical side, someone who oversees the various projects from a technical level. Someone who knows what kind of technologies there are available in-house, and who has knowledge of or experience in certain fields.

Namely, one of the first things in a new project is (or should be) to assess the risks, cost and duration. The architect will be able to give a good idea of the technology that is already available for re-use, as well as the people best suited for the project. Building the right team is essential for the success of a project. There is also a big correlation between the team and the risks, cost and duration. A more experienced team will need less time to complete the same tasks, and may be able to do it better, and therefore with less risk, than a team of people who are inexperienced with the specific technology.

Since architects are scarce resources, it would not be very effective to make architects work on one project full-time. Their unique skills and knowledge will not be required on a single project all the time. At the same time, their unique skills and knowledge can also not be applied to other projects where they may be required. So it is a waste to make an architect work as a ‘regular’ software engineer full-time.

With project managers it seems more common that they only work on a project for a part of their time, and that they run multiple projects at a time. They balance their time between projects on an as-needed basis. For example, at the start of a new project, there may be times where 100% of their time is spent on a single project, but once the project is underway, sometimes a weekly meeting can be enough to remain up-to-date. I think an architect should be employed in much the same way. Their role inside a project is very similar, the main difference being that a project manager focuses on the business side of things, where the architect focuses on the technical side of things.

This is where the lead developer comes in. The lead developer will normally be the most experienced developer in a team. It is the task of the lead developer to make sure the team implements the design as specified by the architect. So in a way the lead developer is the right-hand man (or woman) of the architect in the project, taking care of day-to-day business, in the technical realm.

The project manager will need to work both with the lead developer and the architect. As long as the project is on track, it will be mainly the project manager being informed by the lead developer of day-to-day progress. But whenever problems arise, the current project planning may no longer be valid or viable. In that case, the architect should be an ‘inhouse consultant’, where the project manager and lead developer ask the architect to analyse the problems, and to determine the next course of action. Was the planning too optimistic, and should milestones be moved further into the future, at a more realistic pace? Did the team get stuck on a programming problem that the architect or perhaps some other expert can assist them with? Does the design not work as intended in practice, and does the architect need to work with the team to modify the design and rework the existing code?

The holistic position of the architect also allows him to look at other development-related areas, such as coding standards, build tools etc. And the architect can keep up with the state of technology from research or competing companies, and help plan the strategy of a product line. Likewise, the architect can always be consulted during the sales process. Firstly to answer technical questions about existing products. Secondly, when a potential customer wants certain functionality that is not yet available, the architect can give an expert opinion on how feasible/costly it will be to implement that functionality, and what kind of resources/personnel it would take. In the more complex/risky cases, the architect might spend a few weeks on a feasibility study, possibly developing a proof-of-concept/prototype in the process.

Finally, I would like to reference a few resources that are somewhat related. Firstly, here is an interesting article discussing strategy: https://www.mindtheproduct.com/2018/03/growing-up-lean/

As I argued in my earlier blogs on Agile/Scrum… these methods are often not used correctly in practice. One of the reasons is because people try to predict things too far into the future. This article describes a very nice approach to planning that is not focused on timelines/deadlines/milestones as much, but on current, near term and future time horizons, where the scope is different for each term. The further things are away, the less detail you have. Which is good, because you can’t predict in detail anyway.

The second one is the topic of Software Product Lines: https://en.wikipedia.org/wiki/Software_product_line

It is a great way to look at software development. This goes well beyond just ‘design for change’, and the idea and analogy to product lines such as in (car) manufacturing may make the way of thinking more concrete than just ‘design for change’.

Posted in Software development | 1 Comment