That’s not possible on this platform!


The past year and a half has been quite interesting. First there was the plan to form a team and build a demo for the original IBM PC with CGA. We weren’t quite sure where it would lead at first, but as the project progressed, it matured into something we considered worthy of competing. But we had no idea how the Revision 2015 crowd would respond to it. Much to my surprise, they totally loved it. It seems a lot of sceners had an XT clone back in the day, and could appreciate what we were doing.

So, after a very exciting vote, in which the top four entries were nearly tied, we were declared the winners of the Revision 2015 oldskool compo. Mission accomplished! But as it turned out, this wasn’t the end of 8088 MPH; it was only the start.

The demo quickly crossed the boundaries of the demoscene and got picked up by other interested people across the web, such as gamers, game developers and embedded-systems developers, on Twitter, various blogs and forums. There was even some moderate ‘mainstream’ media coverage in the form of newspaper websites and tech-oriented websites.

While we were working on the project, I had referred to Batman Forever by Batman Group a few times.

This demo completely redefined the Amstrad CPC platform, using the hardware in new and innovative ways, and pushing the limits way further than any Amstrad CPC demo before it. This is what we also hoped to achieve with 8088 MPH. After all, before we started development on 8088 MPH, the best demo on a stock IBM 5150/5160 with CGA and PC speaker was probably the CGADEMO by Codeblasters.

Granted, shortly before 8088 MPH was released, there was Genesis Project, with GP-01.

This was an interesting release, as it showed some ‘new’ and ‘modern’ effects on the PC/XT with CGA and PC speaker. However, it wasn’t quite of the magnitude that we had planned. Aside from that, it didn’t actually run on real hardware, which was a shame.

8088 MPH was the opposite of this: it ran fine on real hardware, but not on any emulators. We were hoping that emulator developers would be inspired by the demo, and pick up the challenge of making an emulator accurate enough to run it. Initially there did not seem to be too much interest, but over time (and after reenigne himself built proper NTSC decoding into DOSBox, to support the new high-colour modes), a few developers started to get serious about it.

It seems that 8088 MPH has become ‘notorious’ as a compatibility benchmark, and it has been used to demonstrate an 8088/8086-compatible soft-core by MicroLabs.

And although it took a long time, someone has finally added a reference to 8088 MPH to the CGA article on Wikipedia:

Later demonstrations by enthusiasts have increased the maximum number of colors the CGA is known to produce in a single image to approximately a thousand. Aside of artifacting, this technique involves the text mode tweak which quadruples its rows, thus offering the benefit of 16 foreground and 16 background colors. Certain ASCII characters such as U and ‼ are then used to produce the necessary patterns, which result in non-dithered images with an effective resolution of 80×100 on a composite monitor.[23]

Granted, it is not a very accurate description, and for some reason neither the actual demo title nor the groups are mentioned, nor are any screenshots shown to illustrate the new modes. But at least there is a mention, which is a nice start.
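The tweak the quote alludes to is the classic reprogramming of the CGA’s 6845 CRTC in 80×25 text mode: shrinking the character cell from 8 scanlines to 2 yields 100 visible text rows. A minimal sketch for a Borland-style real-mode DOS compiler follows; the register values are the ones commonly cited for this tweak, so treat it as an illustration rather than a verified recipe:

```cpp
#include <dos.h>    // Borland-style int86(), outportb()

// Write one register of the CGA's 6845 CRTC.
static void crtc_write(unsigned char reg, unsigned char val)
{
    outportb(0x3D4, reg);   // CRTC index port
    outportb(0x3D5, val);   // CRTC data port
}

void set_100_row_text_mode(void)
{
    // Start from standard 80x25 colour text mode (BIOS mode 3).
    union REGS r;
    r.x.ax = 0x0003;
    int86(0x10, &r, &r);

    crtc_write(0x04, 0x7F); // vertical total
    crtc_write(0x06, 0x64); // vertical displayed: 100 character rows
    crtc_write(0x07, 0x70); // vertical sync position
    crtc_write(0x09, 0x01); // max scanline: 2 scanlines per character cell
}
```

With 80×100 cells available, filling them with half-block characters such as ▌ (0xDE) gives the well-known 160×100 16-colour mode on an RGB monitor, while characters like ‘U’ and ‘‼’ produce the composite artifact patterns the quote describes.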

And lastly, there was Revision 2016. We were nominated for two Meteoriks, “Best low-end demo” and “That’s not possible on this platform!”. We ended up winning the latter category.

We feel it is a great honour that people chose this demo as the most ‘impossible’ demo of 2015. I have always been an ‘oldskool’ scener, and to me, making demos is all about doing ‘impossible’ things on your machine. Pushing the machine to its limits, and beyond. So to me this category embodies the oldskool demoscene, and it is great to be the winner of this particular category. While we would also have liked to have won in other categories, we realize that our demo may not have been the slickest production around. Partly because of the limitations of our platform, but also partly because making demos of this scale on this platform is completely new, and as we said in the end-scroller, this is only the beginning. Techniques have to evolve and mature, to get closer to the sophistication of demos on other platforms. We couldn’t completely bridge the gap in just a single demo. But we hope that people build on our work, and take it further.

I really like what Urs said about our demo when announcing us as the winners: that the category was originally meant for just a single effect that was ‘impossible’ for the platform, but that our demo stood apart in having not just one such effect; basically all the effects in the entire demo were like that, which is quite unique. And that this demo inspires others to do new ‘impossible’ things as well.

I also really liked that he said that he hopes for us to make more demos like this. It seems we have really succeeded in putting the IBM PC 5150/5155/5160 on the map as a valid demo platform.

I was also pleasantly surprised by the huge cheering from the crowd during the showing of the nominees. It seems they cheered loudest for 8088 MPH. It was very cool to have our demo in the spotlight a second time, a year after we originally made it.

Lastly, we had another nice surprise in the oldskool demo compo.

The C64 demo by Fairlight/Offence/Noice parodied the 8088 MPH intro screen. Imitation is the sincerest form of flattery? I think it is a sign that they loved our demo last year (or at least our ‘respect’ to the C64 platform, which we had feared would be our biggest competitor), even though we beat them. Perhaps it actually means that this time they were afraid that another PC demo would be their biggest competitor? Well, sadly we did not have the time to make a demo this year, or even attend Revision.

At any rate, 8088 MPH has become everything we hoped it would be, and so much more than that, beyond our wildest dreams. We would like to thank everyone who has voted for our demo, commented on it, spread the word about it, or whatever else they may have done to make 8088 MPH as popular and well-known as it is today, and for all the recognition we received for our work.

Posted in Oldskool/retro programming, Software development

Rise of the Tomb Raider: DX12 update

An update for Rise of the Tomb Raider was released yesterday, adding support for the DX12 API. Among the patch notes:

Adds NVIDIA VXAO Ambient Occlusion technology. This is the world’s most advanced real-time AO solution, specifically developed for NVIDIA Maxwell hardware. (Steam Only)

So it seems this is the first game that uses some actual DX12_1 features. VXAO uses the new conservative rasterization feature of DX12_1, which I have mentioned on various occasions as one of the most important new features in DX12, since it allows new and more advanced rendering algorithms. It will be interesting to see how this feature is received by the general public. It will also be interesting to see how many other games will follow this example.
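For context, conservative rasterization is an optional capability that an application has to query at runtime rather than assume. A minimal sketch of that check, assuming an already-created ID3D12Device (the helper function name is my own):

```cpp
#include <d3d12.h>

// Query whether the device supports conservative rasterization (DX12_1).
bool SupportsConservativeRasterization(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &opts, sizeof(opts))))
        return false;
    return opts.ConservativeRasterizationTier !=
           D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
}
```

The same capability is exposed to DX11.3 through an analogous feature query on the DX11 device, which is presumably how the DX11 path can use it.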

Edit: According to this article on Tech Report, the VXAO feature is only available in DX11 mode, which would imply that it uses DX11.3, which exposes the same DX12_1 features through the older API.

This post from Nixxes on the Steam Community forum seems to imply that support for VXAO in DX12 will be added later:

You should see VXAO as the rightmost ambient occlusion option, assuming you are on Maxwell hardware, have up to date drivers, and are running on DX11. VXAO is not yet available on DX12 from NVIDIA.


Posted in Direct3D

Pseudoscience: DNA activation

The second ‘treatment’ offered on the site promoting Kangen Water is a thing called ‘DNA Activation’. In this case, they do not even try to keep up a scientific appearance. They just flat-out reject the theories of DNA by the ‘establishment’, as in conventional science, and present their own. I suppose this strategy was chosen because the two-stranded ‘double helix’ shape of DNA is an iconic shape that most people will be familiar with. With Kangen water, on the other hand, they probably think the clustering theory may actually sound plausible, since most people will not know anything about the molecular structure of water at that level.

What is it?

To be honest, it is not quite clear what it actually is. They claim that DNA actually has far more than just 2 strands, namely 12, which they refer to as ‘junk DNA’, and that these extra strands can be ‘activated’. However, it is not made clear how exactly these extra strands are to be activated, or why this activation would have any kind of effect. There is a lot of talk about spiritual and even extra-terrestrial concepts, but it mostly sounds like some strange conspiracy theory, and it does not go into the details of the activation process at all.

What are the claimed benefits?

Again, not a lot of concrete information here. Phrases such as “Connecting to your higher self and your divine purpose”. Quite hollow rhetoric. But perhaps that is the idea: everyone will fill in these hollow phrases with something they actually desire.

In short, I think it is supposed to make you feel better, whatever that means specifically.

So what is the problem?

Well, for starters, they do not even get the conventional theory of DNA correct. The site says: “At this moment most people on the planet have 2 double helix strands activated.”

No, conventional theory says that two strands of molecules bond together to form a single double-helix molecule.

Another flaw is that they take ‘junk DNA’ literally, as if it were useless. The term ‘junk DNA’ is in fact used by conventional science, but it describes the parts of the DNA that are ‘non-coding’ in terms of genetic information. Science does not claim it is useless. In fact, since DNA is closely tied to the theory of evolution, useless features would be expected to eventually evolve away. The fact that junk DNA still exists in all lifeforms would indicate that this particular form of DNA was preferred through natural selection. It is believed that junk DNA actually plays a role in the replication of DNA during mitosis. Aside from that, it is also believed that the non-coding parts of DNA help prevent mutations, since only a small part of the molecule actually carries the ‘critical’ genetic information. Think of it as the organic equivalent of a lightning rod.

Another issue is that they try to connect the function of the strands of DNA to ancient Indian concepts such as chakras. They also claim that DNA activation was already practiced by ancient civilizations.

The obvious problem there is that these ancient civilizations had no idea what DNA was, since it wasn’t discovered until 1869, and the current theory of the double-helix shape was not formed until 1953.

This theory was of course not just pulled out of thin air. The researchers used the technique of X-ray diffraction to study the structure of the molecule. The image known as “Photo 51” provided key evidence for the double-helix structure. If DNA in fact had far more strands, this image would have looked very different. And that is old technology by now, of course. Better methods for imaging DNA have been developed since, and more recently, with the work of some Italian researchers, it has become possible to take direct images of DNA. These images still confirm the double-helix model, with 2 strands.


Aside from that, the whole mechanics of DNA activation are not clear to me. Even if we assume that there were more than 2 strands, how exactly would one activate them? The DNA is in every cell of your body, so you would have to activate trillions of cells at once. And our cells are constantly dividing and duplicating, so how do you activate the DNA when it is being replicated all the time? It would have to be done almost instantly, or else you get new cells with new DNA that is not activated yet, and you have to start all over.

And even if you ‘activate’ this DNA, what exactly does that mean? It seems to imply that it suddenly unlocks genetic information that was ignored until now. But that does not make sense. Because if this genetic information can be unlocked through ‘activation’, it could also be unlocked ‘by accident’ through mutation. So some people would be born with ‘activated’ DNA. And if this ‘activated’ DNA is indeed superior, then these people would evolve through natural selection at the cost of the inferior ‘non-activated’ people. Unless of course they want to deny the whole theory of evolution as well. But then, why bother to base your theory on DNA and genetics in the first place?

The mind boggles…

Lastly, I think we can look back at Kuhn’s criteria. Similar to Occam’s Razor, Kuhn states that the simplest explanations are preferred over more complex ones. The theory of 12 DNA strands is certainly more complex than the theory of 2 DNA strands. They could have formulated their theory with just 2 DNA strands anyway, since they seem to base the activation on the ‘junk DNA’. There was no need to ‘invent’ extra strands for the junk DNA: conventional science had already stated that 98% of the DNA molecule is non-coding. They could have just gone with that. It seems, by the way, that this ‘DNA activation’ or ‘DNA perfection’ theory originated with Toby Alexander. That should give you some more leads.

Final words

But wait, you say. Just because certain theories do not fit into the current paradigm does not mean they are necessarily wrong. As Kuhn said, every now and then it is time for a revolution to redefine the paradigm. Yes, but Kuhn also said that there needs to be a ‘crisis’ with the current paradigm: observations that cannot be explained by the current theories. And the new theories should be a better explanation than the current ones. Take Einstein and Newton: the Newtonian laws were reasonably accurate, but Einstein ran into some things that could not be explained by them. Einstein’s new laws were a more accurate model, which could explain everything the Newtonian laws did, and more.

In this case, we do not have a crisis. And the new theories are not a refinement of our current paradigm, but are actually in conflict with observations that ARE explained properly by the current paradigm. It seems that these theories are mainly designed to solve the ‘crisis’ of trying to explain whatever it is that these people want to sell you.

And since the topics I have discussed here are scientific, as per Merton’s norm of communalism, all the knowledge is out there, available to everyone. You do not have to take my word for it; I merely point out some of the topics, facts and theories, which you can verify and study yourself by finding more information on the subject, or by talking to other scientists.

In the name of proper science, I also had these articles peer-reviewed by other scientists in relevant fields.

By the way, the aforementioned page doesn’t just stop at Kangen Water and DNA Activation. There is also a page on ‘Star Genie crystals’. I figured that topic is so obviously non-scientific that I do not even need to bother covering it. These pseudoscientists move quickly though. As I was preparing these articles, a new page appeared, about ‘Schumann Resonance’. Again, taking something scientific (Schumann resonance itself is a real phenomenon), and making it into some kind of spiritual mumbo-jumbo. And mangling the actual science in the process. For example, it makes the following claim:

“Raise your frequency, enrich your life!”

Well, then you don’t understand resonance. Resonance is the phenomenon where one oscillating system ‘drives’ another to increase its amplitude at a preferential frequency. So resonance can raise your amplitude, but not your frequency. Which is also what the Schumann resonance pseudoscience *should* be about. Namely, it is based on the fact that alpha brainwaves are at a frequency very close to the strongest Schumann resonance of the Earth (around 7.83 Hz). So the pseudoscience claim should be along the lines that your brain can ‘tune in’ to this ‘Earth frequency’. But I suppose this particular website does not even know or care what they’re trying to sell, as long as it sells.
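The distinction is explicit in the standard textbook result for a driven, damped oscillator: the steady-state amplitude at driving frequency $\omega$ is

```latex
A(\omega) = \frac{F_0/m}{\sqrt{\left(\omega_0^2 - \omega^2\right)^2 + (\gamma\omega)^2}}
```

where $F_0$ is the driving force amplitude, $m$ the mass, $\omega_0$ the natural frequency and $\gamma$ the damping coefficient. Driving the system changes the amplitude $A$, which peaks when $\omega$ is close to $\omega_0$; it does not change $\omega_0$ itself.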

If no single one of these scams makes you doubt the trustworthiness of this site, then the fact that every single ‘treatment’ offered on it turns up tons of discussion on scientific/skeptic-oriented sites should at least make you think twice. This is not just coincidence.

Posted in Science or pseudoscience?

Pseudoscience: Kangen Water

I suppose that most of you, like myself, had never heard of Kangen water before. So let’s go over it in a nutshell.

What is it?

Kangen water is water that is filtered by a special water purification device, sold by a company named Enagic.

The device uses electrolysis to ionize the water, making it alkaline (pH greater than 7), and it uses special filters, which are claimed to ‘cluster’ the water molecules.

What are the claimed benefits?

This depends greatly on who you ask. On Enagic’s site itself, you will see that there is not that much information about anything; you even have to find a distributor to find out the price. More on that later. You will find that the distributors tend to make claims about Kangen water being beneficial to your health, because of things like hydration, detoxification effects, restoring the acid-alkaline balance of your body, and anti-oxidants. Claims can go as far as Kangen water preventing cancer.

Click here and here for a typical example of a site promoting Kangen water.

So what is the problem?

On the surface, it all looks rather scientific, with all the technical terms, diagrams, videos with simple demonstrations of fluid physics, and references to books and people with scientific degrees. But is any of it real? Can the claims be verified independently?

One of the first clues could be the water clustering. The site refers to the book “The water puzzle and the hexagonal key” by Dr. Mu Shik Jhon. The idea of hexagonal water is not accepted by conventional science. While water clusters have been observed experimentally, these clusters are volatile, because hydrogen bonds form and break continuously at an extremely high rate. It has never been proven that there is a way to get water into a significantly stable clustered form. Another name that is mentioned is Dr. Masaru Emoto; that is a ‘doctor of alternative medicine’, from an open university in India. He takes the water cluster/crystal idea further, and even claims that speech and thought can influence the energy level of these water crystals. Clearly we have stepped well into the realm of parapsychology here, which again is not accepted by conventional science, due to lack of evidence and verification.

This idea of water structures or ‘water memory’ is actually quite old, and has often been promoted in the field of homeopathy, as a possible mechanism to explain homeopathic remedies. You could search for the story of Jacques Benveniste and his paper published in the science journal Nature, Vol. 333 on 30 June 1988. When Benveniste was asked to repeat his procedures under controlled circumstances in a double-blind test, he failed to show any effects of water memory.

A similar story holds for anti-oxidants. A few years ago there was a ‘hype’ around anti-oxidants, connected to the free-radical theory of aging, which later turned into the ‘superfoods’ craze. Studies showed very good health benefits of anti-oxidants, and many food companies started adding anti-oxidant supplements and advertising with them.

More recently however, studies have shown that anti-oxidant therapy has little or no effect, and in fact can be detrimental to one’s health in certain cases. The current stance of food authorities is that the free-radical theory of aging has no physiological proof ‘in vivo’, and therefore the proclaimed benefits of anti-oxidants have no scientific basis.

Likewise there is no scientific basis for any health effects of alkaline water. Physiologically it even seems unlikely that it would have any benefit at all. As soon as you drink the water, it comes into contact with stomach acid, which will lead to a classic acid-base reaction, neutralizing the ionization of the water immediately. Which is probably a good thing, because if it actually did have an effect on your body pH, drinking too much of this water could be dangerous.

Because the body pH is important, the body is self-regulating, through a process known as acid-base homeostasis. The body has several buffering agents to regulate the pH very tightly. In a healthy individual there is no need for any external regulation of body pH; in fact, it is your body that decides the pH. This is done mainly by two processes:

  1. By controlling the rate of breath, changing CO2 levels in the blood.
  2. Via the kidneys. Excess input of acid or base is regulated and excreted via the urine.
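The first mechanism can be made concrete with the Henderson-Hasselbalch relation for the blood’s bicarbonate buffer, standard physiology with $pK_a \approx 6.1$ and a CO₂ solubility of about 0.03 mmol/L per mmHg:

```latex
\mathrm{pH} = pK_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03 \cdot p\mathrm{CO_2}}
\approx 6.1 + \log_{10}\frac{24}{0.03 \cdot 40} \approx 7.4
```

Breathing faster lowers $p\mathrm{CO_2}$ and raises pH, and breathing slower does the opposite; the body steers this far more tightly than anything you could drink.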

Note also that this is a balance. The claims generally imply that acidic is bad and alkaline is good. But in reality too alkaline (alkalosis) is just as bad as too acidic (acidosis). It is all about the balance.

You can find various information on this and other water scams on the net, from reputable sources such as this overview by the Alabama Cooperative Extension System.

So, there is no scientific basis whatsoever that Kangen water has any positive effect on your health, or even that the Kangen machine can give the water some of the claimed properties, such as hexagonal clustering. Which might explain how these machines are marketed: the distribution is done through a multi-level marketing scheme, also known as a pyramid scheme. You can find the agreement for European distributors here. Note the very strict rules about advertising. They do not want the Enagic name used anywhere, unless they have specifically checked your content and given you written approval. Apparently they most certainly do NOT want you to make claims that the product or company cannot back up.

Another red flag is that the advertisements are generally done via ‘testimonials’: people who tell about their experiences with the product. They tend to be the ones that make claims about health and that sort of thing. The key here is that a seller is never responsible for what anyone says in a testimonial. So beware: any claims the seller does not explicitly make, but that are only put forward in a testimonial, likely cannot be backed up. Otherwise the seller would just make these claims himself, in order to promote the product.

The pyramid scheme also makes these machines very expensive, because each seller at each level wants to make a profit, inflating the price considerably. Based on the parts used in a Kangen machine, the whole thing could probably be built for under 100 euros. But these machines actually go for prices in excess of 1000 euros. This is why they don’t list prices on the main website, but tell you to contact your distributor. They do have a web store, but as you can see, those prices are very high as well. If you are lucky enough to find a distributor who is high up in the hierarchy, you can find these machines for less. Try searching eBay for these machines, for example.

This so-called Kangen water, and machines for ionizing water, can be traced back to 1950s Japan. One would expect that if Kangen water were indeed as healthy and beneficial as claimed, there would be plenty of empirical evidence to support the claims by now, and these machines would have become mainstream and would just be sold in regular shops.

So, the Kangen machines appear to be a case of ‘cargo cult’ science: they make everything look and feel like legitimate science, but if you dig a little deeper, you will find that there is no actual scientific basis, and the references are mostly to material that is not accepted by conventional science, but considered to be of a pseudoscientific nature.

In fact, this particular seller seems to push things a bit TOO far, by also mentioning the ‘Bovis scale’, which is a common concept in dowsing… Which is a more ‘conventional’ type of pseudoscience. Similar to the topic I will be covering next time.

Posted in Science or pseudoscience?

The philosophy of science

And now for something completely different… I have been critical of hardware and software vendors and their less ethical actions in the past. But a while ago, something happened that did not have anything to do with computer science at all. But it did have to do with ethics and science. Or rather, pseudoscience.

Namely, someone who I considered to be a good friend got involved with a man who was selling Kangen water machines and performing “DNA activation”. As I read the articles on his website, I was reminded of the philosophy of science courses at my university. About what is real science and what is not. What is a real argument, and what is a fallacy.

There was also the ethical side of things. Clearly she did not realize that this man was a fraud. But should she be told? I believe that everyone is free to make their own choices in life, and to make their own mistakes. But at the least they should have as much information as possible to try and make the right decisions.

Anyway, let’s get philosophical about science.

Demarcation problem

Before we can decide what is science and what is not, we must first define what science is. This is actually a very complicated question, which people have tried to answer throughout history, and can be traced back all the way to the ancient Greek philosophers. It is known as the ‘demarcation problem’.

Part of the problem is that science has been around for a long time, and not everything that was historically seen as part of science fits more modern views of science. And if you approach the problem from the other side, your criteria may become too loose, in order to also fit the historical, or ‘de facto’, sciences, since scientific methodologies have evolved over time as human knowledge grew.

There are two ways to approach the problem. One way is a descriptive definition, trying to name characteristics of science. Another way is a normative definition, defining an ideal view of what science should be. This implies however that humans have some ‘innate sense’ of science.
Neither approach will lead to a perfect definition of science, but there is something to be said for both. So let us look at some attempts at defining science throughout the ages.

Sir Francis Bacon

Seen as one of the ‘fathers of modern science’, Sir Francis Bacon (1561-1626) wrote a number of works on his views of science, in which he put forward various ideas and principles that are still valid and in use today. In his day, science was still seen as classifying knowledge, and being able to logically derive particular facts from the more general facts in the larger system of knowledge, known as deductive reasoning. In such a system of science, progress in the sense of new inventions and discoveries could never happen, because they would not fit into the existing framework of knowledge.

So instead of deductive reasoning, Bacon introduced a method of inductive reasoning: the exact opposite. Instead of deducing specifics from generics, specific observations are generalized into more universal theories by looking at connections between the observations.
However, the potential flaw here is that humans are fallible and their senses can deceive them. So Bacon and other philosophers tried to identify the pitfalls of human error, and to come up with methodologies to avoid error and find the truth. You might have heard one of Bacon’s messages: “ipsa scientia potestas est” (knowledge itself is power).

Scientists have to be skeptical, objective and critical. They also have to form a collective, and check each other’s work, in order to ‘filter’ or ‘purify’ the knowledge from human error. Scientific knowledge is universal, and belongs to everyone. The goal of scientific research should be to extend our knowledge, not to serve any particular agenda. These ideals and principles were later characterized by Robert K. Merton in his four ‘Mertonian norms’:

  • Communalism
  • Universalism
  • Disinterestedness
  • Organized Skepticism

These norms are also known by the acronym of ‘CUDOS’, formed by the first letters of each word.

Karl Popper

In the early 1900s, a movement within philosophy emerged, known as logical positivism. It was mainly propagated by a group of philosophers and scientists known as the Wiener Kreis (Vienna Circle). Logical positivism promoted the verifiability principle: a statement only has meaning if it can objectively be evaluated to be true or false, in other words, only if it can be verified. This rejects many statements, such as religious ones, for example: “There is a god”. There is no way to prove or disprove such a statement. Therefore, it is cognitively meaningless.

While there certainly is some merit to this way of thinking, it also leads to a human error, namely that of confirmation bias. Let us use an abstract example (yes, the computer scientist will use numbers of course). In scientific research, you will perform experiments to test certain theories. These experiments will give you empirical measurement data. To abstract this, let us use numbers as our abstract ‘measurements’.
Consider the following statement:

All numbers greater than -5 are positive.

Now, there are many numbers you can think of which are both positive and greater than -5. In fact, there are infinitely many such numbers. So, every time you find such a number, you confirm the statement. But does that make it true? No, it does not. We only need to find one number that is greater than -5, but not positive, and we have refuted the statement. In this abstract example, we already know beforehand that the statement is false. But in scientific research you do not know the outcome beforehand, which is why you perform the experiments. But if your experiments never explore the area in which the tests would fail, you will never see any evidence that the theory is flawed, even though it is.

This, in a nutshell, was the criticism that Karl Popper (1902-1994) had of logical positivism. Scientists would believe that a theory becomes more ‘true’ the more tests are done to confirm it, and at some point it would be considered an absolute truth, a dogma. However, a theory only needs to fail one test to be refuted. Falsifying is a much stronger tool than verifying. Which is exactly what we want from science: strong criticism.

Popper was partly inspired by the emerging psychoanalysis movement (Freud, Adler, Jung), where it was easy to find ‘confirmation’ of the theories produced, but impossible to refute them. Popper felt there was something inherently unscientific, or even pseudoscientific, about this, which led him to find new ways to look at science and scientific theories. The scientific value of a theory is not in the effects that can be explained by the theory, but rather in the effects that are ruled out by it.

One great example of the power of falsifiability is Einstein’s theory of relativity. The Newtonian laws of physics had been around for some centuries, and they had been verified many times. Einstein’s theories predicted effects that had never been seen before, and that would in fact be impossible under the Newtonian laws. But Einstein’s theory could be falsified by a simple test: according to his theories, a strong gravitational field, such as the sun’s, would bend light more than the Newtonian laws imply, because it distorts space itself. This would mean that during a solar eclipse it should be possible to see stars which are almost behind the sun. The light of stars that are close to the edge of the sun would have to pass through the sun’s ‘gravitational lens’, and their path would be warped.
Einstein formulated this theory in 1915, and in 1919 there was a solar eclipse during which the theory could be put to the test. And indeed, when the astronomers determined the position of a star near the edge of the sun, its position was shifted compared to the ‘normal’ position, by the arc that Einstein had calculated. The astronomers had tried to falsify the theory of relativity, and it passed the test. This meant that the Newtonian laws of physics, as often as they had been validated before, must be wrong.

Thomas S. Kuhn

The main problem with Popper’s falsifiability principle, however, is that it is not very practical in daily use. Most of the time, you want to build on a certain foundation of knowledge, in order to apply scientific theories to solve problems (‘puzzles’). So Thomas S. Kuhn (1922-1996) proposed that there are two sides to science. There is ‘normal science’, where you build on the existing theories and knowledge. And then there are exceptional cases, such as Einstein’s theory, which cause a ‘paradigm shift’, a revolution in that field of science.

During ‘normal science’, you generally do not try to disprove the current paradigm. When there are tests that appear to refute the current paradigm, the initial assumption is not that the paradigm is wrong, but rather that there was a flaw in the work of the researcher, it is just an ‘anomaly’. However, at some point, the amount of ‘anomalies’ may stack up to a level where it becomes clear that there is a problem in the current paradigm. This leads to a ‘crisis’, where a new paradigm is required to explain the anomalies more accurately than the current paradigm. This leads to a period of ‘revolutionary science’, where scientists will try to find a better paradigm to replace the old one.

This also leads to competing theories on the same subject. Kuhn laid out some criteria for theory choice:

  1. Accurate – empirically adequate with experimentation and observation
  2. Consistent – internally consistent, but also externally consistent with other theories
  3. Broad Scope – a theory’s consequences should extend beyond that which it was initially designed to explain
  4. Simple – the simplest explanation, principally similar to Occam’s razor
  5. Fruitful – a theory should disclose new phenomena or new relationships among phenomena

Logic and fallacies

Scientific theories are all about valid reasoning, or in other words: logic. The problem here is once again human error. A scientist should be well-versed in critical thinking, in order to detect and avoid common pitfalls in logic, known as fallacies. A fallacy is a trick of the mind where something might intuitively sound logical, but if you think about it critically, you will see that the drawn conclusion cannot be based on the arguments presented, and therefore is not logically sound. This does not necessarily imply that the concluded statement itself is false, merely that its validity does not follow from the reasoning (which is somewhat of a meta-fallacy).
There are many common fallacies, so listing them all goes beyond the scope of this article, but it is valuable to familiarize yourself with them somewhat. You will probably find that once you have seen them, it is not that hard to pick them out of written or spoken text; you will quickly develop a reflex for spotting a fallacy.
You should be able to find various such lists and examples online, such as on Wikipedia. Which brings up a common fallacy: people often discredit information linked via Wikipedia, based on the claim that Wikipedia is not a reliable source. While it is true that not all information on Wikipedia is accurate or reliable, that is no guarantee that all information on Wikipedia is unreliable or inaccurate. This is known as the ‘fallacy of composition’. Harvard has a good guide on how to use Wikipedia, which says it can be a good way to familiarize yourself with a topic. So there you go.

Fallacies are not necessarily deliberate. Sometimes your mind just plays tricks on you. But in pseudoscience (or marketing for that matter), fallacies are often used deliberately to make you believe things that aren’t actually true.

So, now that we have an idea of what science is, or tries to be, we should also be able to see when something tries to look like science, but isn’t.

As a nice example, I give you this episode of EEVBlog, which deals with a device called the ‘Batteriser’.

It claims to boost battery life by up to 800%, but the technology behind it just doesn’t add up. Note how they cleverly manipulate all sorts of statements to make you think the product is a lot more incredible than it actually is. And note how they even managed to get a Dr. from San Jose State University to ‘verify’ the claims, giving you the false impression that (all of) the claims for this product are backed up by an ‘authority’.


The myth of HBM

It’s amazing, but AMD has done it again… They have managed to trick their customer base into believing yet another bit of nonsense about AMD’s hardware.

This time it is about HBM. As we all know, AMD has traded GDDR5 memory for HBM on their high-end cards, delivering more bandwidth. The downside is that, with the current state of the technology, it is not feasible to put more than 4 GB on a GPU. This while even AMD’s own GDDR5 cards already have 8 GB on board, and the competing nVidia cards are available with 6 or even 12 GB of memory.

So far, so good. Now, the problem is that AMD somehow brainwashed their followers into believing that more bandwidth can compensate for less capacity. So everywhere on the forums you read people arguing that 4 GB is not a problem because it’s HBM, not GDDR5.

The video memory on a video card acts mostly as a texture and geometry cache. For the GPU to reach its expected level of performance, it needs to be able to access its textures, geometry and other data from the high-speed memory on the video card, rather than from the much slower system memory.

As long as your data fits inside video memory, the bandwidth determines your performance. However, as soon as you run into the capacity limit of your video memory, you need to start paging in data from system memory. Since system memory is generally an order of magnitude slower than video memory, the speed of the video memory becomes irrelevant at that point: the rate at which data can be transferred into video memory is completely bound by the system memory and bus speed.

So HBM in no way makes paging data in and out of system memory faster than any other memory technology would. Therefore the only performance factor we’re dealing with here is the point at which paging becomes necessary, and that depends solely on capacity. A card with 4 GB will hit that point sooner than a card with 6, 8 or 12 GB. And when that point is hit, performance will become erratic, because your game will have to page textures periodically, resulting in frame drops. That should be pretty easy to understand for anyone who bothers to think it through for a few moments.
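A quick back-of-envelope sketch in C illustrates the point. The bandwidth figures are rough, illustrative assumptions (HBM on the order of 512 GB/s, GDDR5 around 336 GB/s, a PCIe 3.0 x16 link around 16 GB/s), not measured values:

```c
/* Milliseconds needed to move 'megabytes' of data at 'gb_per_sec' GB/s */
static double transfer_ms(double megabytes, double gb_per_sec)
{
	return megabytes / (gb_per_sec * 1024.0) * 1000.0;
}
```

Paging in a 256 MB working set over a ~16 GB/s PCIe link costs `transfer_ms(256, 16)` = 15.625 ms, nearly a whole frame at 60 fps, and that number does not change whether HBM (`transfer_ms(256, 512)` ≈ 0.49 ms on-card) or GDDR5 sits at the other end of the link.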

The one thing you can say is that because the initial performance is higher, it can ‘absorb’ a frame drop slightly better. That is, if you only look at average framerates. If you look at frame times, you’ll still see nasty jitter, and the overall experience will be far from smooth. You will be experiencing stutter every time the system has to wait for a new texture or other data to be loaded.


PC-compatibility, it’s all relative

Update 21-12-2015: I have updated some of the information after testing on an AT with old Intel 8259A chips, and added some extra information on EISA and newer systems.

I would like to pick up where I left off last time, and that is with the auto-end-of-interrupt feature of the Intel 8259A PIC used in PC-compatibles. At the time I had a working proof-of-concept on my IBM PC/XT 5160, but not much more. The plot thickened when I wanted to make my routine generic for any PC-compatible machine. Namely, as we have already seen with 8088 MPH, PC-compatibility is a very relative notion. For example, EGA/VGA cards have very limited backward compatibility with CGA. And even among original IBM CGA cards, there are some notable differences.

The story of the 8259A PIC is another case where there are some subtle and some not-so-subtle differences between machines.

Classes of PCs

I think we should start by defining what types of PCs IBM has offered. So let’s start at the beginning, and have a quick look at some of the defining hardware specifications.

IBM 5150 PC

The first PC was quite a modest machine:

  • 8088 CPU at 4.77 MHz
  • Single 8259A Programmable Interrupt Controller
  • Single 8237 DMA controller
  • 8253 Programmable Interval Timer
  • PC keyboard interface
  • 5 wide 8-bit ISA expansion slots
  • CGA and/or MDA video
  • IBM Cassette BASIC ROM
  • Tape interface

IBM 5160 PC/XT

The XT is a slight variation on the original PC: the tape interface was dropped (but the Cassette BASIC ROM was kept, since the other versions of BASIC were not standalone, but extensions to this BASIC), and the ISA expansion slots were placed closer together and increased to a total of 8. The XT became the standard, and most clones were modeled after it, with the main difference being the lack of the ROM BASIC. So:

  • 8088 CPU at 4.77 MHz
  • Single 8259A Programmable Interrupt Controller
  • Single 8237 DMA controller
  • 8253 Programmable Interval Timer
  • PC keyboard interface
  • 8 narrow 8-bit ISA expansion slots
  • CGA and/or MDA video
  • IBM Cassette BASIC ROM

In most cases, the PC and XT can be lumped together into the same class. The missing cassette interface only makes a difference if you actually wanted to use a cassette. But floppies, and later hard drives, became the storage of choice for the PC, so cassette was never really used. Likewise, the slightly different form factor of the ISA slots doesn’t make much difference either. The IBM 5155 Portable PC also uses the same motherboard as the PC/XT, and works exactly the same as well.

IBM 5170 PC/AT

The AT was quite a departure from the original PC and XT. It bumped the platform up to 16-bit, had more interrupt and DMA channels, and also introduced a new keyboard interface with bi-directional communication. This was again the blueprint for many clones, and later 32-bit systems (386 and higher) largely maintained the same hardware capabilities (in fact, even your current PC will still be backward-compatible):

  • 80286 CPU at 6 MHz
  • Two 8259A PICs, in cascaded master/slave arrangement
  • Two 8237 DMA controllers, cascaded for 16-bit DMA transfers
  • 8253 Programmable Interval Timer
  • AT keyboard interface
  • 6 narrow 16-bit ISA expansion slots (backward compatible with 8-bit XT slots) and 2 narrow 8-bit ISA expansion slots
  • MC146818 real-time clock and timer
  • CGA, EGA or MDA video
  • IBM Cassette BASIC ROM

Aside from this, the AT also led to a standardization of power supply and case/motherboard form factors.

IBM also sold the 5162 XT/286, which, unlike what the name suggests, had the same enhanced hardware capabilities as the AT, but housed in a PC/XT-style case.

Honourable mention: IBM PCjr

Although it never became a widespread standard, IBM made another variation on the PC theme, namely the PCjr. It was not a fully PC-compatible machine, although it also runs a version of DOS, includes a version of BASIC, and its hardware is mostly compatible with that of the PC (8088 at 4.77 MHz and CGA-compatible video).

The biggest differences to a PC are:

  • Enhanced video chip with 16-colour modes
  • No dedicated video RAM, but system RAM shared with the video chip
  • SN76489 audio chip
  • No 8237 DMA controller on-board
  • ‘Sidecar’ interface instead of ISA slots for expansion
  • PCjr keyboard interface

Since it does have an 8259A PIC, the issues discussed here apply to the PCjr as well. Also, the enhanced audio and video capabilities were cloned by Tandy (but not marketed as PCjr-compatible, since the PCjr was a commercial failure).

So, where are the problems?

Now that we have established that not all PC-compatible machines are quite equal in terms of hardware, let’s see how this affects us when programming the 8259A PIC.

The most obvious difference here is between the PC/XT and the AT. The AT uses two cascaded 8259A PICs. These PICs need to be initialized in a different way. The problem is that you can’t read back any of the settings from the PIC. So you can’t just save, restore or modify the current settings. You need to do a complete reprogramming of the chip, without being able to tell how it is configured beforehand.

Now, you may think that int 15h, ah=C0h would be a nice way to check for this. It returns some feature bytes, one of which has a bit to indicate a second 8259A. But alas, this BIOS function was not present in the first revision of the AT BIOS. So you cannot assume that the machine is PC/XT class just because the BIOS does not support this function.

So I decided to check for the existence of a second PIC where the AT would normally have it, which is at I/O ports 0xA0 and 0xA1. The one thing you can read back and modify is the mask register. So, I used a simple trick to see if there was ‘memory’ at this port:

	// Check if we have two PICs
	in al, 0xA1
	mov bl, al	// Save PIC2 mask
	not al		// Flip bits to see if they 'stick'
	out 0xA1, al
	out 0xEE, al	// delay
	in al, 0xA1
	xor al, bl	// If writing worked, we expect al to be 0xFF
	inc al		// Set zero flag on 0xFF
	jnz noCascade
	mov al, bl
	out 0xA1, al	// Restore PIC2 mask
	...		// We have two PICs
noCascade:
	...		// We have one PIC

Now we can assume that there is a second PIC present. Which means we should be dealing with an AT-class machine. I wanted to make my application do a clean exit back to DOS, and restore the original state of the PICs. Now, we don’t know exactly how the PICs are initialized. All we know is that we are either dealing with a PC/XT-class machine or an AT-class machine.
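The heart of that trick can also be modeled in plain C (the port behaviour is simulated here; the real test of course requires the hardware): after writing the complement of the saved mask, reading the port back and XOR-ing with the original should yield 0xFF if, and only if, every bit ‘stuck’.

```c
#include <stdint.h>
#include <stdbool.h>

/* Model of the probe: 'readback' is what "in al, 0xA1" returns after we
   wrote the complement of 'original'. On a real slave PIC the write sticks,
   so readback == ~original; on a floating bus it generally does not. */
static bool second_pic_present(uint8_t original, uint8_t readback)
{
	/* "xor al, bl" yields 0xFF iff every bit flipped;
	   "inc al" then turns exactly that case into a zero flag */
	return (uint8_t)(readback ^ original) == 0xFF;
}
```

With a PIC present, `second_pic_present(mask, (uint8_t)~mask)` is true; with a floating bus returning, say, 0xFF regardless of what was written, it is false for any mask other than 0x00.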

So, I have studied the BIOS code for the PC, XT and AT. The PC and XT set up their PIC in the same way, so that should do for the PC/XT class with a single 8259A. And in the other case, I took the setup code for the two PICs from the AT BIOS. This means that in both cases, the PICs should be left in the same state as after a reset when my program shuts down. We can only hope that all clones work the same way.

Buffered and unbuffered modes

When studying the BIOS code, I noticed another difference between the PC/XT and the AT setup code. The PC/XT code initializes the 8259A in ‘buffered’ mode via ICW4 (it actually sets it up as ‘slave’ as well, but this bit probably does not do anything, since whether it runs in cascaded mode or not is set in ICW1, and it is configured as single). In that case, it uses the SP/EN pin to signal that its data output is enabled, so that external hardware can buffer the data.

The AT initializes its 8259A’s in non-buffered mode. This also means that the master/slave mode of each chip is triggered by the SP/EN pin (so it is now an input instead of an output), rather than by setting the mode in software via ICW4. And if we study the circuit of the AT (see page 1-76 here), we see that indeed the master PIC is wired to +5v and the slave PIC is wired to GND at the SP/EN pin.

There should be no harm in enabling buffered mode on the PICs in an AT though, and in theory you can set up the first PIC as standalone, and just configure it the same as you would on a PC/XT, ignoring the second PIC. But since we know we have to reset the PICs to the AT-specific configuration anyway, we might as well do a more ‘correct’ setup to AEOI-mode while we are at it, and stick to buffered mode for PC/XT and non-buffered mode for PC/AT.
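For reference, these are the relevant initialization bits as given in the 8259A datasheet (the constant names are my own). With these, the PC/XT BIOS setup works out to ICW1 = 0x13 with ICW4 = 0x09 (‘buffered slave’), and the AT setup to ICW1 = 0x11 with ICW4 = 0x01 (non-buffered):

```c
/* 8259A initialization command word bits, per the Intel datasheet
   (constant names are mine, not Intel's) */
#define ICW1_ICW4	0x01	/* ICW4 will follow */
#define ICW1_SINGLE	0x02	/* single (non-cascaded) mode */
#define ICW1_LTIM	0x08	/* level-triggered mode */
#define ICW1_INIT	0x10	/* required: marks the byte as ICW1 */

#define ICW4_8086	0x01	/* 8086/8088 mode (vs MCS-80/85) */
#define ICW4_AEOI	0x02	/* automatic end-of-interrupt */
#define ICW4_BUF_SLAVE	0x08	/* buffered mode, slave */
#define ICW4_BUF_MASTER	0x0C	/* buffered mode, master */
#define ICW4_SFNM	0x10	/* special fully nested mode */
```

Note that ‘buffered master’ is the buffered bit plus the master/slave bit (0x08|0x04), which is why that distinction only exists in software when buffered mode is enabled.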

Intel Inside?

Another issue is that 8259A chips are not necessarily made by Intel. Just like with early x86 CPUs, there were various ‘second source’ manufacturers of these chips, namely AMD, NEC, UMC and Siemens. You can find any one of these brands, even in original IBM machines. And like with the Motorola/Hitachi 6845 chip encountered on IBM CGA cards, it could be that these alternative suppliers may have slightly different behaviour.

Moreover, on newer systems, even XT-class clones, you will not be dealing with actual 8259A chips at all, but the logic will be integrated in multifunction chips. My Commodore PC20-III has a Faraday FE2010 chip, and my 286 has a Headland HT18/C. Both have only one chip that takes care of all the basic motherboard logic. And I have found the Headland to be somewhat picky in how you set up AEOI. With this large variety of chips out there, it may well be that there are other chips that have picky/broken/missing AEOI support. This feature was rarely used, so such problems may go completely unnoticed for the entire lifetime of a system.

Old and new 8259A

Another ‘gotcha’ can be read in the 8259A spec sheet. It says the following:

The AEOI mode can only be used in a master 8259A and not a slave. 8259As with a copyright date of 1985 or later will operate in the AEOI mode as a master or a slave.

That is rather nasty. XTs were made from 1983 to 1987, and ATs from 1984 to 1987, so either revision could be in these systems.

What are the consequences? Well, the second PIC in the AT should be running in slave mode. If it is a pre-1985 chip, then it will not work in AEOI mode. With the help of modem7, we could actually verify this on real hardware. His AT is an early model, and has pre-1985 chips, where we could not get the slave into AEOI-mode, even though we tried a few different approaches, trying to bend the rules somewhat.


So we shouldn’t try to use AEOI on the second PIC if we want to be compatible with all AT systems. Note that in cascaded mode, an EOI needs to be sent both to the master and to the slave that generated the IRQ. We can still save one EOI when the master is running in AEOI mode, so there are at least some gains.

Luckily the first PIC is the most interesting one, since it handles the things we are normally interested in, like the timer, the keyboard and disk interrupts. Early sound cards would generally also stick to the first PIC (generally IRQ5 or IRQ7), for compatibility with PC/XT systems.

It could also be that the ‘buffered slave’ setup in ICW4 may not work reliably on certain clone chips in stand-alone mode, so to be safe, you should set it to ‘buffered master’ instead, when you want to enable AEOI. I encountered this issue on a 286 clone of mine. It is a late model 286 (BIOS date 7/7/91), with integrated Headland chipset. I found that AEOI only worked when I set ‘buffered master’, or when I set it to non-buffered mode (where it would be hardwired to master). I know it was only the AEOI that did not work, because the system worked fine if I still sent manual EOI commands to the PIC.

Using AEOI with buffered master mode worked on all 8259A chips I’ve tried, old and new Intels, and various clones.

Can we detect whether enabling AEOI actually worked? Well, yes. Namely, if the PIC does not get an EOI, it will not issue a new interrupt. So what can we do? We can enable AEOI, and set up a timer interrupt. Inside the handler, we increment a counter. Our application then waits long enough for multiple timer interrupts to have fired. Then we check whether the counter has a value greater than 1. If so, then an EOI has been issued after each interrupt, so AEOI worked.
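The logic of that check can be modeled in a few lines of C (a toy simulation, not actual hardware access): the simulated PIC only delivers a new interrupt when the previous one has been acknowledged, and the `aeoi_works` flag models whether the chip really acknowledges automatically.

```c
#include <stdbool.h>

/* Toy model: simulate 'ticks' timer ticks. The PIC delivers an interrupt
   only when no earlier interrupt is still awaiting its EOI. */
static int count_delivered_interrupts(int ticks, bool aeoi_works)
{
	bool in_service = false;	/* an interrupt is awaiting an EOI */
	int counter = 0;
	for (int t = 0; t < ticks; t++) {
		if (!in_service) {
			counter++;			/* handler runs, increments counter */
			in_service = !aeoi_works;	/* AEOI acks immediately; else we stay blocked */
		}
	}
	return counter;
}
```

After waiting for, say, 10 ticks: a counter greater than 1 means AEOI worked; a counter stuck at 1 means the chip never acknowledged, and you should fall back to manual EOIs.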

You can do this for both PICs, because the master PIC has the standard 8253 PIT connected to it, and the slave PIC has the MC146818 CMOS timer connected to it. Both timers can generate interrupts at a fixed interval, so for both cases you can set up an interrupt handler with a counter.

And what about the PS/2?

After the AT, IBM decided to set up a new standard, the PS/2, which was not entirely backward-compatible with the PC platform. The PS/2 is very PC-like though, in that they still use x86 processors, most of the hardware is very similar (in fact, the VGA standard was introduced on the PS/2 line and adopted by PC clones), it has an AT-compatible BIOS (as well as a new Advanced BIOS) and it runs DOS as well.

And indeed, in the PS/2 we also find the trusty two 8259A’s that we know from the AT. However, because it doesn’t use the ISA bus, but the new MCA bus, there is a difference. On the ISA bus, interrupts are edge-triggered. On the MCA bus however, they are level-triggered. This means that PS/2 systems need yet another 8259A setup and restore routine. So you will need to detect whether you are running on a PS/2 system or not. You could use int 15h, ah=C0h for this (all PS/2 systems support it), and check for MCA support in the feature bytes returned in the table.

What happened after that?

MCA was the first ‘new’ bus architecture for the PC platform, where the engineers figured that level-triggered interrupts were nicer than edge-triggered ones. For later buses, such as EISA and PCI, engineers came to the same conclusion. When they were working on EISA, they had to solve a problem: how do we maintain backward compatibility with ISA?

They solved this by modifying the 8259A design somewhat. Instead of a global setting for edge-triggered or level-triggered interrupts, this can be set on a per-interrupt basis. An Edge/Level Control Register (ELCR) was added for each PIC: the master PIC’s ELCR is at I/O port 0x4D0, and the slave PIC’s at 0x4D1. As with the interrupt mask register, each bit corresponds to one of the interrupt lines. When set to 0, that interrupt line is edge-triggered; when set to 1, it is level-triggered. The global setting in the legacy 8259A register is ignored (these systems never use real 8259A chips, but always have the logic integrated into the chipset).
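The bookkeeping involved can be sketched in C (helper names are mine; the actual in/out port access is left out). IRQs 0-7 live in the master ELCR at 0x4D0, IRQs 8-15 in the slave ELCR at 0x4D1, one bit per line:

```c
#include <stdint.h>

#define ELCR_MASTER	0x4D0	/* edge/level control for IRQ0-7 */
#define ELCR_SLAVE	0x4D1	/* edge/level control for IRQ8-15 */

/* Which ELCR port, and which bit within it, controls a given IRQ line (0-15) */
static uint16_t elcr_port(int irq)	{ return (irq < 8) ? ELCR_MASTER : ELCR_SLAVE; }
static uint8_t  elcr_bit(int irq)	{ return (uint8_t)(1u << (irq & 7)); }

/* Compute the new ELCR value: bit set = level-triggered, clear = edge-triggered */
static uint8_t elcr_set_level(uint8_t elcr, int irq, int level)
{
	return level ? (uint8_t)(elcr | elcr_bit(irq))
	             : (uint8_t)(elcr & ~elcr_bit(irq));
}
```

In practice you would read the current ELCR byte from the port, pass it through `elcr_set_level`, and write it back, exactly as you would with the interrupt mask register.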

So basically, we do not have to care about edge/level triggering for newer systems. We don’t reprogram those registers when we enable the AEOI flag in the PIC, so they should retain the proper configuration. If you are interested in EISA, you can read more about it in this book.

So what do we do?

We basically have three types of configurations:

  1. PC/XT: Single 8259A, edge-triggered
  2. AT: Cascaded 8259A, edge-triggered
  3. PS/2: Cascaded 8259A, level-triggered (although there are ISA-based PS/2 systems, I believe all MCA-based PS/2 systems are AT-class or better)

We know how to detect which configuration we have (check if the mask of the second PIC can be written and read back, or use int 15h, ah=C0h to get system information, if that function is supported). At the very least, we know that standalone and master 8259A’s can all run in AEOI mode. So we can make three different routines to initialize the first 8259A to AEOI mode. And we can also make three different routines to initialize them back to their default mode on application exit.

To make this easier to manage, I made a simple helper function to set the different ICW values. There are 4 in total, but ICW3 and ICW4 can be optional in some cases, which complicates things somewhat. So I created a function to deal with that. Note that I write to an unused port to delay IO somewhat. For PC/XT machines this is not required. For AT’s, it is. IBM uses jmp $+2 delays in its code, which works well enough on a real AT, but on faster/newer systems (386/486), it is better to delay with a write to a port. I use port 0xEE, because that port is not used by anything:

void InitPIC(uint16_t address, uint8_t ICW1, uint8_t ICW2, uint8_t ICW3, uint8_t ICW4)
{
	_asm {

		mov dx, [address]
		inc dx
		in al, dx	// Save old mask
		mov bl, al
		dec dx

		mov al, [ICW1]
		out dx, al
		out DELAY_PORT, al	// delay
		inc dx
		mov al, [ICW2]
		out dx, al
		out DELAY_PORT, al	// delay

		// Do we need to set ICW3?
		test [ICW1], ICW1_SINGLE
		jnz skipICW3

		mov al, [ICW3]
		out dx, al
		out DELAY_PORT, al	// delay
skipICW3:
		// Do we need to set ICW4?
		test [ICW1], ICW1_ICW4
		jz skipICW4

		mov al, [ICW4]
		out dx, al
		out DELAY_PORT, al	// delay
skipICW4:
		mov al, bl		// Restore old mask
		out dx, al
	}
}

With this helper-function, it becomes reasonably easy to initialize the PICs to auto-EOI mode and set them back to regular operation:

void SetAutoEOI(MachineType machineType)
{
	switch (machineType)
	{
		case MACHINE_PCXT:
			// Buffered master: 'buffered slave' + AEOI is unreliable on some clones
			InitPIC( 0x20, ICW1_INIT|ICW1_SINGLE|ICW1_ICW4, 0x08, 0, ICW4_8086|ICW4_BUF_MASTER|ICW4_AEOI );
			break;
		case MACHINE_AT:
			// Master only: pre-1985 slaves do not support AEOI
			InitPIC( 0x20, ICW1_INIT|ICW1_ICW4, 0x08, 0x04, ICW4_8086|ICW4_AEOI );
			break;
		case MACHINE_PS2:
			// Same, but level-triggered
			InitPIC( 0x20, ICW1_INIT|ICW1_LTIM|ICW1_ICW4, 0x08, 0x04, ICW4_8086|ICW4_AEOI );
			break;
	}
}

void RestorePICState(MachineType machineType)
{
	switch (machineType)
	{
		case MACHINE_PCXT:
			InitPIC( 0x20, ICW1_INIT|ICW1_SINGLE|ICW1_ICW4, 0x08, 0, ICW4_8086|ICW4_BUF_SLAVE );
			break;
		case MACHINE_AT:
			InitPIC( 0x20, ICW1_INIT|ICW1_ICW4, 0x08, 0x04, ICW4_8086 );
			InitPIC( 0xA0, ICW1_INIT|ICW1_ICW4, 0x70, 0x02, ICW4_8086 );
			break;
		case MACHINE_PS2:
			InitPIC( 0x20, ICW1_INIT|ICW1_LTIM|ICW1_ICW4, 0x08, 0x04, ICW4_8086 );
			InitPIC( 0xA0, ICW1_INIT|ICW1_LTIM|ICW1_ICW4, 0x70, 0x02, ICW4_8086 );
			break;
	}
}

If you set up a detection routine with a timer and a counter in the handler, as mentioned before, you could try a few variations of setting up AEOI and check whether they worked, to make things more robust for ‘wonky’ 8259A clones, and perhaps to detect problems and bail out with a warning to the user, rather than crashing their system because you assumed it would just work. Of course, there is still the risk that the system doesn’t use the ‘default’ setup you’ve assumed (I have taken the above values from the original IBM BIOSes for the PC and AT), which means that by the time you detect that AEOI doesn’t work, it is already too late: you have changed the PIC configuration, and you don’t know the initial state. So it may be best to warn the user beforehand that he may have to reset his system if things go wrong.

If you’re interested, you can download the source and binary of my simple test-program here. If you find any strange quirks on your 8259A chips, please let me know in the comments what chips you tested, and what strange things you saw.

And what did all this earn us?

Well, we can just save on these two instructions in our interrupt handler now:

// Send end-of-interrupt command to PIC
mov al, 0x20
out 0x20, al

Somewhat of a Pyrrhic victory, you say? Well, indeed, it’s a lot of trouble for very small gains, but any gains are welcome, and once you get this working, you can just stop worrying about it and reap the benefits, modest as they may be. Especially with high-frequency timer handling, such as playback of digital audio, it may just give you that extra ‘push over the cliff’, to speak with Nigel Tufnel. These interrupts go to 11!

Bonus material

When I was playing around with the 8259A stuff, Trixter pointed me to an article in this old magazine, which covers programming the 8259A in great detail (see page 173 and further). It does not really go into AEOI much, but it covers pretty much everything else, such as the priority schemes, and is a great read. Priority is the trade-off you’re making when enabling AEOI: each interrupt is immediately acknowledged, and can fire again as soon as the CPU is ready to receive interrupts. If you acknowledge the interrupts manually, you get control over which interrupts may fire when. For games and demos this is not such a big issue, because we generally don’t need to service that many interrupts anyway, and we can mask out anything we’re not interested in at any given time.

Amazing how in-depth and technical articles were in regular PC magazines back in the day, compared to how dumbed-down everything is today in mainstream media.
