Pseudoscience: DNA activation

The second ‘treatment’ offered on the site promoting Kangen Water is something called ‘DNA Activation’. In this case, they do not even try to keep up a scientific appearance. They flat-out reject the ‘establishment’ theory of DNA, as in conventional science, and present their own. I suppose this strategy was chosen because the two-stranded ‘double helix’ shape of DNA is an iconic image that most people will be familiar with. With Kangen water, on the other hand, they probably figure the clustering theory may actually sound plausible, since most people will not know anything about the molecular structure of water at that level.

What is it?

To be honest, it is not quite clear what it actually is. They claim that DNA actually has far more than just 2 strands, namely 12, which they refer to as ‘junk DNA’, and that these extra strands can be ‘activated’. However, it is never made clear how exactly these extra strands would be activated, or why this activation would have any kind of effect. There is a lot of talk about spiritual and even extra-terrestrial concepts, but it mostly reads like some strange conspiracy theory, while never going into the details of the activation process at all.

What are the claimed benefits?

Again, not a lot of concrete information here. You get phrases such as “Connecting to your higher self and your divine purpose”: quite hollow rhetoric. But perhaps that is the idea: anyone can fill in these hollow phrases with something they actually desire.

In short, I think it is supposed to make you feel better, whatever that means specifically.

So what is the problem?

Well, for starters, they do not even get the conventional theory of DNA correct. The site says: “At this moment most people on the planet have 2 double helix strands activated.”

No, conventional theory says that two strands of molecules bond together to form a single double-helix molecule.

Another flaw is that they take ‘junk DNA’ literally, as if it is useless. The term ‘junk DNA’ is indeed used by conventional science, but it is meant to describe the parts of the DNA that are ‘non-coding’ in terms of genetic information. Science does not claim it is useless. In fact, since DNA is closely tied to the theory of evolution, truly useless features would be expected to eventually evolve away. The fact that junk DNA still exists in all lifeforms would indicate that this particular form of DNA was preferred through natural selection. It is believed that junk DNA actually performs a role in the replication of DNA during mitosis. Aside from that, it is also believed that the non-coding parts of DNA help prevent mutations, since only a small part of the molecule actually carries the ‘critical’ genetic information. Think of it as the organic equivalent of a lightning rod.

Another issue is that they try to connect the function of the strands of DNA to ancient Indian concepts such as chakras. They also claim that DNA activation was already practiced by ancient civilizations.

The obvious problem there is that these ancient civilizations had no idea what DNA was, since it wasn’t discovered until 1869, and the current theory of the double-helix shape was not formed until 1953.

This theory was of course not just pulled out of thin air. The researchers used the technique of X-ray diffraction to study the structure of the molecule. The image known as “Photo 51” provided key evidence for the double-helix structure. If DNA in fact had far more strands, this image would have looked very different. And X-ray diffraction is old technology, of course. Better methods for imaging DNA have been developed since, and more recently, with the work of some Italian researchers, it has become possible to take direct images of DNA. These images still confirm the double-helix model, with 2 strands.

[Image: direct imaging of DNA fibers]

Aside from that, the whole mechanics of DNA activation are not clear to me. Even if we assume that there were more than 2 strands, how exactly would one activate them? The DNA is in every cell of your body, so you would have to activate trillions of cells at once. And our cells are constantly dividing and duplicating, so how do you activate the DNA when it is being replicated all the time? It would have to be done almost instantly, or else you get a new cell with new DNA that is not activated yet, and you have to start all over.

And even if you ‘activate’ this DNA, what exactly does that mean? It seems to imply that it suddenly unlocks genetic information that was ignored until now. But that does not make sense. Because if this genetic information can be unlocked through ‘activation’, it could also be unlocked ‘by accident’ through mutation. So some people would be born with ‘activated’ DNA. And if this ‘activated’ DNA is indeed superior, then these people would evolve through natural selection at the cost of the inferior ‘non-activated’ people. Unless of course they want to deny the whole theory of evolution as well. But then, why bother to base your theory on DNA and genetics in the first place?

The mind boggles…

Lastly, I think we can look back at Kuhn’s criteria. Similar to Occam’s Razor, Kuhn states that the simplest explanation is preferred over more complex ones. The theory of 12 DNA strands is certainly more complex than the theory of 2 DNA strands. They could have formulated their theory with just 2 DNA strands anyway, since they seem to base the activation on the ‘junk DNA’. There was no need to ‘invent’ extra strands for the junk DNA: conventional science had already stated that some 98% of the DNA molecule is non-coding. They could have just gone with that. It seems, by the way, that this ‘DNA activation’ or ‘DNA perfection’ theory originated with Toby Alexander. That should give you some more leads.

Final words

But wait, you say. Just because certain theories do not fit into the current paradigm does not mean they are necessarily wrong. As Kuhn said, every now and then it is time for a revolution to redefine the paradigm. Yes, but Kuhn also said that there needs to be a ‘crisis’ with the current paradigm: observations that cannot be explained by the current theories. And the new theories should explain things better than the current ones. Take Einstein and Newton: the Newtonian laws were reasonably accurate, but Einstein ran into some things that could not be explained by them. Einstein’s new laws were a more accurate model, which could explain everything the Newtonian laws did, and more.

In this case, we do not have a crisis. And the new theories are not a refinement of our current paradigm, but are actually in conflict with observations that ARE explained properly by the current paradigm. It seems that these theories are mainly designed to solve the ‘crisis’ of trying to explain whatever it is that these people want to sell you.

And since the topics I have discussed here are scientific, as per Merton’s norm of communalism, all the knowledge is out there, available to everyone. You do not have to take my word for it, I merely point out some of the topics, facts and theories which you can verify and study yourself by finding more information on the subject, or talking to other scientists.

In the name of proper science, I also had these articles peer-reviewed by other scientists in relevant fields.

By the way, the aforementioned page doesn’t just stop at Kangen Water and DNA Activation. There is also a page on ‘Star Genie crystals’. I figured that topic is so obviously non-scientific that I do not even need to bother covering it. These pseudoscientists move quickly though. As I was preparing these articles, a new page appeared, about ‘Schumann Resonance’. Again, taking something scientific (Schumann resonance itself is a real phenomenon), and making it into some kind of spiritual mumbo-jumbo. And mangling the actual science in the process. For example, it makes the following claim:

“Raise your frequency, enrich your life!”

Well, then you don’t understand resonance. Resonance is the phenomenon where one oscillating system ‘drives’ another to increase its amplitude at a preferential frequency. So resonance can raise your amplitude, but not your frequency. Which is also what the Schumann resonance pseudoscience *should* be about. Namely, it is based on the fact that alpha brainwaves are at a frequency very close to the strongest Schumann resonance of the Earth (around 7.83 Hz). So the pseudoscience claim should be along the lines of your brain being able to ‘tune in’ to this ‘Earth frequency’. But I suppose this particular website does not even know or care what they’re trying to sell, as long as it sells.

If no single one of these scams on its own makes you doubt the trustworthiness of this site, then the fact that every single ‘treatment’ offered there turns up tons of discussion on scientific/skeptic-oriented sites should at least make you think twice. This is not just coincidence.


Pseudoscience: Kangen Water

I suppose that most of you, like myself, had never heard of Kangen water before. So let’s go over it in a nutshell.

What is it?

Kangen water is water that is filtered by a special water purification device, sold by a company named Enagic.

The device uses electrolysis to ionize the water, making it alkaline (pH greater than 7), and it uses special filters, which are claimed to ‘cluster’ the water molecules.

What are the claimed benefits?

This depends greatly on who you ask. On Enagic’s site itself, you will see that there is not that much information about anything; you even have to find a distributor just to learn the price. More on that later. The distributors tend to make claims about Kangen water being beneficial to your health, because of things like hydration, detoxification effects, restoring the acid-alkaline balance of your body, and anti-oxidants. Claims can go as far as Kangen water preventing cancer.

Click here and here for a typical example of a site promoting Kangen water.

So what is the problem?

On the surface, it all looks rather scientific, with all the technical terms, diagrams, and videos with simple demonstrations of fluid physics, references to books and people with scientific degrees. But is any of it real, can the claims be verified independently?

One of the first clues could be the water clustering. The site refers to the book “The water puzzle and the hexagonal key” by Dr. Mu Shik Jhon. The idea of hexagonal water is not accepted by conventional science. While water clusters have been observed experimentally, these clusters are volatile, because hydrogen bonds form and break continuously at an extremely high rate. It has never been proven that there is a way to get water into a significantly stable form of clusters. Another name that is mentioned is Dr. Masaru Emoto. That is a ‘doctor of alternative medicine’, from some open university in India. He takes the water cluster/crystal idea further, and even claims that speech and thought can influence the energy level of these water crystals. Clearly we have stepped well into the realm of parapsychology here, which again is not accepted by conventional science, due to lack of evidence and verification.

This idea of water structures or ‘water memory’ is actually quite old, and has often been promoted in the field of homeopathy, as a possible mechanism to explain homeopathic remedies. You could search for the story of Jacques Benveniste and his paper published in the science journal Nature, Vol. 333 on 30 June 1988. When Benveniste was asked to repeat his procedures under controlled circumstances in a double-blind test, he failed to show any effects of water memory.

A similar story holds for anti-oxidants. A few years ago there was a ‘hype’ about anti-oxidants, connected to the free-radical theory of aging, which later turned into the ‘superfoods’-craze. Studies showed very good health-benefits of anti-oxidants, and many food companies started adding anti-oxidant supplements and advertising with them.

More recently however, studies have shown that anti-oxidant therapy has little or no effect, and in fact can be detrimental to one’s health in certain cases. The current stance of food authorities is that the free-radical theory of aging has no physiological proof ‘in vivo’, and therefore the proclaimed benefits of anti-oxidants have no scientific basis.

Likewise there is no scientific basis for any health effects of alkaline water. Physiologically it even seems unlikely that it would have any benefit at all. As soon as you drink the water, it comes into contact with stomach acid, which will lead to a classic acid-base reaction, neutralizing the ionization of the water immediately. Which is probably a good thing, because if it actually did have an effect on your body pH, drinking too much of this water could be dangerous.

Because the body pH is important, the body is self-regulating, with a process known as acid-base homeostasis. The body has several buffering agents to regulate the pH very tightly. In a healthy individual there is no need for any external regulation of body pH, and in fact it is your body that decides the pH. This can be done mainly by two processes:

  1. By controlling the rate of breath, changing CO2 levels in the blood.
  2. Via the kidneys. Excess input of acid or base is regulated and excreted via the urine.

Note also that this is a balance. The claims generally imply that acidic is bad and alkaline is good. But in reality too alkaline (alkalosis) is just as bad as too acidic (acidosis). It is all about the balance.

You can find various information on this and other water scams on the net, from reputable sources such as this overview by the Alabama Cooperative Extension System.

So, there is no scientific basis whatsoever that Kangen water has any positive effect on your health, or even that the Kangen machine can give the water some of the claimed properties, such as hexagonal clustering. Which might explain how these machines are marketed. The distribution is done through a multi-level marketing scheme, also known as a pyramid scheme. You can find the agreement for European distributors here. Note the very strict rules about advertisement. They do not want the Enagic name used anywhere, unless they have specifically checked your content and have given you written approval. Apparently they most certainly do NOT want you to make claims that the product or company cannot back up.

Another red flag is that the advertisements are generally done via ‘testimonials’: people who tell about their experiences with the product. They tend to be the ones that make claims about health and that sort of thing. The key here is that a seller is never responsible for what anyone says in a testimonial. So beware of that: any claims the seller does not explicitly make, but are only put forward in a testimonial, can likely not be backed up. Otherwise the seller would just make these claims himself, in order to promote the product.

The pyramid scheme also makes these machines very expensive, because each seller at each level will want to make a profit, causing a lot of inflation of the price. Based on the parts used in a Kangen machine, the whole thing could probably be built for under 100 euros. But these machines actually go for prices in excess of 1000 euros. This is why they don’t list prices on the main website, but tell you to contact your distributor. They do have a web store, but as you can see, those prices are very high as well. If you are lucky enough to find a distributor who is high up in the hierarchy, you can find these machines for less. Try searching eBay for these machines, for example.

This so-called Kangen water and machines for ionizing water can be traced back to the 1950s in Japan. One would expect that if this Kangen water were indeed as healthy and beneficial as claimed, there would be plenty of empirical evidence to support the claims by now, and these machines would have become mainstream and would just be sold in regular shops.

So, the Kangen machines appear to be a case of ‘cargo cult’ science: they make everything look and feel like legitimate science, but if you dig a little deeper, you will find that there is no actual scientific basis, and the references are mostly to material that is not accepted by conventional science, but considered to be of a pseudoscientific nature.

In fact, this particular seller seems to push things a bit TOO far, by also mentioning the ‘Bovis scale’, which is a common concept in dowsing… Which is a more ‘conventional’ type of pseudoscience. Similar to the topic I will be covering next time.


The philosophy of science

And now for something completely different… I have been critical of hardware and software vendors and their less ethical actions in the past. But a while ago, something happened that did not have anything to do with computer science at all. But it did have to do with ethics and science. Or rather, pseudoscience.

Namely, someone who I considered to be a good friend got involved with a man who was selling Kangen water machines and performing “DNA activation”. As I read the articles on his website, I was reminded of the philosophy of science courses at my university. About what is real science and what is not. What is a real argument, and what is a fallacy.

There was also the ethical side of things. Clearly she did not realize that this man was a fraud. But should she be told? I believe that everyone is free to make their own choices in life, and to make their own mistakes. But at the least they should have as much information as possible to try and make the right decisions.

Anyway, so let’s get philosophical about science.

Demarcation problem

Before we can decide what is science and what is not, we must first define what science is. This is actually a very complicated question, which people have tried to answer throughout history, and can be traced back all the way to the ancient Greek philosophers. It is known as the ‘demarcation problem’.

Part of the problem is that science has been around for a long time, and not everything that was historically seen as part of science actually fits more modern views of it. And if you approach the problem from the other side, trying to make your criteria fit the historical, or ‘de facto’, sciences as well, those criteria may end up too loose, since scientific methodologies have evolved over time as human knowledge grew.

There are two ways to approach the problem. One way is a descriptive definition, trying to name characteristics of science. Another way is a normative definition, defining an ideal view of what science should be. This implies however that humans have some ‘innate sense’ of science.
Neither approach will lead to a perfect definition of science, but there is something to be said for both. So let us look at some attempts at defining science throughout the ages.

Sir Francis Bacon

Seen as one of the ‘fathers of modern science’, Sir Francis Bacon (1561-1626) wrote a number of works on his views of science, where he put forward various ideas and principles which are still valid and in use today. In his day, science was still seen as classifying knowledge, and being able to logically deduce particular facts from the larger system of knowledge, from more general facts: deductive reasoning. In such a system of science, progress in the sense of new inventions and discoveries could never happen, because they would not fit into the existing framework of knowledge.

So instead of deductive reasoning, Bacon introduced a method of inductive reasoning: the exact opposite. Instead of deducing specifics from generics, specific observations are generalized into more universal theories by looking at connections between the observations.
However, the potential flaw here is that humans are fallible and their senses can deceive them. So Bacon and other philosophers have tried to identify the pitfalls of human error, and have tried to come up with methodologies to avoid error, and find the truth. You might have heard one of Bacon’s messages: “ipsa scientia potestas est” (knowledge itself is power).

Scientists have to be skeptical, objective and critical. They also have to form a collective, and check each other’s work, in order to ‘filter’ or ‘purify’ the knowledge from human error. Scientific knowledge is universal, and belongs to everyone. The goal of scientific research should be to extend our knowledge, not to serve any particular agenda. These ideals and principles were later characterized by Robert K. Merton in his four ‘Mertonian norms’:

  • Communalism
  • Universalism
  • Disinterestedness
  • Organized Skepticism

These norms are also known by the acronym of ‘CUDOS’, formed by the first letters of each word.

Karl Popper

In the early 1900s, a movement within philosophy emerged, known as logical positivism. It was mainly propagated by a group of philosophers and scientists known as the Wiener Kreis (Vienna Circle). Logical positivism promoted the verifiability principle: a statement only has meaning if it can objectively be evaluated to be true or false, in other words, only if it can be verified. This rejects many statements, such as religious ones, for example: “There is a god”. There is no way to prove or disprove such a statement. Therefore, it is cognitively meaningless.

While there certainly is some merit to this way of thinking, it also leads to a human error, namely that of confirmation bias. Let us use an abstract example (yes, the computer scientist will use numbers of course). In scientific research, you will perform experiments to test certain theories. These experiments will give you empirical measurement data. To abstract this, let us use numbers as our abstract ‘measurements’.
Consider the following statement:

All numbers greater than -5 are positive.

Now, there are many numbers you can think of which are both positive and greater than -5. In fact, there are infinitely many such numbers. So, every time you find such a number, you confirm the statement. But does that make it true? No, it does not. We only need to find one number that is greater than -5, but not positive, and we have refuted the statement. In this abstract example, we already know beforehand that the statement is false. But in scientific research you do not know the outcome beforehand, which is why you perform the experiments. But if your experiments never explore the area in which the tests would fail, you will never see any evidence that the theory is flawed, even though it is.
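
To make the asymmetry concrete in code (a toy sketch, nothing more): a test loop that only samples the ‘comfortable’ positive range will keep confirming the statement forever, while a single probe in the right region refutes it immediately.

#include <stdio.h>

/* The statement "all numbers greater than -5 are positive" fails exactly for
   the numbers -4..0, so a test that never samples that range will 'confirm'
   the statement no matter how often it runs. */
int statement_holds_at(int n)
{
	return !(n > -5 && n <= 0);
}

int main(void)
{
	int n;

	/* 'Verification': a million confirming test cases, zero counterexamples. */
	for (n = 1; n <= 1000000; n++)
		if (!statement_holds_at(n))
			printf("refuted by %d\n", n);	/* never reached */

	/* 'Falsification': one probe in the critical region is enough. */
	for (n = -10; n <= 10; n++)
		if (!statement_holds_at(n)) {
			printf("refuted by %d\n", n);	/* prints -4 and stops */
			break;
		}

	return 0;
}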

This, in a nutshell, is the criticism that Karl Popper (1902-1994) had on logical positivism. Scientists would believe that a theory would become more ‘true’ the more tests were done to confirm it, and at some point it would be considered an absolute truth, a dogma. However, a theory would only need to fail one test to refute it. Falsifying is a much stronger tool than verifying. Which is exactly what we want from science: strong criticism.

Popper was partly inspired by the upcoming psychoanalysis movement (Freud, Adler, Jung), where it was easy to find ‘confirmation’ of the theories they produced, but it was impossible to refute these theories. Popper felt that there was something inherently unscientific, or even pseudoscientific about it, which led to him finding new ways to look at science and scientific theories. The scientific value of a theory is not the effects that can be explained by the theory, but rather the effects that are ruled out by the theory.

Thomas S. Kuhn

One great example of the power of falsifiability is Einstein’s theory of relativity. The Newtonian laws of physics had been around for some centuries, and they had been verified many times. Einstein’s theories predicted some effects that had never been seen before, and would in fact be impossible under the Newtonian laws. But Einstein’s theory could be falsified by a simple test: according to his theories, a strong gravitational field, such as the sun’s, would bend light more than Newtonian physics implies, because it distorts space itself. This would mean that during a solar eclipse it should be possible to see stars that are actually just behind the edge of the sun. The light of stars that appear close to the edge of the sun has to pass through the sun’s ‘gravitational lens’, and its path is warped.
Einstein formulated this theory in 1915, and in 1919 there was a solar eclipse where the theory could be put to the test. And indeed, when the astronomers determined the position of a star near the edge of the sun, its position was shifted compared to the ‘normal’ position, by the amount that Einstein had calculated. The astronomers had tried to falsify the theory of relativity, and it passed the test. This meant that the Newtonian laws of physics, as often as they had been validated before, must be wrong.

The main problem with Popper’s falsifiability principle however is that it is not very practical in daily use. Most of the time, you want to build on a certain foundation of knowledge, in order to apply scientific theories to solve problems (‘puzzles’). So Thomas S. Kuhn (1922-1996) proposed that there are two sides to science. There is ‘normal science’, where you build on the existing theories and knowledge. And then there are exceptional cases, such as Einstein’s theory, which are a ‘paradigm shift’, a revolution in that field of science.

During ‘normal science’, you generally do not try to disprove the current paradigm. When there are tests that appear to refute the current paradigm, the initial assumption is not that the paradigm is wrong, but rather that there was a flaw in the work of the researcher; it is just an ‘anomaly’. However, at some point, the number of ‘anomalies’ may stack up to a level where it becomes clear that there is a problem with the current paradigm. This leads to a ‘crisis’, where a new paradigm is required to explain the anomalies more accurately than the current one. This leads to a period of ‘revolutionary science’, where scientists will try to find a better paradigm to replace the old one.

This also leads to competing theories on the same subject. Kuhn laid out some criteria for theory choice:

  1. Accurate – empirically adequate with experimentation and observation
  2. Consistent – internally consistent, but also externally consistent with other theories
  3. Broad Scope – a theory’s consequences should extend beyond that which it was initially designed to explain
  4. Simple – the simplest explanation, principally similar to Occam’s razor
  5. Fruitful – a theory should disclose new phenomena or new relationships among phenomena

Logic and fallacies

Scientific theories are all about valid reasoning, or in other words: logic. The problem here is once again human error. A scientist should be well-versed in critical thinking, in order to detect and avoid common pitfalls in logic, known as fallacies. A fallacy is a trick of the mind where something might intuitively sound logical, but if you think about it critically, you will see that the drawn conclusion cannot be based on the arguments presented, and therefore is not logically sound. This does not necessarily imply that the concluded statement itself is false, merely that its validity does not follow from the reasoning (which is somewhat of a meta-fallacy).
There are many common fallacies, so even trying to list them all goes beyond the scope of this article, but it might be valuable to familiarize yourself with them somewhat. You will probably find that once you have seen them, it is not that hard to pick them out of written or spoken text; you will quickly be triggered by a fallacy.
You should be able to find various such lists and examples online, such as on Wikipedia. Which brings up a common fallacy: people often discredit information linked via Wikipedia, based on the claim that Wikipedia is not a reliable source. While it is true that not all information on Wikipedia is accurate or reliable, that is no guarantee that all information on Wikipedia is unreliable or inaccurate. This is known as the ‘fallacy of composition’. Harvard has a good guide on how to use Wikipedia, and says it can be a good way to familiarize yourself with a topic. So there you go.

Fallacies are not necessarily deliberate. Sometimes your mind just plays tricks on you. But in pseudoscience (or marketing for that matter), fallacies are often used deliberately to make you believe things that aren’t actually true.

So, now that we have an idea of what science is, or tries to be, we should also be able to see when something tries to look like science, but isn’t.

As a nice example, I give you this episode of EEVBlog, which deals with a device called the ‘Batteriser’.

It claims to boost battery life by up to 800%, but the technology behind it just doesn’t add up. Note how they cleverly manipulate all sorts of statements to make you think the product is a lot more incredible than it actually is. And note how they even managed to manipulate a doctor from San Jose State University into ‘verifying’ the claims, giving you the false impression that (all of) the claims of this product are backed up by an ‘authority’.


The myth of HBM

It’s amazing, but AMD has done it again… They have managed to trick their customer base into believing yet another bit of nonsense about AMD’s hardware.

This time it is about HBM. As we all know, AMD has traded GDDR5 memory for HBM on their high-end cards, delivering more bandwidth. The downside is that with the current state of technology, it is not feasible to put more than 4 GB on a GPU. Meanwhile, AMD’s own GDDR5 cards already have 8 GB on board, and the competing nVidia cards are available with 6 or even 12 GB of memory.

So far, so good. Now, the problem is that AMD somehow brainwashed their followers into believing that more bandwidth can compensate for less capacity. So everywhere on the forums you read people arguing that 4 GB is not a problem because it’s HBM, not GDDR5.

The video memory on a video card acts mostly as a texture and geometry cache. For the GPU to reach its expected level of performance, it needs to be able to access its textures, geometry and other data from the high-speed memory on the video card, rather than from the much slower system memory.

As long as your data fits inside video memory, the bandwidth determines your performance. However, as soon as you run into the capacity limit of your video memory, you need to start paging in data from system memory. Since system memory is generally an order of magnitude slower than video memory, the speed of the video memory is completely irrelevant. The speed at which the data can be transferred to video memory is completely bound by the system memory speed.

So, HBM in no way makes paging data in and out of system memory any faster than any other memory technology would. Therefore the only performance problem we’re dealing with here is the point at which the paging becomes necessary. Which is solely dependent on capacity. A card with 4 GB will hit that point sooner than a card with 6, 8 or 12 GB. And when that point is hit, performance will become erratic, because your game will have to page textures periodically, resulting in frame drops. That should be pretty easy to understand for anyone who bothers to think it through for a few moments.
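
To put some numbers on that (hypothetical, but realistic for 2015-era hardware): suppose half a gigabyte of texture data no longer fits in VRAM and has to come in over a PCIe 3.0 x16 link with an effective throughput of roughly 13 GB/s. The little calculation below shows that the stall is measured in whole frames, and note that the VRAM bandwidth does not appear anywhere in it.

#include <stdio.h>

int main(void)
{
	/* Hypothetical numbers, for illustration only. */
	double overflow_mb     = 512.0;			/* texture data that no longer fits in VRAM */
	double link_gb_per_s   = 13.0;			/* rough effective PCIe 3.0 x16 throughput */
	double frame_budget_ms = 1000.0 / 60.0;		/* 60 fps */

	double stall_ms = overflow_mb / (link_gb_per_s * 1024.0) * 1000.0;

	printf("Paging %.0f MB from system memory takes ~%.1f ms,\n", overflow_mb, stall_ms);
	printf("which is %.1f frames of the %.1f ms budget at 60 fps.\n",
		stall_ms / frame_budget_ms, frame_budget_ms);

	return 0;
}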

The one thing you can say is that because the initial performance is higher, it can ‘absorb’ a frame drop slightly better. That is, if you only look at average framerates. If you look at frame times, you’ll still see nasty jitter, and the overall experience will be far from smooth. You will be experiencing stutter every time the system has to wait for a new texture or other data to be loaded.


PC-compatibility, it’s all relative

Update 21-12-2015: I have updated some of the information after testing on an AT with old Intel 8259A chips, and added some extra information on EISA and newer systems.

I would like to pick up where I left off last time, and that is with the auto-end-of-interrupt feature of the Intel 8259A PIC used in PC-compatibles. At the time I had a working proof-of-concept on my IBM PC/XT 5160, but not much more. The plot thickened when I wanted to make my routine generic for any PC-compatible machine. Namely, as we have already seen with 8088 MPH, PC-compatibility is a very relative notion. For example, EGA/VGA cards have very limited backward compatibility with CGA. And even among original IBM CGA cards, there are some notable differences.

The story of the 8259A PIC is another case where there are some subtle and some not-so-subtle differences between machines.

Classes of PCs

I think we should start by defining what types of PCs IBM has offered. So let’s start at the beginning, and have a quick look at some of the defining hardware specifications.

IBM 5150 PC

The first PC was quite a modest machine:

  • 8088 CPU at 4.77 MHz
  • Single 8259A Programmable Interrupt Controller
  • Single 8237 DMA controller
  • 8253 Programmable Interval Timer
  • PC keyboard interface
  • 5 wide 8-bit ISA expansion slots
  • CGA and/or MDA video
  • IBM Cassette BASIC ROM
  • Tape interface

IBM 5160 PC/XT

The XT is a slight variation of the original PC, where the tape interface was dropped (but the Cassette BASIC ROM was kept, since the other versions of BASIC were not standalone, but extensions to this BASIC), and the ISA expansion slots were placed closer together and increased to a total of 8. The XT became the standard, and most clones were modeled after the XT, with the main difference being the lack of the ROM BASIC. So:

  • 8088 CPU at 4.77 MHz
  • Single 8259A Programmable Interrupt Controller
  • Single 8237 DMA controller
  • 8253 Programmable Interval Timer
  • PC keyboard interface
  • 8 narrow 8-bit ISA expansion slots
  • CGA and/or MDA video
  • IBM Cassette BASIC ROM

In most cases, the PC and XT can be lumped together into the same class. The missing cassette interface only makes a difference if you actually wanted to use a cassette. But floppies and later harddrives became the storage of choice for PC, so cassette was never really used. Likewise, the slightly different form-factor of the ISA slots doesn’t make much difference either. The IBM 5155 Portable PC also uses the same motherboard as the PC/XT, and works exactly the same as well.

IBM 5170 PC/AT

The AT was quite a departure from the original PC and XT. It bumped up the platform to 16-bit, had more interrupt and DMA channels, and also introduced a new keyboard interface with bi-directional communication. This was again the blueprint for many clones, and later 32-bit machines (386 and higher) largely maintained the same hardware capabilities (in fact, even your current PC will still be backward-compatible):

  • 80286 CPU at 6 MHz
  • Two 8259A PICs, in cascaded master/slave arrangement
  • Two 8237 DMA controllers, cascaded for 16-bit DMA transfers
  • 8253 Programmable Interval Timer
  • AT keyboard interface
  • 6 narrow 16-bit ISA expansion slots (backward compatible with 8-bit XT slots) and 2 narrow 8-bit ISA expansion slots
  • MC146818 real-time clock and timer
  • CGA, EGA or MDA video
  • IBM Cassette BASIC ROM

Aside from this, the AT also led to a standardization of power supply and case/motherboard form factors.

IBM also sold the 5162 XT/286, which, unlike the name suggests, had the same enhanced hardware capabilities as the AT, but housed in a PC/XT-style case.

Honourable mention: IBM PCjr

Although it never became a widespread standard, IBM made another variation on the PC-theme, namely the PCjr. It was not a fully PC-compatible machine, although it also runs a version of DOS, includes a version of BASIC, and its hardware is mostly compatible with that of the PC (8088 at 4.77 MHz and CGA-compatible video).

The biggest differences to a PC are:

  • Enhanced video chip with 16-colour modes
  • No dedicated video RAM, but system RAM shared with the video chip
  • SN76489 audio chip
  • No 8237 DMA controller on-board
  • ‘Sidecar’ interface instead of ISA slots for expansion
  • PCjr keyboard interface

Since it does have an 8259A PIC chip, the issues discussed here apply to PCjr as well. Also, the enhanced audio and video capabilities were cloned by Tandy (but not marketed as such, since PCjr was a commercial failure).

So, where are the problems?

Now that we have established that not all PC-compatible machines are quite equal in terms of hardware, let’s see how this affects us when programming the 8259A PIC.

The most obvious difference here is between the PC/XT and the AT. The AT uses two cascaded 8259A PICs. These PICs need to be initialized in a different way. The problem is that you can’t read back any of the settings from the PIC. So you can’t just save, restore or modify the current settings. You need to do a complete reprogramming of the chip, without being able to tell how it is configured beforehand.

Now, you may think that int 15h, AH=C0h would be a nice way to check for this. It returns some feature bytes, where there is a bit to indicate a second 8259A. But alas, this BIOS function was not present in the first revision of the AT BIOS. So you cannot assume that if the BIOS doesn’t support this function, the machine must be PC/XT class.

So I decided to check for the existence of a second PIC where the AT would normally have it, which is at I/O ports 0xA0 and 0xA1. The one thing you can read back and modify is the mask register. So, I used a simple trick to see if there was ‘memory’ at this port:

	// Check if we have two PICs
	in al, 0xA1
	mov bl, al	// Save PIC2 mask
	not al		// Flip bits to see if they 'stick'
	out 0xA1, al
	out 0xEE, al	// delay
	in al, 0xA1
	xor al, bl	// If writing worked, we expect al to be 0xFF
	inc al		// Set zero flag on 0xFF
	jnz noCascade
	mov al, bl
	out 0xA1, al	// Restore PIC2 mask
	...		// We have two PICs
noCascade:
	...		// We have one PIC

Now we can assume that there is a second PIC present. Which means we should be dealing with an AT-class machine. I wanted to make my application do a clean exit back to DOS, and restore the original state of the PICs. Now, we don’t know exactly how the PICs are initialized. All we know is that we are either dealing with a PC/XT-class machine or an AT-class machine.

So, I have studied the BIOS code for the PC, XT and AT. The PC and XT set up their PIC in the same way, so that should do for the PC/XT class with a single 8259A. In the other case, I took the setup code for the two PICs from the AT BIOS. This means that in both cases, the PICs should be left in the same state as after a reset, when my program shuts down. We can only hope that all clones work the same way.

Buffered and unbuffered modes

When studying the BIOS code, I noticed another difference between the PC/XT and the AT setup code. The PC/XT code initializes the 8259A in ‘buffered’ mode via ICW4 (it actually sets it up as a buffered ‘slave’ as well, but this bit probably does not do anything, since whether the chip runs in cascaded mode or not is set with ICW1, and it is configured as single). In that case, it uses the SP/EN pin to signal that its data output is enabled, so that external hardware can buffer the data.

The AT initializes its 8259A’s in non-buffered mode. This also means that the master/slave mode of each chip is triggered by the SP/EN pin (so it is now an input instead of an output), rather than by setting the mode in software via ICW4. And if we study the circuit of the AT (see page 1-76 here), we see that indeed the master PIC is wired to +5v and the slave PIC is wired to GND at the SP/EN pin.

There should be no harm in enabling buffered mode on the PICs in an AT though, and in theory you can set up the first PIC as standalone, and just configure it the same as you would on a PC/XT, ignoring the second PIC. But since we know we have to reset the PICs to the AT-specific configuration anyway, we might as well do a more ‘correct’ setup to AEOI-mode while we are at it, and stick to buffered mode for PC/XT and non-buffered mode for PC/AT.

Intel Inside?

Another issue is that 8259A chips are not necessarily made by Intel. Just like with early x86 CPUs, there were various ‘second source’ manufacturers of these chips, namely AMD, NEC, UMC and Siemens. You can find any one of these brands, even in original IBM machines. And like with the Motorola/Hitachi 6845 chip encountered on IBM CGA cards, it could be that these alternative suppliers may have slightly different behaviour.

Moreover, on newer systems, even XT-class clones, you will not be dealing with actual 8259A chips at all, but the logic will be integrated in multifunction chips. My Commodore PC20-III has a Faraday FE2010 chip, and my 286 has a Headland HT18/C. Both have only one chip that takes care of all the basic motherboard logic. And I have found the Headland to be somewhat picky in how you set up AEOI. With this large variety of chips out there, it may well be that there are other chips that have picky/broken/missing AEOI support. This feature was rarely used, so such problems may go completely unnoticed for the entire lifetime of a system.

Old and new 8259A

Another ‘gotcha’ can be read in the 8259A spec sheet. It says the following:

The AEOI mode can only be used in a master 8259A and not a slave. 8259As with a copyright date of 1985 or later will operate in the AEOI mode as a master or a slave.

That is rather nasty. XTs were made from 1983 to 1987, and ATs from 1984 to 1987, so either revision could be in these systems.

What are the consequences? Well, the second PIC in the AT should be running in slave mode. If it is a pre-1985 chip, then it will not work in AEOI mode. With the help of modem7, we could actually verify this on real hardware. His AT is an early model, and has pre-1985 chips, where we could not get the slave into AEOI-mode, even though we tried a few different approaches, trying to bend the rules somewhat.


So we shouldn’t try to use AEOI on the second PIC, if we want to be compatible with all AT systems. Note that in cascaded mode, an EOI needs to be sent both to the master and the slave that generated the IRQ. We can still save one EOI here, when the master is running in AEOI mode, so there are at least some gains still.

Luckily the first PIC is the most interesting one, since it handles the things we are normally interested in, like the timer, the keyboard and disk interrupts. Early sound cards would generally also stick to the first PIC (generally IRQ5 or IRQ7), for compatibility with PC/XT systems.

It could also be that the ‘buffered slave’ setup in ICW4 may not work reliably on certain clone chips in stand-alone mode, so to be safe, you should set it to ‘buffered master’ instead, when you want to enable AEOI. I encountered this issue on a 286 clone of mine. It is a late model 286 (BIOS date 7/7/91), with integrated Headland chipset. I found that AEOI only worked when I set ‘buffered master’, or when I set it to non-buffered mode (where it would be hardwired to master). I know it was only the AEOI that did not work, because the system worked fine if I still sent manual EOI commands to the PIC.

Using AEOI with buffered master mode worked on all 8259A chips I’ve tried, old and new Intels, and various clones.

Can we detect whether enabling AEOI actually worked? Well, yes. Namely, if the PIC does not get an EOI, it will not issue a new interrupt. So what can we do? We can enable AEOI, and set up a timer interrupt. Inside the handler, we increment a counter.  Our application will then wait a while (e.g. by polling the counter to detect when it wraps around), so that multiple timer interrupts will have fired. Then we check whether the counter has a value greater than 1. If so, then an EOI has been issued after each interrupt, so AEOI worked.

You can do this for both PICs, because the master PIC has the standard 8253 PIT connected to it, and the slave PIC has the MC146818 CMOS timer connected to it. Both timers can generate interrupts at a fixed interval, so for both cases you can set up an interrupt handler with a counter.

And what about the PS/2?

After the AT, IBM decided to set up a new standard, the PS/2, which was not entirely backward-compatible with the PC platform. The PS/2 is very PC-like though, in that they still use x86 processors, most of the hardware is very similar (in fact, the VGA standard was introduced on the PS/2 line and adopted by PC clones), it has an AT-compatible BIOS (as well as a new Advanced BIOS) and it runs DOS as well.

And indeed, in the PS/2 we also find the trusty two 8259A’s that we know from the AT. However, because it doesn’t use the ISA bus, but the new MCA bus, there is a difference. On the ISA bus, interrupts are edge-triggered. On the MCA bus however, they are level-triggered. This means that PS/2 systems need yet another 8259A setup and restore routine. So you will need to detect whether you are running on a PS/2 system or not. You could use int 15h, AH=C0h for this (all PS/2 systems support it), and perhaps check for MCA support in the feature bytes returned in the table.
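
For illustration, here is a rough sketch of such a check, in the same C-with-inline-assembly style as the rest of the code in this post. The layout of the configuration table (feature byte 1 at offset 5, with bit 6 for ‘second 8259A present’ and bit 1 for ‘Micro Channel bus’) is as I remember it from the BIOS references, so double-check those bit positions before relying on them:

// Sketch: query the system configuration table via int 15h, AH=C0h.
// Returns -1 if the BIOS does not support the call (PC/XT, early ATs),
// otherwise 'feature byte 1' from offset 5 of the table.
int GetFeatureByte1(void)
{
	uint8_t far* table;
	uint8_t failed = 1;

	_asm {
		stc				// some old BIOSes leave CF untouched
		mov ah, 0xC0
		int 0x15
		jc notSupported			// carry set: function not supported
		mov word ptr [table], bx	// ES:BX points at the configuration table
		mov word ptr [table+2], es
		mov byte ptr [failed], 0
notSupported:
	}

	if (failed)
		return -1;

	return table[5];	// bit 6: second 8259A, bit 1: MCA bus (verify!)
}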

What happened after that?

MCA was the first ‘new’ bus architecture for the PC platform, where the engineers figured that level-triggered interrupts were nicer than edge-triggered ones. For later buses, such as EISA and PCI, engineers came to the same conclusion. When they were working on EISA, they had to solve a problem: how do we maintain backward compatibility with ISA?

They solved this by modifying the 8259A design somewhat. Instead of having a global setting for edge-triggered or level-triggered interrupts, this could be set on a per-interrupt basis. An Edge/Level Control Register (ELCR) is added for each PIC. The master PIC ELCR is at 0x4D0, and the slave PIC ELCR is at 0x4D1. Like with the interrupt mask register, each bit corresponds to one of the interrupt lines. When set to 0, that interrupt line is edge-triggered, when set to 1, the line is level-triggered. The global setting in the legacy register for the 8259A is ignored (these systems never use real 8259A chips, but always have the logic integrated into the chipset).

So basically, we do not have to care about edge/level triggering for newer systems. We don’t reprogram those registers when we enable the AEOI flag in the PIC, so they should retain the proper configuration. If you are interested in EISA, you can read more about it in this book.
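
To make the register layout concrete, here is a small sketch that dumps those bits (inp() as provided by 16-bit DOS compilers in <conio.h>). Note that on a genuine PC/XT or AT these ports do not exist, so only run this on a machine you know has an EISA or PCI chipset:

#include <stdio.h>
#include <conio.h>	// inp() in 16-bit DOS compilers

// Dump the Edge/Level Control Registers of an EISA/PCI chipset.
// A set bit means the corresponding IRQ is level-triggered.
int main(void)
{
	unsigned int elcr1 = inp(0x4D0);	// IRQ 0-7  (master PIC)
	unsigned int elcr2 = inp(0x4D1);	// IRQ 8-15 (slave PIC)
	int irq;

	for (irq = 0; irq < 16; irq++)
	{
		unsigned int reg = (irq < 8) ? elcr1 : elcr2;
		printf("IRQ %2d: %s\n", irq,
			(reg & (1 << (irq & 7))) ? "level-triggered" : "edge-triggered");
	}
	return 0;
}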

So what do we do?

We basically have three types of configurations:

  1. PC/XT: Single 8259A, edge-triggered
  2. AT: Cascaded 8259A, edge-triggered
  3. PS/2: Cascaded 8259A, level-triggered (although there are ISA-based PS/2 systems, I believe all MCA-based PS/2 systems are AT-class or better)

We know how to detect which configuration we have (check if the mask of the second PIC can be written and read back, or use int 15h, AH=C0h to get system information, if that function is supported). At the very least, we know that standalone and master 8259A’s can all run in AEOI mode. So we can make three different routines to initialize the first 8259A to AEOI mode. And we can also make three different routines to initialize them back to their default mode on application exit.

To make this easier to manage, I made a simple helper function to set the different ICW values. There are 4 in total, but ICW3 and ICW4 can be optional in some cases, which complicates things somewhat. So I created a function to deal with that. Note that I write to an unused port to delay I/O somewhat. For PC/XT machines this is not required. For ATs, it is. IBM uses jmp $+2 delays in its code, which works well enough on a real AT, but on faster/newer systems (386/486), it is better to delay with a write to a port. I use port 0xEE, because that port is not used by anything:

void InitPIC(uint16_t address, uint8_t ICW1, uint8_t ICW2, uint8_t ICW3, uint8_t ICW4)
{
	_asm {
		cli

		mov dx, [address]
		inc dx
		in al, dx	// Save old mask
		mov bl, al
		dec dx

		mov al, [ICW1]
		out dx, al
		out DELAY_PORT, al	// delay
		inc dx
		mov al, [ICW2]
		out	dx, al
		out DELAY_PORT, al	// delay

		// Do we need to set ICW3?
		test [ICW1], ICW1_SINGLE
		jnz skipICW3

		mov al, [ICW3]
		out dx, al
		out DELAY_PORT, al	// delay
skipICW3:
		// Do we need to set ICW4?
		test [ICW1], ICW1_ICW4
		jz skipICW4

		mov al, [ICW4]
		out dx, al
		out DELAY_PORT, al	// delay
skipICW4:
		mov al, bl		// Restore old mask
		out dx, al

		sti
	}
}

With this helper-function, it becomes reasonably easy to initialize the PICs to auto-EOI mode and set them back to regular operation:

void SetAutoEOI(MachineType machineType)
{
	switch (machineType)
	{
		case MACHINE_PCXT:
			InitPIC(PIC1,
				ICW1_INIT|ICW1_SINGLE|ICW1_ICW4,
				0x08,
				0x00,
				ICW4_8086|ICW4_BUF_MASTER|ICW4_AEOI );
			break;
		case MACHINE_PCAT:
			InitPIC(PIC1,
				ICW1_INIT|ICW1_ICW4,
				0x08,
				0x04,
				ICW4_8086|ICW4_AEOI );
			break;
		case MACHINE_PS2:
			InitPIC(PIC1,
				ICW1_INIT|ICW1_LEVEL|ICW1_ICW4,
				0x08,
				0x04,
				ICW4_8086|ICW4_AEOI );
			break;
	}
}
void RestorePICState(MachineType machineType)
{
	switch (machineType)
	{
		case MACHINE_PCXT:
			InitPIC(PIC1,
				ICW1_INIT|ICW1_SINGLE|ICW1_ICW4,
				0x08,
				0x00,
				ICW4_8086|ICW4_BUF_SLAVE );
			break;
		case MACHINE_PCAT:
			InitPIC(PIC1,
				ICW1_INIT|ICW1_ICW4,
				0x08,
				0x04,
				ICW4_8086 );
			InitPIC(PIC2,
				ICW1_INIT|ICW1_ICW4,
				0x70,
				0x02,
				ICW4_8086 );
			break;
		case MACHINE_PS2:
			InitPIC(PIC1,
				ICW1_INIT|ICW1_LEVEL|ICW1_ICW4,
				0x08,
				0x04,
				ICW4_8086 );
			InitPIC(PIC2,
				ICW1_INIT|ICW1_LEVEL|ICW1_ICW4,
				0x70,
				0x02,
				ICW4_8086 );
			break;
	}
}

If you set up a detection routine with a timer and a counter in the handler, as I mentioned before, you could try a few variations of setting up AEOI and check if it worked, to make it more robust for ‘wonky’ 8259A clones, and perhaps to detect problems and bail out with a warning to the user, rather than crashing their system because you assume it just works. Of course there is still the risk that the system doesn’t use the ‘default’ setup you’ve assumed (I have taken the above values from the original IBM BIOSes for PC and AT), which means that by the time you detect that AEOI doesn’t work, it is already too late: you have changed the PIC configuration, and you don’t know the initial state. So it may be best to warn the user beforehand that they may have to reset their system if things go wrong.
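
As a minimal sketch of that detection routine, assuming a 16-bit DOS compiler that provides _dos_getvect/_dos_setvect (the exact spelling of the interrupt keyword varies per compiler), and reusing MachineType, SetAutoEOI and RestorePICState from the code above: hook int 08h with a handler that deliberately sends no EOI, wait a few timer periods by watching counter 0 wrap around, and see whether the handler fired more than once.

#include <dos.h>

static volatile unsigned int timerTicks;
static void (_interrupt _far *oldTimer)(void);

static void _interrupt _far CountingHandler(void)
{
	timerTicks++;		// deliberately no EOI: that is exactly what we are testing
}

// Latch and read the current count of 8253 counter 0, so we can measure
// elapsed timer periods without relying on the interrupt we are testing.
static uint16_t ReadCounter0(void)
{
	uint16_t count;
	_asm {
		cli
		mov al, 0x00		// latch command for counter 0
		out 0x43, al
		in al, 0x40		// low byte
		mov byte ptr [count], al
		in al, 0x40		// high byte
		mov byte ptr [count+1], al
		sti
	}
	return count;
}

int AutoEOIWorks(MachineType machineType)
{
	uint16_t prev, cur;
	unsigned int wraps = 0;

	timerTicks = 0;
	oldTimer = _dos_getvect(0x08);
	_dos_setvect(0x08, CountingHandler);

	SetAutoEOI(machineType);

	// Wait for a few wraparounds of counter 0; the count runs downward,
	// so an increase between two reads means a full timer period has passed.
	prev = ReadCounter0();
	while (wraps < 4)
	{
		cur = ReadCounter0();
		if (cur > prev)
			wraps++;
		prev = cur;
	}

	// Put the vector back while AEOI is (supposedly) still active, then
	// restore the PICs; a stray EOI from the BIOS handler is harmless.
	_dos_setvect(0x08, oldTimer);
	RestorePICState(machineType);

	// Without working auto-EOI the PIC never saw an EOI after the first
	// interrupt, so the handler fired at most once.
	return timerTicks > 1;
}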

If you’re interested, you can download the source and binary of my simple test-program here. If you find any strange quirks on your 8259A chips, please let me know in the comments what chips you tested, and what strange things you saw.

And what did all this earn us?

Well, we can just save on these two instructions in our interrupt handler now:

// Send end-of-interrupt command to PIC
mov al, 0x20
out 0x20, al

Somewhat of a Pyrrhic victory, you say? Well, indeed, it’s a lot of trouble for very few gains, but any gains are welcome, and once you get this working, you can just stop worrying about it and reap the benefits, modest as they may be. Especially with high-frequency timer handling, such as playback of digital audio, it may just give you that extra ‘push over the cliff’, to speak with Nigel Tufnel. These interrupts go to 11!

Bonus material

When I was playing around with the 8259A stuff, Trixter pointed me to an article in this old magazine, which covers programming the 8259A in great detail (see page 173 and further). It does not really go into AEOI much, but it covers pretty much everything else, such as the priority schemes, and is a great read. Priority is the trade-off you’re making when enabling AEOI: each interrupt is immediately acknowledged, and can fire again as soon as the CPU is ready to receive interrupts. If you acknowledge the interrupts manually, you get control over which interrupts may fire when. For games and demos this is not such a big issue, because we generally don’t need to service that many interrupts anyway, and we can mask out anything we’re not interested in at any given time.

Amazing how in-depth and technical articles were in regular PC magazines back in the day, compared to how dumbed-down everything is today in mainstream media.


BHM File Format release 0.3b

I have made a new release of the BHM File Format project.

In release 0.3b there have been some minor code refactorings and bugfixes, but the most important new thing is the BHM Visualizer tool:

[Screenshot: the BHM Visualizer tool]

This C# application (which should be Mono-compatible for multiplatform support) allows you to inspect the contents of a BHM file in a user-friendly graphical interface. It is a great help for debugging BHM import and export routines.

The preview-tab is a special case. It dynamically tries to load a .NET assembly by the name of ‘BHMViewer.dll’. It expects this assembly to contain an implementation of the IBHMViewer interface. This interface allows the BHM Visualizer to pass a Stream with BHM content to this plugin.

The plugin receives the Control-handle of the Preview-panel, and an Update()-method is called periodically. This allows the plugin to implement any kind of visualization of the BHM data.

As an example, I have taken the BHM3DSample OpenGL code, and wrapped the IBHMViewer interface around it. This way you can view the BHM files created by the 3dsmax exporter directly in the BHM Visualizer:

[Screenshot: a BHM file from the 3dsmax exporter rendered in the BHM Visualizer’s preview tab]

As usual, all code is included under the BSD license. So feel free to use, extend and modify these tools in any way you like.


Latch onto this, it’s all relative

Right, a somewhat cryptic title perhaps, but don’t worry. It’s just the usual 8088-retroprogramming talk again. I want to talk about how some values in PC hardware are latched, and how you can use that to your advantage.

Latched values in this context basically mean values that are ‘buffered’ in an internal register, and will not become active right away, but after a certain event occurs.
As you might recall in my discussion of 8088 MPH’s sprite part, I used the fact that the start offset register is latched in the scrolling of the background image. You can write a new start offset to the CRTC at any time, and it won’t become active until the frame is finished.
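
On CGA, the start offset lives in 6845 CRTC registers 12 and 13, so the whole trick boils down to two register writes whenever it suits you. A small sketch (outp() as found in 16-bit DOS compilers; the offset is in CRTC character units, and the 6845 latches it until the next frame starts):

#include <conio.h>	// outp() in 16-bit DOS compilers

// Set the CRTC start offset on a CGA card (6845 at ports 0x3D4/0x3D5).
// The card latches the value and applies it at the start of the next frame,
// so this can be called at any point during the active display.
void SetStartOffset(unsigned int offset)
{
	outp(0x3D4, 0x0C);			// register 12: start address high
	outp(0x3D5, (offset >> 8) & 0xFF);
	outp(0x3D4, 0x0D);			// register 13: start address low
	outp(0x3D5, offset & 0xFF);
}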

This has both downsides and upsides. The downside is that you can’t change the start offset anywhere on the screen, to do C64/Amiga like bitmap stretching effects and such (although as you may know, reenigne found a way around that, which is how he pulled off the Kefrens bars in 8088 MPH, among other things). The upside is that you don’t have to explicitly write the value during the vertical blank interval when you want to perform smooth scrolling or page flipping. Which is what I did in the sprite part: I only had to synchronize the drawing of the sprite to avoid flicker, and I could fire off the new scroll offset immediately.

Another interesting latched register is in the 8253 Programmable Interval Timer. As you know, we don’t have any raster interrupt on the IBM PC. And as you know, we’ve tried to work around that by exploiting the fact that the timer and the CGA card are running off the same base clock signal, and then carefully setting up a timer interrupt at 60 Hz (19912 ticks), synchronized to vsync.

In the sprite part, I just left the timer running. The music was played in the interrupt handler. The sprite routine just had to poll the counter values to get an idea of the beam position on screen. In the final version of 8088 MPH, reenigne used a slightly more interesting trick to adjust the brightness of the title screen as it rolled down, because the polling wasn’t quite accurate enough.

The trick exploits the fact that when the 8253 is in ‘rate generator’ or ‘square wave’ mode, if you write a new ‘initial count’ value to the 8253, without sending it a command first, it will latch this value, but continue counting down the old value. When the value reaches 0, it will use the new ‘initial count’ that was latched earlier.
The advantage here is that there are no cycles spent on changing the rate of the timer, which means there is no jitter, and the results are completely predictable. You can change the timer value any number of times during a frame, and you know that as long as the counts all add up to 19912 ticks, you will remain in sync with the screen.
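
In code, the trick is nothing more than writing the low and high byte of the next count to the counter 0 data port, without sending a new mode command to port 0x43 first. A minimal sketch (outp() from <conio.h> as found in 16-bit DOS compilers; the counter is assumed to already be running in rate-generator mode with low-byte/high-byte access, the way the 60 Hz setup described above leaves it):

#include <conio.h>	// outp() in 16-bit DOS compilers

// Queue the next interval for 8253 counter 0. Because no command is written
// to port 0x43, the value is only latched: the current countdown finishes
// undisturbed, and the new count takes effect on the next period.
// Call this from the timer handler itself (or with interrupts disabled),
// so the two byte writes cannot be split.
void SetNextTimerInterval(unsigned int ticks)
{
	outp(0x40, ticks & 0xFF);		// low byte
	outp(0x40, (ticks >> 8) & 0xFF);	// high byte
}

Called once per interrupt, this lets you carve a 19912-tick frame into arbitrary sub-intervals, as long as the pieces add up to 19912 again.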

Here is a demonstration of this trick:

What you see here is that the timer fires a number of times per screen, changing the background colour every time, to paint some raster bars. The timing of the start of the red bar is modified every frame, adding 76 ticks (one scanline), shifting it downward, which makes the cyan bar grow larger. To compensate, the timing for the purple bar is modified as well, subtracting 76 ticks.

The result is that all the bars remain perfectly stable and in sync with the screen. If you reinitialized the timer every time, you would get some drift, because it’s difficult to predict the number of cycles it takes to send the new commands to the timer, and to compensate for that.
In 1991 donut, I was not aware of the latching trick yet, so I reinitialized the timer every time. As a result, there is a tiny bit of jitter depending on how fast your system is. This is mostly hidden by black scanlines in the area where the jitter would occur, but on very slow systems, you may see that the palette changes on the scroller are a scanline off, or more.
Because 1991 donut is aimed at VGA systems, I would have to reinitialize the timer at the end of every frame anyway, because I have to re-sync it to the VGA card, which runs on its own clock. But with the latch trick, I could at least have made the palette changes independent of CPU-speed (and there are various other tricks I’ve picked up since then, which could speed up 1991 donut some more).

Always think one step ahead

The catch with this trick is that you always have to be able to think one step ahead: the value will not become active until the timer reaches 0. So you can’t change the current interval, only the next one. In most cases that should not be a problem though, such as with drawing sprites, raster bars and such tricks.

The actual inspiration for this article was actually not because of anything graphics-related, but rather because of analog joysticks. That may be another application of this trick. Namely, to determine the position on a joystick axis, you have to initialize the joystick status register to 1, and then poll it to see how long it takes until it turns into a 0. This ties up the CPU completely.
So, my idea was to set up a system where you’d use a few timer interrupts to poll the joystick during each frame. If you don’t need a lot of accuracy (e.g. only ‘digital’ movement), you’d only have to fire it a few times; theoretically 3 would be enough to get left-middle-right or top-middle-bottom readings. But in practice you’d probably want to fire a few more. Still, it would probably cost less CPU time than continuous polling.
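
For reference, this is roughly what the classic busy-poll looks like (a sketch using inp()/outp() on the standard game port at 0x201, where bit 0 is the X axis of the first joystick); it should be obvious that the CPU has nothing better to do while the measurement runs:

#include <conio.h>	// inp()/outp() in 16-bit DOS compilers

// Crude busy-poll of the first joystick's X axis. Any write to port 0x201
// starts the one-shot timers; the bit stays 1 for a time proportional to
// the stick position, so the loop count is a rough position measurement.
unsigned int ReadJoystickX(void)
{
	unsigned int count = 0;

	outp(0x201, 0xFF);			// start the one-shot timers
	while (inp(0x201) & 0x01)		// bit 0 = X axis of joystick A
	{
		if (++count == 0xFFFF)		// no joystick present, bail out
			break;
	}
	return count;				// larger count = stick further along the axis
}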

Another application that may be interesting is related to music/sound effects. Instead of having just a single rate, you can multiplex multiple frequencies this way, by modifying the counter value at each interrupt, and keeping track of how often it has fired to determine which state you are currently in (more or less like the coloured bars representing different ‘states’ in the above example).

Bonus trick

To finish off for today, I also want to share another timer-related trick. This trick came to us because someone showed interest in 8088 MPH, and had some suggestions for improvements. Now this is exactly the kind of thing we had been hoping for! We wanted to inspire other people to also do new and exciting things on this platform, and push the platform further and further. Hopefully we will be seeing more PC demos (as in original IBM PC specs, so 8088+CGA+PC speaker).

This particular trick is the automatic end-of-interrupt functionality in the 8259 Programmable Interrupt Controller. This functionality is not enabled by default on the PC, which means that you have to manually send an EOI command to the 8259 in your interrupt handler every time. So, if you reinitialize the 8259 and enable the auto-EOI bit in ICW4, you no longer have to do this, which saves a few instructions and cycles every time. Interrupts on the 8088 are quite expensive, so we can use any help we can get. The above rasterbar example is actually using this trick.
