The philosophy of science

And now for something completely different… I have been critical of hardware and software vendors and their less ethical actions in the past. But a while ago, something happened that did not have anything to do with computer science at all. But it did have to do with ethics and science. Or rather, pseudoscience.

Namely, someone who I considered to be a good friend got involved with a man who was selling Kangen water machines and performing “DNA activation”. As I read the articles on his website, I was reminded of the philosophy of science courses at my university. About what is real science and what is not. What is a real argument, and what is a fallacy.

There was also the ethical side of things. Clearly she did not realize that this man was a fraud. But should she be told? I believe that everyone is free to make their own choices in life, and to make their own mistakes. But at the least they should have as much information as possible to try and make the right decisions.

Anyway, so let’s get philosophical about science.

Demarcation problem

Before we can decide what is science and what is not, we must first define what science is. This is actually a very complicated question, which people have tried to answer throughout history, and can be traced back all the way to the ancient Greek philosophers. It is known as the ‘demarcation problem’.

Part of the problem is that science has been around for a long time, and not everything that was historically seen as part of science fits more modern views of it. If you approach the problem from the other side, your criteria may end up too loose in order to also fit the historical, or ‘de facto’, sciences, since scientific methodologies have evolved over time as human knowledge grew.

There are two ways to approach the problem. One way is a descriptive definition, trying to name characteristics of science. Another way is a normative definition, defining an ideal view of what science should be. This implies however that humans have some ‘innate sense’ of science.
Neither approach will lead to a perfect definition of science, but there is something to be said for both. So let us look at some attempts at defining science throughout the ages.

Sir Francis Bacon

Seen as one of the ‘fathers of modern science’, Sir Francis Bacon (1561-1626) wrote a number of works on his views of science, in which he put forward various ideas and principles that are still valid and in use today. In his day, science was still seen as classifying knowledge: being able to logically deduce particular facts from more general ones within the larger system of knowledge, known as deductive reasoning. In such a system of science, progress in the sense of new inventions and discoveries could never happen, because they would not fit into the existing framework of knowledge.

So instead of deductive reasoning, Bacon introduced a method of inductive reasoning: the exact opposite. Instead of deducing specifics from generics, specific observations are generalized into more universal theories by looking at connections between the observations.
However, the potential flaw here is that humans are fallible and their senses can deceive them. So Bacon and other philosophers tried to identify the pitfalls of human error, and to come up with methodologies to avoid error and find the truth. You might have heard one of Bacon’s maxims: “ipsa scientia potestas est” (knowledge itself is power).

Scientists have to be skeptical, objective and critical. They also have to form a collective, and check each other’s work, in order to ‘filter’ or ‘purify’ the knowledge from human error. Scientific knowledge is universal, and belongs to everyone. The goal of scientific research should be to extend our knowledge, not to serve any particular agenda. These ideals and principles were later characterized by Robert K. Merton in his four ‘Mertonian norms’:

  • Communalism
  • Universalism
  • Disinterestedness
  • Organized Skepticism

These norms are also known by the acronym ‘CUDOS’, formed from the first letter of each word.

Karl Popper

In the early 1900s, a movement within philosophy emerged, known as logical positivism. It was mainly propagated by a group of philosophers and scientists known as the Wiener Kreis (Vienna Circle). Logical positivism promoted the verifiability principle: a statement only has meaning if it can objectively be evaluated to be true or false, in other words, only if it can be verified. This rejects many statements, such as religious ones, for example: “There is a god”. There is no way to prove or disprove such a statement. Therefore, it is cognitively meaningless.

While there certainly is some merit to this way of thinking, it also leads to a human error, namely that of confirmation bias. Let us use an abstract example (yes, the computer scientist will use numbers of course). In scientific research, you will perform experiments to test certain theories. These experiments will give you empirical measurement data. To abstract this, let us use numbers as our abstract ‘measurements’.
Consider the following statement:

All numbers greater than -5 are positive.

Now, there are many numbers you can think of that are both positive and greater than -5. In fact, there are infinitely many such numbers. So every time you find such a number, you confirm the statement. But does that make it true? No, it does not. We only need to find one number that is greater than -5 but not positive, and we have refuted the statement. In this abstract example, we already know beforehand that the statement is false (take -3, for instance). But in scientific research you do not know the outcome beforehand, which is why you perform the experiments. And if your experiments never explore the area where the tests would fail, you will never see any evidence that the theory is flawed, even though it is.
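The confirmation-bias trap above is easy to sketch in code. In this hypothetical snippet, we only ever ‘test’ the statement with numbers from the region where it holds, and it passes every time; a single probe of the right region refutes it immediately. (The function and variable names are mine, purely for illustration.)

```python
import random

def claim(n):
    """The statement under test: 'all numbers greater than -5 are positive',
    read as the implication: n > -5 implies n > 0."""
    return not (n > -5) or n > 0

# Confirmation-biased testing: only sample positive numbers.
# Every single test passes, yet this tells us nothing about the claim's truth.
all_confirmed = all(claim(random.randint(1, 1000)) for _ in range(10_000))

# Falsification: deliberately probe the region where the claim could fail.
# One counterexample refutes the statement outright.
counterexample = next(n for n in range(-4, 1) if not claim(n))

print(all_confirmed)   # True: 10,000 confirmations, and the claim is still false
print(counterexample)  # -4: greater than -5, but not positive
```

Ten thousand confirmations add nothing; the one counterexample settles the question. That asymmetry is the point of the next section.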

This, in a nutshell, was Karl Popper’s (1902-1994) criticism of logical positivism. Scientists would believe that a theory became more ‘true’ the more tests confirmed it, until at some point it would be considered an absolute truth, a dogma. However, a theory only needs to fail one test to be refuted. Falsifying is a much stronger tool than verifying. Which is exactly what we want from science: strong criticism.

Popper was partly inspired by the emerging psychoanalysis movement (Freud, Adler, Jung), where it was easy to find ‘confirmation’ of the theories they produced, but impossible to refute those theories. Popper felt that there was something inherently unscientific, or even pseudoscientific, about this, which led him to find new ways to look at science and scientific theories. The scientific value of a theory lies not in the effects that can be explained by the theory, but rather in the effects that are ruled out by the theory.

Thomas S. Kuhn

One great example of the power of falsifiability is Einstein’s theory of relativity. The Newtonian laws of physics had been around for some centuries, and they had been verified many times. Einstein’s theories predicted some effects that had never been seen before, and would in fact be impossible under the Newtonian laws. But Einstein’s theory could be falsified by a simple test: according to his theories, a strong gravitational field, such as the sun’s, would bend light more than Newtonian laws imply, because it distorts space itself. This means that during a solar eclipse, stars whose light passes close to the edge of the sun should appear slightly shifted: their light has to pass through the sun’s ‘gravitational lens’, which warps its path.
Einstein formulated this prediction in 1915, and in 1919 there was a solar eclipse during which it could be put to the test. And indeed, when astronomers determined the position of a star near the edge of the sun, it was shifted from its ‘normal’ position by the angle that Einstein had calculated. The astronomers had tried to falsify the theory of relativity, and it passed the test. This meant that the Newtonian laws of physics, as often as they had been validated before, must be wrong.

The main problem with Popper’s falsifiability principle however is that it is not very practical in daily use. Most of the time, you want to build on a certain foundation of knowledge, in order to apply scientific theories to solve problems (‘puzzles’). So Thomas S. Kuhn (1922-1996) proposed that there are two sides to science. There is ‘normal science’, where you build on the existing theories and knowledge. And then there are exceptional cases, such as Einstein’s theory, which are a ‘paradigm shift’, a revolution in that field of science.

During ‘normal science’, you generally do not try to disprove the current paradigm. When there are tests that appear to refute the current paradigm, the initial assumption is not that the paradigm is wrong, but rather that there was a flaw in the researcher’s work: it is just an ‘anomaly’. However, at some point the number of ‘anomalies’ may stack up to a level where it becomes clear that there is a problem with the current paradigm. This leads to a ‘crisis’, where a new paradigm is required to explain the anomalies more accurately than the current one. This in turn leads to a period of ‘revolutionary science’, where scientists try to find a better paradigm to replace the old one.

This also leads to competing theories on the same subject. Kuhn laid out some criteria for theory choice:

  1. Accurate – empirically adequate with experimentation and observation
  2. Consistent – internally consistent, but also externally consistent with other theories
  3. Broad Scope – a theory’s consequences should extend beyond that which it was initially designed to explain
  4. Simple – the simplest explanation, principally similar to Occam’s razor
  5. Fruitful – a theory should disclose new phenomena or new relationships among phenomena

Logic and fallacies

Scientific theories are all about valid reasoning, or in other words: logic. The problem here is once again human error. A scientist should be well-versed in critical thinking, in order to detect and avoid common pitfalls in logic, known as fallacies. A fallacy is a trick of the mind where something might intuitively sound logical, but if you think about it critically, you will see that the drawn conclusion cannot be based on the arguments presented, and therefore is not logically sound. This does not necessarily imply that the concluded statement itself is false, merely that its validity does not follow from the reasoning (which is somewhat of a meta-fallacy).
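To make ‘not logically sound’ concrete, here is a toy sketch of one classic formal fallacy, affirming the consequent (from “p implies q” and “q”, concluding “p”). The text above does not name this particular fallacy; I picked it because its invalidity can be checked mechanically by brute-forcing the truth table, while the valid modus ponens survives the same check:

```python
from itertools import product

def implies(p, q):
    """Material implication: 'p implies q' is false only when p is true and q is false."""
    return (not p) or q

# Modus ponens: from (p -> q) and p, conclude q.
# Valid: in every row of the truth table where both premises hold, so does the conclusion.
modus_ponens_valid = all(
    q for p, q in product([True, False], repeat=2)
    if implies(p, q) and p
)

# Affirming the consequent: from (p -> q) and q, conclude p.
# Invalid: the row p=False, q=True satisfies both premises but not the conclusion.
affirming_consequent_valid = all(
    p for p, q in product([True, False], repeat=2)
    if implies(p, q) and q
)

print(modus_ponens_valid)          # True
print(affirming_consequent_valid)  # False
```

Note that in the failing row the conclusion ‘p’ may still happen to be true in some other row; the fallacy is that it does not follow from the premises, which is exactly the distinction the paragraph above draws.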
There are many common fallacies, so even trying to list them all goes beyond the scope of this article, but it is worth familiarizing yourself with them somewhat. You will probably find that once you have seen them, it is not that hard to pick them out of written or spoken text; a fallacy will quickly set off your alarm bells.
You should be able to find various such lists and examples online, such as on Wikipedia. Which brings up a common fallacy: people often discredit information linked via Wikipedia, claiming that Wikipedia is not a reliable source. While it is true that not all information on Wikipedia is accurate or reliable, that is no guarantee that all information on Wikipedia is unreliable or inaccurate. This is known as the ‘fallacy of composition’. Harvard has a good guide on how to use Wikipedia, and says it can be a good way to familiarize yourself with a topic. So there you go.

Fallacies are not necessarily deliberate. Sometimes your mind just plays tricks on you. But in pseudoscience (or marketing for that matter), fallacies are often used deliberately to make you believe things that aren’t actually true.

So, now that we have an idea of what science is, or tries to be, we should also be able to see when something tries to look like science, but isn’t.

As a nice example, I give you this episode of EEVBlog, which deals with a device called the ‘Batteriser’.

It claims to boost battery life by up to 800%, but the technology behind it just doesn’t add up. Note how they cleverly manipulate all sorts of statements to make you think the product is far more incredible than it actually is. And note how they even managed to get a doctor from San Jose State University to ‘verify’ the claims, which gives you the false impression that (all of) the claims for this product are backed by an ‘authority’.


