Thought: Neutral and objective versus politically correct.

I suppose most people don’t understand the difference between being neutral/objective and being politically correct. A neutral and objective observer can still be critical of what he observes. A politically correct observer cannot voice any criticism.

I suppose you could say that being neutral/objective is being at the origin, while being politically correct is being exactly halfway between the two extremes. When one extreme is larger than the other, the two are not the same.

My aim is to be as neutral and objective as possible. I have no desire whatsoever to be politically correct though. I voice whatever criticism I see fit.


nVidia’s GeForce GTX 1080, and the enigma that is DirectX 12

As you are probably aware by now, nVidia has released its new Pascal architecture, in the form of the GTX 1080, the ‘mainstream’ version of the architecture, codenamed GP104. nVidia had already presented the Tesla variation of the high-end version earlier, codenamed GP100 (which has HBM2 memory). When they did that, they also published a whitepaper on the architecture.

It’s quite obvious that this is a big leap in performance. Then again, that was to be expected, given that GPUs are finally moving from 28 nm to 14/16 nm process technology. Aside from that, we have the new HBM2 and GDDR5X technologies to increase memory bandwidth. But you can read all about that on the usual benchmark sites.

I would like to talk about the features instead. And although Pascal doesn’t improve dramatically over Maxwell v2 in the feature-department, there are a few things worth mentioning.

A cool trick that Pascal can do is ‘Simultaneous Multi-Projection’, which basically boils down to being able to render the same geometry with multiple different projections, that is, from multiple viewports, in a single pass. Sadly I have not found any information yet on how you would actually implement this in terms of shaders and API state, but somehow I think it will be similar to the old geometry shader functionality, where you could feed the same geometry through your shaders multiple times with different view/projection matrices, which allowed you to render a scene to a cubemap in a single pass, for example. Since the implementation of the geometry shader was not very efficient, this never caught on. This time, however, nVidia is showcasing the performance gains for VR and similar use cases, so apparently the new approach is all about efficiency.
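For reference, here is a minimal sketch (my own, not from any nVidia material) of the classic single-pass cubemap setup in Direct3D 11 that I am comparing it to: one render target view spanning a six-slice texture array, with a geometry shader (not shown) emitting each triangle once per face via SV_RenderTargetArrayIndex. The device pointer and the resolution are assumptions.

    #include <d3d11.h>

    // Assumes 'device' is a valid ID3D11Device*; error handling omitted.
    ID3D11Texture2D* cubeTex = nullptr;
    ID3D11RenderTargetView* cubeRTV = nullptr;

    D3D11_TEXTURE2D_DESC texDesc = {};
    texDesc.Width = 512;
    texDesc.Height = 512;
    texDesc.MipLevels = 1;
    texDesc.ArraySize = 6;                          // six cube faces in one texture array
    texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    texDesc.SampleDesc.Count = 1;
    texDesc.Usage = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    texDesc.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE;
    device->CreateTexture2D(&texDesc, nullptr, &cubeTex);

    D3D11_RENDER_TARGET_VIEW_DESC rtvDesc = {};
    rtvDesc.Format = texDesc.Format;
    rtvDesc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2DARRAY;
    rtvDesc.Texture2DArray.MipSlice = 0;
    rtvDesc.Texture2DArray.FirstArraySlice = 0;
    rtvDesc.Texture2DArray.ArraySize = 6;           // one RTV spanning all faces
    device->CreateRenderTargetView(cubeTex, &rtvDesc, &cubeRTV);

    // A geometry shader then replicates each input triangle six times, once per
    // view/projection matrix, tagging each copy with SV_RenderTargetArrayIndex.

Presumably Simultaneous Multi-Projection replaces that geometry shader amplification step with something closer to fixed-function hardware, while the render target setup on the application side stays much the same.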

Secondly, there is conservative rasterization. Maxwell v2 was the first GPU to give us this new rendering feature, though only at tier 1. Pascal bumps this up to tier 2 support. And there we have the first ‘enigma’ of DirectX 12: for some reason hardly anyone is talking about this cool new rendering feature. It can bump up the level of visual realism another notch, because it allows you to do volumetric rendering on the GPU in a more efficient way (which means more dynamic/physically accurate lighting and fewer pre-baked lightmaps). Yet, nobody cares.
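To make the feature a bit more concrete, here is a minimal sketch (mine, not from any SDK sample) of how you query and enable conservative rasterization in Direct3D 12; the device and the rest of the pipeline state are assumed to exist.

    #include <d3d12.h>

    // Assumes 'device' is a valid ID3D12Device*; error handling omitted.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options));

    if (options.ConservativeRasterizationTier >= D3D12_CONSERVATIVE_RASTERIZATION_TIER_2)
    {
        // Maxwell v2 reports tier 1, Pascal reports tier 2.
    }

    // Turning it on is a single field in the rasterizer state of a pipeline state object:
    D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};    // all other fields omitted for brevity
    psoDesc.RasterizerState.ConservativeRaster = D3D12_CONSERVATIVE_RASTERIZATION_MODE_ON;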

Lastly, we obviously have to mention Asynchronous Compute Shaders. There’s no getting around that one, I’m afraid. This is the second ‘enigma’ of DirectX 12: for some reason everyone is talking about this one. I personally do not care much about this feature (and neither do various other developers. Note how they also point out that it can even make performance worse if it is not tuned properly, yes, also on AMD hardware. Starting to see what I meant earlier?). It may or may not make your mix of rendering/compute tasks run faster or more efficiently, but that’s about it. It does not dramatically improve performance, nor does it allow you to render things in a new way or use more advanced algorithms, like some other new features of DirectX 12 do. So I’m puzzled why the internet pretty much equates this particular feature with ‘DX12’, and ignores everything else.

If you want to know what it is (and what it isn’t), I will direct you to Microsoft’s official documentation of the feature on MSDN. I suppose in a nutshell you can think of it as multi-threading for shaders. Now, shaders tend to be presented as ‘threaded’ anyway, but GPUs had their own flavour of ‘threads’, more related to SIMD/MIMD, where a piece of SIMD/MIMD code is viewed as a set of ‘scalar threads’ (all threads in a block share the same program counter, so they all run the same instruction at the same time). The way asynchronous shaders work in DX12 is more like how threads are handled on a CPU: each thread has its own context, the system can switch contexts at any given time, and the order in which contexts/threads are switched can be determined in a number of ways.

Then it is also no surprise that Microsoft’s examples here include synchronization primitives that we also know from the CPU-side, such as barriers/fences. Namely, the nature of asynchronous execution of code implies that you do not know exactly when which piece of code is running, or at what time a given point in the code will be reached.

The underlying idea is basically the same as that of threading on the CPU: Instead of the GPU spending all its time on rendering, and then spending all its time on compute, you can now start a ‘background thread’ of compute-work while the GPU is rendering in the foreground. Or variations on that theme, such as temporarily halting one thread, so that another thread can use more resources to finish its job sooner (a ‘priority boost’).
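In Direct3D 12 terms, that ‘background thread’ is simply a second command queue of the COMPUTE type, synchronized with the graphics queue through fences. A minimal sketch (assuming device and graphicsQueue already exist):

    #include <d3d12.h>

    // Assumes 'device' (ID3D12Device*) and 'graphicsQueue' (ID3D12CommandQueue*) exist.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;    // compute-only queue

    ID3D12CommandQueue* computeQueue = nullptr;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    ID3D12Fence* fence = nullptr;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // ... record and execute compute command lists on computeQueue here ...

    computeQueue->Signal(fence, 1);    // the compute queue raises the fence when it gets here
    graphicsQueue->Wait(fence, 1);     // the graphics queue waits on the GPU, not on the CPU

Whether, and how, the two queues actually overlap on the hardware is then up to the driver and the GPU, which is exactly where the differences between nVidia and AMD discussed below come in.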

Now, here is where the confusion seems to start. Namely, most people seem to think that there is only one possible scenario, and therefore only one way to approach this problem. But, getting back to the analogy with CPUs and threading, it should be obvious that there are various ways to execute multiple threads. We have multi-CPU systems and multi-core CPUs, then there are technologies such as SMT/HyperThreading, and of course there is still good old timeslicing, which we have used since the dawn of time to execute multiple threads/asynchronous workloads on a system with a single single-core CPU. I wrote an article on that some years ago; you might want to give it a look.

Different approaches in hardware and software will have different advantages and disadvantages. And in some cases, different approaches may yield similar results in practice. For example, in the CPU world we see AMD competing with many cores with relatively low performance per core. Intel, on the other hand, uses fewer cores, but with more performance per core. In various scenarios, Intel’s quadcores compete with AMD’s octacores. So there is more than one road that leads to Rome.

Getting back to the Pascal whitepaper, nVidia writes the following:

Compute Preemption is another important new hardware and software feature added to GP100 that allows compute tasks to be preempted at instruction-level granularity, rather than thread block granularity as in prior Maxwell and Kepler GPU architectures. Compute Preemption prevents long-running applications from either monopolizing the system (preventing other applications from running) or timing out. Programmers no longer need to modify their long-running applications to play nicely with other GPU applications. With Compute Preemption in GP100, applications can run as long as needed to process large datasets or wait for various conditions to occur, while scheduled alongside other tasks. For example, both interactive graphics tasks and interactive debuggers can run in concert with long-running compute tasks.

So that is the way nVidia approaches multiple workloads. They have very high granularity in when they are able to switch between workloads. This approach bears similarities to time-slicing, and perhaps also SMT, in the sense of being able to switch between contexts down to the instruction level. This should lend itself very well to low-latency scenarios with a mostly serial nature. Scheduling can be done just-in-time.

AMD on the other hand seems to approach it more like a ‘multi-core’ system, where you have multiple ‘asynchronous compute engines’ or ACEs (up to 8 currently), each of which processes its own queues of work. This is nice for inherently parallel/concurrent workloads, but it is less flexible in terms of scheduling. It’s more of a fire-and-forget approach: once you drop your workload into the queue of a given ACE, it will be executed by that ACE, regardless of what the others are doing. So scheduling seems to happen more ahead-of-time at the high level; the ACEs take care of interleaving the work at the lower level, much like how out-of-order execution works on a conventional CPU.

Sadly, neither vendor gives any actual details on how they fill and process their queues, so we can only guess at the exact scheduling algorithms and parameters. And until we have a decent collection of software making use of this feature, it is very difficult to say which approach will be best suited for the real world. And even then, the situation may arise where there are two equally valid workloads in widespread use, where one workload favours one architecture and the other workload favours the other, so there is not a single answer to what the best architecture will be in practice.

Oh, and one final note on the “Founders Edition” cards. People seem to just call them ‘reference’ cards, and complain that they are expensive. However, these “Founders Edition” cards have an advanced cooler with a vapor chamber system. So it is quite a high-end cooling solution (previously, nVidia only used vapor chambers on the high-end, such as the Titan and 980Ti, not the regular 980 and 970). In most cases, a ‘reference’ card is just a basic card, with a simple cooler that is ‘good enough’, but not very expensive. Third-party designs are generally more expensive, and allow for better cooling/overclocking. The reference card is generally the cheapest option on the market.

In this case however, nVidia has opened up the possibility for third-party designs to come up with cheaper coolers, and deliver cheaper cards with the same performance, but possibly less overclocking potential. At the same time, it will be more difficult for third-party designs to deliver better cooling than the reference cooler, at a similar price. Aside from that, nVidia also claims that the whole card design is a ‘premium’ design, using high-quality components and plenty of headroom for overclocking.

So the “Founders Edition” is a ‘reference card’, but not as we know it. It’s not a case of “this is the simplest/cheapest way to make a reliable videocard, and OEMs can take it from there and improve on it”. Also, some people seem to think that nVidia sells these cards directly, under their own brand, but as far as I know, it’s the OEMs that build and sell these cards, under the “Founders Edition” label. For example, the MSI one, or the Inno3D one.
These can be ordered directly from the nVidia site.


That’s not Possible on this Platform!

[Image: 8088 MPH Meteorik award]

The past year and a half has been quite interesting. First there was the plan to form a team and build a demo for the original IBM PC with CGA. We weren’t quite sure where it would lead at first, but as the project progressed, it matured into something we considered worthy of entering into the competition. But we had no idea how the Revision 2015 crowd would respond to it. Much to my surprise, they totally loved it. It seems a lot of sceners had an XT clone back in the day, and could appreciate what we were doing.

So, after a very exciting vote, where the first 4 entries were nearly tied, we were declared the winners of the Revision 2015 oldskool compo. Mission accomplished! But as it turned out, this wasn’t the end of 8088 MPH, it was only the start.

The demo quickly crossed the boundaries of the demoscene and got picked up by other interested people across the web, such as gamers/game developers, embedded system developers and whatnot, on Twitter, various blogs, forums and such. There was even some moderate ‘mainstream’ media coverage in the form of newspaper websites and tech-oriented websites.

While we were working on the project, I had made a reference to Batman Forever, by Batman Group, a few times.

This demo completely redefined the Amstrad CPC platform, using the hardware in new and innovative ways, and pushing the limits way further than any Amstrad CPC demo before it. This is what we also hoped to achieve with 8088 MPH. After all, before we started development on 8088 MPH, the best demo on a stock IBM 5150/5160 with CGA and PC speaker was probably the CGADEMO by Codeblasters.

Granted, shortly before 8088 MPH was released, there was Genesis Project, with GP-01.

This was an interesting release, as it showed some ‘new’ and ‘modern’ effects on the PC/XT with CGA and PC speaker platform. However, it wasn’t quite of the magnitude that we had planned. Aside from that, it didn’t actually run on real hardware, which was a shame.

8088 MPH was the opposite of this: it ran fine on real hardware, but not on any emulators. We were hoping that emulator developers would get inspired by the demo, and pick up the challenge to make an emulator that is accurate enough to run it. Initially there did not seem to be too much interest, but over time (and after reenigne himself built proper NTSC decoding into DOSBox to support the new high-colour modes), a few developers started to get serious about it.

It seems that 8088 MPH has become ‘notorious’ as a compatibility benchmark, and it has been used to demonstrate an 8088/8086-compatible soft-core by MicroLabs.

And although it took a long time, someone has finally added a reference to 8088 MPH to the CGA-article on Wikipedia:

Later demonstrations by enthusiasts have increased the maximum number of colors the CGA is known to produce in a single image to approximately a thousand. Aside of artifacting, this technique involves the text mode tweak which quadruples its rows, thus offering the benefit of 16 foreground and 16 background colors. Certain ASCII characters such as U and ‼ are then used to produce the necessary patterns, which result in non-dithered images with an effective resolution of 80×100 on a composite monitor.[23]

Granted, it is not a very accurate description, and for some reason, the actual demo title or groups are not mentioned, nor did they show any screenshots of the demo to illustrate the new modes. But at least there is a mention there, which is a nice start.

And lastly, there was Revision 2016. We were nominated for two Meteoriks, “Best low-end demo” and “That’s not possible on this platform!”. We ended up winning the latter category.

We feel it is a great honour that people chose this demo as the most ‘impossible’ demo of 2015. I have always been an ‘oldskool’ scener, and to me, making demos is all about doing ‘impossible’ things on your machine. Pushing the machine to its limits, and beyond. So to me this category embodies the oldskool demoscene, and it is great to be the winner of this particular category. While we would also have liked to have won in other categories, we realize that our demo may not have been the slickest production around. Partly because of the limitations of our platform, but also partly because making demos of this scale on this platform is completely new, and as we said in the end-scroller, this is only the beginning. Techniques have to evolve and mature, to get closer to the sophistication of demos on other platforms. We couldn’t completely bridge the gap in just a single demo. But we hope that people build on our work, and take it further.

I really like what Urs said about our demo when announcing us as the winner: that the category was originally meant for a single effect that was ‘impossible for the platform’, but that our demo stood apart because it did not have just one such effect; basically every effect in the entire demo was like that, which is quite unique. And that this demo inspires others to do new ‘impossible’ things as well.

I also really liked that he said that he hopes for us to make more demos like this. It seems we have really succeeded in putting the IBM PC 5150/5155/5160 on the map as a valid demo platform.

I was also pleasantly surprised by the huge cheering from the crowd during the showing of the nominees. It seems they cheered loudest for 8088 MPH. It was very cool to have our demo in the spotlight a second time, a year after we originally made it.

Lastly, we had another nice surprise in the oldskool demo compo.

The C64 demo by Fairlight/Offence/Noice parodied the 8088 MPH intro screen. Imitation is the sincerest form of flattery? I think it is a sign that they loved our demo last year (or at least our ‘respect’ to the C64 platform, which we feared would be our biggest competitor in the competition), even though we beat them last year. Perhaps it actually means that this time they were afraid that another PC demo would be their biggest competitor? Well, sadly we did not have the time to make a demo this year, or even attend Revision.

At any rate, 8088 MPH has become everything we hoped it would be, and so much more than that, beyond our wildest dreams. We would like to thank everyone who has voted for our demo, commented on it, spread the word about it, or whatever else they may have done to make 8088 MPH as popular and well-known as it is today, and for all the recognition we received for our work.


Rise of the Tomb Raider: DX12 update

An update for Rise of the Tomb Raider was released yesterday, adding support for the DX12 API: http://steamcommunity.com/games/391220/announcements/detail/870690772757808963

And: http://tombraider.tumblr.com/post/140859222830/dev-blog-bringing-directx-12-to-rise-of-the-tomb

As you can see:

Adds NVIDIA VXAO Ambient Occlusion technology. This is the world’s most advanced real-time AO solution, specifically developed for NVIDIA Maxwell hardware. (Steam Only)

So it seems like this is the first game which uses some actual DX12_1 features. VXAO uses the new conservative rasterization feature of DX12_1, which I personally mentioned as one of the most important new features in DX12 on various occasions, since it allows new/more advanced rendering algorithms. It will be interesting to see how this feature is received by the general public. It will also be interesting to see how many other games will follow this example.

Edit: According to this article on Tech Report, the VXAO feature is only available in DX11 mode, which would imply that it uses DX11.3, which gives access to the same DX12_1 features, through the older API.
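For completeness, a small sketch (my assumption of how you would check for it, not taken from Nixxes or nVidia) of querying the same hardware feature through DX11.3:

    #include <d3d11_3.h>

    // Assumes 'device' is a valid ID3D11Device* on a DX11.3-capable runtime.
    D3D11_FEATURE_DATA_D3D11_OPTIONS2 options2 = {};
    device->CheckFeatureSupport(D3D11_FEATURE_D3D11_OPTIONS2, &options2, sizeof(options2));

    if (options2.ConservativeRasterizationTier >= D3D11_CONSERVATIVE_RASTERIZATION_TIER_1)
    {
        // The DX12_1 conservative rasterization hardware is exposed through DX11.3 as well.
    }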

This post from Nixxes on the Steam Community forum seems to imply that support for VXAO in DX12 will be added later:

You should see VXAO as the rightmost ambient occlusion option, assuming you are on Maxwell hardware, have up to date drivers, and are running on DX11. VXAO is not yet available on DX12 from NVIDIA.



Pseudoscience: DNA activation

The second ‘treatment’ offered on the site promoting Kangen Water is a thing called ‘DNA Activation’. Now in this case, they do not even try to keep up a scientific appearance. They just flat-out reject the theories of DNA held by the ‘establishment’, as in conventional science, and present their own. I suppose this strategy was chosen because the two-stranded ‘double helix’ shape of DNA is an iconic image that most people will be familiar with. With Kangen water, on the other hand, they probably think the clustering theory may actually sound plausible, since most people will not know anything about the molecular structure of water at that level.

What is it?

To be honest, it is not quite clear what it actually is. They claim that DNA actually has far more than just 2 strands, namely 12, which they refer to as ‘junk DNA’, and that these extra strands can be ‘activated’. However, it is not made clear how exactly these extra strands would be activated, or why this activation would have any kind of effect. There is a lot of talk about spiritual and even extra-terrestrial concepts, but it mostly sounds like some strange conspiracy theory, and it does not go into the details of the process of activation at all.

What are the claimed benefits?

Again, not a lot of concrete information here. Phrases such as “Connecting to your higher self and your divine purpose”. Quite hollow rhetoric. But perhaps that is the idea: everyone will fill in these hollow phrases with their own interpretation of something they actually desire.

I think in short it is supposed to make you feel better, whatever that means specifically.

So what is the problem?

Well, for starters, they do not even get the conventional theory of DNA correct. The site says: “At this moment most people on the planet have 2 double helix strands activated.”

No, conventional theory says that two strands of molecules bond together to form a single double-helix molecule.

Another flaw is that they take ‘junk DNA’ literally. As if it is useless. The term ‘junk DNA’ is in fact used by conventional science, but it is meant to describe the parts of the DNA that are ‘non-coding’ in terms of genetic information. Science does not claim it is useless. In fact, since DNA is closely tied to the theory of evolution, that would imply that useless features would eventually evolve away. The fact that junk DNA still exists in all lifeforms would indicate that this particular form of DNA was preferred through natural selection. It is believed that junk DNA actually plays a role in the replication of DNA during mitosis. Aside from that, it is also believed that the non-coding parts of DNA help prevent mutations, since only a small part of the molecule actually carries the ‘critical’ genetic information. Think of it as the organic equivalent of a lightning rod.

Another issue is that they try to connect the function of the strands of DNA to ancient Indian concepts such as chakras. They also claim that DNA activation was already practiced by ancient civilizations.

The obvious problem there is that these ancient civilizations had no idea what DNA was, since it wasn’t discovered until 1869, and the current theory of the double-helix shape was not formed until 1953.

This theory was of course not just pulled out of thin air. The researchers used the technique of X-ray diffraction to study the structure of the molecule. The image known as “Photo 51” provided key evidence for the double-helix structure. If DNA in fact had far more strands, then this image would have looked very different. And X-ray diffraction is ancient technology, of course. Better methods for imaging DNA have been developed since, and more recently, with the work of some Italian researchers, it has become possible to take direct images of DNA. These images still confirm the double-helix model, with 2 strands.

[Image: direct imaging of DNA fibers]

Aside from that, the whole mechanics of DNA activation are not clear to me. Even if we assume that there were more than 2 strands, then how exactly would one activate them? The DNA is in every cell of your body, so you would have to activate millions of cells at once. And our cells are constantly dividing and duplicating, so how do you activate the DNA when it is being replicated all the time? It would have to be done almost instantly, else you get a new cell with new DNA that is not activated yet, and you have to start all over.

And even if you ‘activate’ this DNA, what exactly does that mean? It seems to imply that it suddenly unlocks genetic information that was ignored until now. But that does not make sense. Because if this genetic information can be unlocked through ‘activation’, it could also be unlocked ‘by accident’ through mutation. So some people would be born with ‘activated’ DNA. And if this ‘activated’ DNA is indeed superior, then these people would evolve through natural selection at the cost of the inferior ‘non-activated’ people. Unless of course they want to deny the whole theory of evolution as well. But then, why bother to base your theory on DNA and genetics in the first place?

The mind boggles…

Lastly, I think we can look back at Kuhn’s criteria. Similar to Occam’s Razor, Kuhn states that the simplest explanations are preferred over more complex ones. The theory of 12 DNA strands is certainly more complex than the theory of 2 DNA strands. They could have formulated their theory with just 2 DNA strands anyway, since they seem to base the activation on the ‘junk DNA’. There was no need to ‘invent’ extra strands for the junk DNA. Conventional science had already stated that 98% of the DNA molecule is non-coding. They could have just gone with that. It seems, by the way, that this ‘DNA activation’ or ‘DNA perfection’ theory originated with Toby Alexander. That should give you some more leads.

Final words

But wait, you say. Just because certain theories do not fit into the current paradigm does not mean they are necessarily wrong. As Kuhn said, every now and then it is time for a revolution to redefine the paradigm. Yes, but Kuhn also said that there needs to be a ‘crisis’ with the current paradigm: observations that cannot be explained by the current theories. And new theories should be a better explanation than the current ones. Such as Einstein and Newton: the Newtonian laws were reasonably accurate, but Einstein ran into some things that could not be explained by them. Einstein’s new laws were a more accurate model, which could explain everything the Newtonian laws did, and more.

In this case, we do not have a crisis. And the new theories are not a refinement of our current paradigm, but are actually in conflict with observations that ARE explained properly by the current paradigm. It seems that these theories are mainly designed to solve the ‘crisis’ of trying to explain whatever it is that these people want to sell you.

And since the topics I have discussed here are scientific, as per Merton’s norm of communalism, all the knowledge is out there, available to everyone. You do not have to take my word for it, I merely point out some of the topics, facts and theories which you can verify and study yourself by finding more information on the subject, or talking to other scientists.

In the name of proper science, I also had these articles peer-reviewed by other scientists in relevant fields.

By the way, the aforementioned page doesn’t just stop at Kangen Water and DNA Activation. There is also a page on ‘Star Genie crystals’. I figured that topic is so obviously non-scientific that I do not even need to bother covering it. These pseudoscientists move quickly though. As I was preparing these articles, a new page appeared, about ‘Schumann Resonance’. Again, taking something scientific (Schumann resonance itself is a real phenomenon), and making it into some kind of spiritual mumbo-jumbo. And mangling the actual science in the process. For example, it makes the following claim:

“Raise your frequency, enrich your life!”

Well, then you don’t understand resonance. Resonance is the phenomenon where one oscillating system ‘drives’ another to increase its amplitude at a preferential frequency. So resonance can raise your amplitude, but not your frequency. Which is also what the Schumann resonance pseudoscience *should* be about. Namely, it is based on the fact that alpha brainwaves are at a frequency very close to the strongest Schumann resonance of the Earth (around 7.83 Hz). So the pseudoscience claim should be along the lines that your brain can ‘tune in’ to this ‘Earth frequency’. But I suppose this particular website does not even know or care what they’re trying to sell, as long as it sells.
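For reference, the standard textbook result (my addition, not from the site): the steady-state amplitude of a damped oscillator driven at angular frequency ω is

    A(\omega) = \frac{F_0/m}{\sqrt{\left(\omega_0^2 - \omega^2\right)^2 + (\gamma\omega)^2}}

which peaks when the driving frequency ω approaches the natural frequency ω₀. The driving raises the amplitude; it does not change either frequency.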

If no single one of these scams makes you doubt the trustworthiness of this site, then the fact that every single ‘treatment’ offered on this site turns up tons of discussion on scientific/skeptic-oriented sites should at least make you think twice. This is not just coincidence.


Pseudoscience: Kangen Water

I suppose that most of you, like myself, had never heard of Kangen water before. So let’s go over it in a nutshell.

What is it?

Kangen water is water that is filtered by a special water purification device, sold by a company named Enagic.

The device uses electrolysis to ionize the water, making it alkaline (pH larger than 7), and it uses special filters, which are claimed to ‘cluster’ the water molecules.

What are the claimed benefits?

This depends greatly on who you ask. On Enagic’s site itself, you will see that there is not that much information about anything; you even have to find a distributor to find out the price. More on that later. You will find that the distributors tend to make claims about Kangen being beneficial to your health, because of things like hydration, detoxification effects, restoring the acid-alkaline balance of your body, and anti-oxidants. Claims can go as far as Kangen water preventing cancer.

Click here and here for a typical example of a site promoting Kangen water.

So what is the problem?

On the surface, it all looks rather scientific, with all the technical terms, diagrams, videos with simple demonstrations of fluid physics, and references to books and people with scientific degrees. But is any of it real? Can the claims be verified independently?

One of the first clues could be the water clustering. The site refers to the book “The water puzzle and the hexagonal key” by Dr. Mu Shik Jhon. The idea of hexagonal water is not accepted by conventional science. While water clusters have been observed experimentally, these clusters are volatile, because hydrogen bonds form and break continuously at an extremely high rate. It has never been proven that there is a way to get water into a significantly stable form of clusters. Another name that is mentioned is Dr. Masaru Emoto. That is ‘doctor of alternative medicine’, from some open university in India. He takes the water cluster/crystal idea further, and even claims that speech and thought can influence the energy level of these water crystals. Clearly we have stepped well into the realm of parapsychology here, which again is not accepted by conventional science, due to lack of evidence and verification.

This idea of water structures or ‘water memory’ is actually quite old, and has often been promoted in the field of homeopathy, as a possible mechanism to explain homeopathic remedies. You could search for the story of Jacques Benveniste and his paper published in the science journal Nature, Vol. 333 on 30 June 1988. When Benveniste was asked to repeat his procedures under controlled circumstances in a double-blind test, he failed to show any effects of water memory.

A similar story holds for anti-oxidants. A few years ago there was a ‘hype’ around anti-oxidants, connected to the free-radical theory of aging, which later turned into the ‘superfoods’ craze. Studies showed very good health benefits of anti-oxidants, and many food companies started adding anti-oxidant supplements and advertising with them.

More recently however, studies have shown that anti-oxidant therapy has little or no effect, and in fact can be detrimental to one’s health in certain cases. The current stance of food authorities is that the free-radical theory of aging has no physiological proof ‘in vivo’, and therefore the proclaimed benefits of anti-oxidants have no scientific basis.

Likewise there is no scientific basis for any health effects of alkaline water. Physiologically it even seems unlikely that it would have any benefit at all. As soon as you drink the water, it comes into contact with stomach acid, which will lead to a classic acid-base reaction, neutralizing the ionization of the water immediately. Which is probably a good thing, because if it actually did have an effect on your body pH, drinking too much of this water could be dangerous.
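In plain chemistry terms (my illustration, not the seller’s): the hydroxide ions that make the water alkaline simply meet the hydrochloric acid in the stomach,

    \mathrm{HCl + OH^- \longrightarrow H_2O + Cl^-}

and whatever ‘ionization’ the machine produced is gone long before the water reaches your bloodstream.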

Because the body pH is important, the body is self-regulating, through a process known as acid-base homeostasis. The body has several buffering agents to regulate the pH very tightly. In a healthy individual there is no need for any external regulation of body pH; in fact it is your body that decides the pH. This is done mainly by two processes:

  1. By controlling the rate of breathing, which changes CO2 levels in the blood (see the equilibrium below).
  2. Via the kidneys. Excess input of acid or base is regulated and excreted via the urine.
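The first mechanism is essentially the bicarbonate buffer, a standard textbook equilibrium (added here for illustration):

    \mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}

Breathing faster removes CO2 and shifts this equilibrium to the left, lowering the H+ concentration and raising blood pH; breathing slower does the opposite.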

Note also that this is a balance. The claims generally imply that acidic is bad and alkaline is good. But in reality, being too alkaline (alkalosis) is just as bad as being too acidic (acidosis). It is all about the balance.

You can find various information on this and other water scams on the net, from reputable sources such as this overview by the Alabama Cooperative Extension System.

So, there is no scientific basis whatsoever for the claim that Kangen water has any positive effect on your health, or even that the Kangen machine can give the water some of the claimed properties, such as hexagonal clustering. Which might explain how these machines are marketed. The distribution is done through a multi-level marketing scheme, also known as a pyramid scheme. You can find the agreement for European distributors here. Note the very strict rules about advertisement. They do not want the Enagic name used anywhere, unless they have specifically checked your content and have given you written approval. Apparently they most certainly do NOT want you to make claims that the product or company cannot back up.

Another red flag is that the advertisements are generally done via ‘testimonials’: people who tell about their experiences with the product. They tend to be the ones that make claims about health and that sort of thing. The key here is that a seller is never responsible for what anyone says in a testimonial. So beware of that: any claims the seller does not explicitly make, but which are only put forward in a testimonial, can likely not be backed up. Otherwise the seller would just make these claims himself, in order to promote the product.

The pyramid scheme also makes these machines very expensive, because each seller at each level will want to make a profit, causing a lot of inflation of the price. Based on the parts used in a Kangen machine, the whole thing could probably be built for under 100 euros. But these machines actually go for prices in excess of 1000 euros. This is why they don’t list prices on the main website, but tell you to contact your distributor. They do have a web store, but as you can see, those prices are very high as well. If you are lucky enough to find a distributor who is high up in the hierarchy, you can get these machines for less. Try searching eBay for these machines, for example.

This so-called Kangen water, and machines for ionizing water, can be traced back to 1950s Japan. One would expect that, if this Kangen water were indeed as healthy and beneficial as claimed, there would be plenty of empirical evidence to support the claims by now, these machines would have become mainstream, and they would simply be sold in regular shops.

So, the Kangen machines appear to be a case of ‘cargo cult’ science: they make everything look and feel like legitimate science, but if you dig a little deeper, you will find that there is no actual scientific basis, and the references are mostly to material that is not accepted by conventional science, but considered to be of a pseudoscientific nature.

In fact, this particular seller seems to push things a bit TOO far, by also mentioning the ‘Bovis scale’, which is a common concept in dowsing… Which is a more ‘conventional’ type of pseudoscience. Similar to the topic I will be covering next time.


The philosophy of science

And now for something completely different… I have been critical of hardware and software vendors and their less ethical actions in the past. But a while ago, something happened that did not have anything to do with computer science at all. But it did have to do with ethics and science. Or rather, pseudoscience.

Namely, someone who I considered to be a good friend got involved with a man who was selling Kangen water machines and performing “DNA activation”. As I read the articles on his website, I was reminded of the philosophy of science courses at my university. About what is real science and what is not. What is a real argument, and what is a fallacy.

There was also the ethical side of things. Clearly she did not realize that this man was a fraud. But should she be told? I believe that everyone is free to make their own choices in life, and to make their own mistakes. But at the least they should have as much information as possible to try and make the right decisions.

Anyway, so let’s get philosophical about science.

Demarcation problem

Before we can decide what is science and what is not, we must first define what science is. This is actually a very complicated question, which people have tried to answer throughout history, and can be traced back all the way to the ancient Greek philosophers. It is known as the ‘demarcation problem’.

A part of the problem is that science has been around for a long time, and not everything that was historically seen as being part of science may actually fit more modern views of science. If you try to approach the problem from the other side, then your criteria may become too loose, in order to also fit the historical, or ‘de facto’, sciences, since scientific methodologies have evolved over time as human knowledge grew.

There are two ways to approach the problem. One way is a descriptive definition, trying to name characteristics of science. Another way is a normative definition, defining an ideal view of what science should be. This implies, however, that humans have some ‘innate sense’ of science.
Neither approach will lead to a perfect definition of science, but there is something to be said for both. So let us look at some attempts at defining science throughout the ages.

Sir Francis Bacon

Seen as one of the ‘fathers of modern science’, Sir Francis Bacon (1561-1626) wrote a number of works on his views of science, in which he put forward various ideas and principles that are still valid and in use today. In his day, science was still seen as classifying knowledge, and being able to logically deduce particular facts from the larger system of knowledge, from more general facts, known as deductive reasoning. In such a system of science, progress in the sense of new inventions and discoveries could never happen, because they would not fit into the existing framework of knowledge.

So instead of deductive reasoning, Bacon introduced a method of inductive reasoning: the exact opposite. Instead of deducing specifics from generics, specific observations are generalized into more universal theories by looking at connections between the observations.
However, the potential flaw here is that humans are fallible and their senses can deceive them. So Bacon and other philosophers have tried to identify the pitfalls of human error, and have tried to come up with methodologies to avoid error, and find the truth. You might have heard one of Bacon’s messages: “ipsa scientia potestas est” (knowledge itself is power).

Scientists have to be skeptical, objective and critical. They also have to form a collective, and check each other’s work, in order to ‘filter’ or ‘purify’ the knowledge from human error. Scientific knowledge is universal, and belongs to everyone. The goal of scientific research should be to extend our knowledge, not to serve any particular agenda. These ideals and principles were later characterized by Robert K. Merton in his four ‘Mertonian norms’:

  • Communalism
  • Universalism
  • Disinterestedness
  • Organized Skepticism

These norms are also known by the acronym of ‘CUDOS’, formed by the first letters of each word.

Karl Popper

In the early 1900s, a movement within philosophy emerged, known as logical positivism. It was mainly propagated by a group of philosophers and scientists known as the Wiener Kreis (Vienna Circle). Logical positivism promoted the verifiability principle: a statement only has meaning if it can objectively be evaluated to be true or false, in other words, only if it can be verified. This rejects many statements, such as religious ones, for example: “There is a god”. There is no way to prove or disprove such a statement. Therefore, it is cognitively meaningless.

While there certainly is some merit to this way of thinking, it also leads to a human error, namely that of confirmation bias. Let us use an abstract example (yes, the computer scientist will use numbers of course). In scientific research, you will perform experiments to test certain theories. These experiments will give you empirical measurement data. To abstract this, let us use numbers as our abstract ‘measurements’.
Consider the following statement:

All numbers greater than -5 are positive.

Now, there are many numbers you can think of which are both positive and greater than -5. In fact, there are infinitely many such numbers. So, every time you find such a number, you confirm the statement. But does that make it true? No, it does not. We only need to find one number that is greater than -5, but not positive, and we have refuted the statement. In this abstract example, we already know beforehand that the statement is false. But in scientific research you do not know the outcome beforehand, which is why you perform the experiments. But if your experiments never explore the area in which the tests would fail, you will never see any evidence that the theory is flawed, even though it is.
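To put the same point in a form a computer scientist might appreciate, here is a small sketch (mine, purely illustrative) of confirmation versus falsification for that statement:

    #include <iostream>

    // "All numbers greater than -5 are positive" as a predicate: the statement
    // only makes a prediction for n > -5, so it holds whenever n <= -5 or n > 0.
    bool claim_holds(int n)
    {
        return n <= -5 || n > 0;
    }

    int main()
    {
        int confirmations = 0;
        for (int n = 1; n <= 10000; ++n)    // thousands of confirming instances
            if (claim_holds(n))
                ++confirmations;
        std::cout << confirmations << " confirmations\n";

        for (int n = -4; n <= 0; ++n)       // the region the claim was never tested in
            if (!claim_holds(n))
            {
                std::cout << n << " is greater than -5 but not positive: refuted\n";
                break;                      // one counterexample is enough
            }
        return 0;
    }

Ten thousand confirmations did not make the statement any more true; a single counterexample settled it.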

This, in a nutshell, is the criticism that Karl Popper (1902-1994) had of logical positivism. Scientists would believe that a theory became more ‘true’ the more tests were done to confirm it, and at some point it would be considered an absolute truth, a dogma. However, a theory would only need to fail one test to refute it. Falsifying is a much stronger tool than verifying. Which is exactly what we want from science: strong criticism.

Popper was partly inspired by the emerging psychoanalysis movement (Freud, Adler, Jung), where it was easy to find ‘confirmation’ of the theories they produced, but impossible to refute them. Popper felt that there was something inherently unscientific, or even pseudoscientific, about that, which led him to find new ways to look at science and scientific theories. The scientific value of a theory is not in the effects that can be explained by the theory, but rather in the effects that are ruled out by the theory.

Thomas S. Kuhn

One great example of the power of falsifiability is Einstein’s theory of relativity. The Newtonian laws of physics had been around for some centuries, and they had been verified many times. Einstein’s theories predicted some effects that had never been seen before, and would in fact be impossible under the Newtonian laws. But, Einstein’s theory could be falsified by a simple test: According to Einstein’s theories, a strong gravitational field, such as from the sun, would bend light more than Newtonian laws would imply, because it would distort space itself. This would mean that during a solar eclipse it should be possible to see stars which are behind the sun. The light of stars that are close to the edge of the sun would have to pass through the sun’s ‘gravitational lens’, and their path would be warped.
Einstein formulated this theory in 1915, and in 1919 there was a solar eclipse where the theory could be put to the test. And indeed, when the astronomers determined the position of a star near the edge of the sun, its position was shifted compared to the ‘normal’ position, by the angle that Einstein had calculated. The astronomers had tried to falsify the theory of relativity, and it passed the test. This meant that the Newtonian laws of physics, as often as they had been validated before, must be wrong.
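For the record (the numbers are the standard ones, not taken from the original post), the deflection of a ray of light grazing the edge of the sun comes out under general relativity as

    \delta = \frac{4 G M_\odot}{c^2 R_\odot} \approx 1.75''

roughly twice the value of about 0.87'' that a purely Newtonian calculation gives, which is what made the 1919 eclipse measurement such a clean test between the two theories.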

The main problem with Popper’s falsifiability principle, however, is that it is not very practical in daily use. Most of the time, you want to build on a certain foundation of knowledge, in order to apply scientific theories to solve problems (‘puzzles’). So Thomas S. Kuhn (1922-1996) proposed that there are two sides to science. There is ‘normal science’, where you build on the existing theories and knowledge. And then there are exceptional cases, such as Einstein’s theory, which amount to a ‘paradigm shift’, a revolution in that field of science.

During ‘normal science’, you generally do not try to disprove the current paradigm. When there are tests that appear to refute the current paradigm, the initial assumption is not that the paradigm is wrong, but rather that there was a flaw in the work of the researcher; the result is just an ‘anomaly’. However, at some point, the number of ‘anomalies’ may stack up to a level where it becomes clear that there is a problem with the current paradigm. This leads to a ‘crisis’, where a new paradigm is required to explain the anomalies more accurately than the current one. This leads to a period of ‘revolutionary science’, where scientists will try to find a better paradigm to replace the old one.

This also leads to competing theories on the same subject. Kuhn laid out some criteria for theory choice:

  1. Accurate – empirically adequate with experimentation and observation
  2. Consistent – internally consistent, but also externally consistent with other theories
  3. Broad Scope – a theory’s consequences should extend beyond that which it was initially designed to explain
  4. Simple – the simplest explanation, principally similar to Occam’s razor
  5. Fruitful – a theory should disclose new phenomena or new relationships among phenomena

Logic and fallacies

Scientific theories are all about valid reasoning, or in other words: logic. The problem here is once again human error. A scientist should be well-versed in critical thinking, in order to detect and avoid common pitfalls in logic, known as fallacies. A fallacy is a trick of the mind where something might intuitively sound logical, but if you think about it critically, you will see that the drawn conclusion cannot be based on the arguments presented, and therefore is not logically sound. This does not necessarily imply that the concluded statement itself is false, merely that its validity does not follow from the reasoning (which is somewhat of a meta-fallacy).
There are many common fallacies, so even trying to list them all goes beyond the scope of this article, but it might be valuable to familiarize yourself with them somewhat. You will probably find that once you have seen them, it is not that hard to pick them out of written or spoken text; you will quickly be triggered when a fallacy comes along.
You should be able to find various such lists and examples online, such as on Wikipedia. Which brings up a common fallacy: people often discredit information linked via Wikipedia, based on the claim that Wikipedia is not a reliable source. While it is true that not all information on Wikipedia is accurate or reliable, that is no guarantee that all information on Wikipedia is unreliable or inaccurate. This is known as the ‘fallacy of composition’. Harvard has a good guide on how to use Wikipedia, and says it can be a good way to familiarize yourself with a topic. So there you go.

Fallacies are not necessarily deliberate. Sometimes your mind just plays tricks on you. But in pseudoscience (or marketing for that matter), fallacies are often used deliberately to make you believe things that aren’t actually true.

So, now that we have an idea of what science is, or tries to be, we should also be able to see when something tries to look like science, but isn’t.

As a nice example, I give you this episode of EEVBlog, which deals with a device called the ‘Batteriser’.

It claims to boost battery life by up to 800%, but the technology behind it just doesn’t add up. Note how they cleverly manipulate all sorts of statements to make you think the product is a lot more incredible than it actually is. And note how they even managed to manipulate a Dr. from San Jose State University into ‘verifying’ the claims, which gives you the false impression that (all of) the claims for this product are backed up by an ‘authority’.
