After having shown the ways in which William Dembski’s ‘explanatory filter’ (EF), offered in support of ‘intelligent design theory’ (ID), misconceives and misapplies the nature of scientific explanation, I shall devote the last entries of this series to discussing another mistake in Dembski’s work: the way in which he employs the ‘no free lunch’ (NFL) theorems.

According to the NFL theorems, no search algorithm performs, on average, better than blind search (given certain assumptions about the stochastic process on which the algorithm works; for example, that the objective functions are drawn from a uniform distribution), and hence selection algorithms (e.g., Darwinian processes) only ‘work’ if they have been ‘intelligently programmed’ with the information necessary to perform better than chance. This is a valid mathematical theorem, and hence it cannot be contested empirically. As a mathematical truth, it also cannot contradict any state of affairs that happens to obtain in some possible world: possible worlds in which god exists and worlds in which she doesn’t, possible worlds in which there are animals or other living beings and worlds in which there are none, would all be worlds in which the NFL theorems hold, and this simple fact already raises suspicions about the use of the theorems to draw any *factual* conclusion.
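What the theorems assert can be illustrated with a toy computation (the domain sizes and the two competing search rules below are made up purely for the illustration): if objective functions are drawn uniformly, any two searches that never revisit a point achieve exactly the same average result, however ‘clever’ one of them tries to be.

```python
import itertools

DOMAIN = range(4)   # a search space of 4 points
VALUES = range(3)   # possible objective values 0, 1, 2
BUDGET = 3          # evaluations allowed per search

def fixed_order(f):
    """Non-adaptive 'blind' search: evaluate points 0, 1, 2 in order."""
    return max(f[x] for x in range(BUDGET))

def adaptive(f):
    """Adaptive search: the next probe depends on the values seen so far."""
    seen = {0: f[0]}
    while len(seen) < BUDGET:
        unvisited = [x for x in DOMAIN if x not in seen]
        # heuristic: if the best value so far is high, probe the farthest
        # unvisited point, otherwise the nearest one
        x = max(unvisited) if max(seen.values()) >= 1 else min(unvisited)
        seen[x] = f[x]
    return max(seen.values())

# average best-found value over ALL functions f: DOMAIN -> VALUES
funcs = list(itertools.product(VALUES, repeat=len(DOMAIN)))
avg_blind    = sum(fixed_order(f) for f in funcs) / len(funcs)
avg_adaptive = sum(adaptive(f) for f in funcs) / len(funcs)
print(avg_blind, avg_adaptive)   # identical averages
```

Since every fresh point of a uniformly drawn function is an independent uniform draw regardless of what has been observed before, the adaptive rule gains nothing: both averages come out equal.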

But, of course, even if the theorem is logically valid, what can be contested is the *interpretation* that ID proponents make of it. For example, Dembski assumes that, if a process leads to the emergence of something sufficiently complex, then the process must have been ‘consciously programmed’; but there are other alternatives: it can be the case that some assumptions of the theorem do not obtain in the real process (e.g., the underlying probability distribution may not be uniform, or ‘bingo-like’), *or* it can be the case that the real world does contain the information the process needed to make the complex entity emerge, though this information has not been ‘introduced’ into the world by something like ‘a mind’ (cf. Häggström, 2007). To understand this second possibility, consider the emergence of a planetary system out of a nebula of gas and dust: the system originates from the simple laws of mechanics (plus nuclear physics, in the star, and chemistry, in the planets); the planets can have a lot of marvellous details, from seas and caves to volcanoes and dawns. Hence, we must conclude that the laws of physics and chemistry, plus the ‘initial’ distribution of dust and gas in the original nebula, contained *all* the information needed to create such improbable details; but this does not entail that the molecules of the nebula were placed exactly where they should be ‘by an intelligent mind’.
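The first alternative, that the uniformity assumption may simply fail in nature, can also be shown numerically (again with a made-up toy setup, including the illustrative `weight` function): if ‘structured’ landscapes are more probable than chaotic ones, one search rule can outperform another on average, with no mind having inserted the difference.

```python
import itertools

DOMAIN = range(4)   # a search space of 4 points
VALUES = range(3)   # possible objective values 0, 1, 2
funcs = list(itertools.product(VALUES, repeat=len(DOMAIN)))

def weight(f):
    """Non-uniform prior: monotone ('structured') landscapes are 10x as likely."""
    return 10.0 if all(f[i] <= f[i + 1] for i in range(3)) else 1.0

total = sum(weight(f) for f in funcs)

# two one-evaluation searches: probe the left end vs the right end
avg_left  = sum(weight(f) * f[0] for f in funcs) / total
avg_right = sum(weight(f) * f[3] for f in funcs) / total
print(avg_left, avg_right)   # the right-end probe wins on average
```

Under a uniform prior the two probes would tie exactly; the tilt towards monotone landscapes alone is what makes the right-end rule systematically better.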

Dembski’s interpretation is founded on a confusion that deserves to be mentioned: ‘information’ is too messy a concept, and very often it is employed as if it had something to do specifically with *minds*, or even as if it could only be *created* by minds. In fact, there is no theory (neither mathematical nor empirical) about the *creation ex nihilo* of information (thermodynamics says perhaps something about its ‘destruction’): what mathematical theories of information say is simply how a system would evolve *given* that it has such and such amounts or types of information, and how its information would be ‘distributed’ among the parts of the system under certain conditions. But the mathematics of information is as agnostic about the ‘ontological origin’ of information as arithmetic is about the ‘creation’ of numbers; and, in exactly the same way, it is as silent about whether information has something essential to do with ‘minds’ as arithmetic is about whether numbers have something fundamental to do with bank accounts. ID proponents assert that ‘mechanistic’ processes cannot ‘create new information’, and derive from this dubious premise the conclusion that this information ‘must have been created by a mind’; but the NFL theorems make no relevant distinction between mechanistic and cognitive systems: if *no* algorithm can perform better than mere chance, it is irrelevant whether it is the algorithm describing the emergence of complex things by random mutation and natural selection, or the algorithm describing the functioning of someone’s mind. So, any argument showing that ‘mechanistic’ processes cannot make complex entities emerge would apply *in exactly the same way* to ‘mental’ processes.
Defenders of ID are, hence, committing the fallacy of assessing ‘mechanistic’ and ‘mental’ explanations by different criteria: they do not demand of the latter what they demand of the former, i.e., a clear explanation of *how* it is possible for a cognitive system (be it a physical brain or a supernatural agent) to *reach* the cognitive state of having the idea, the intention and the capability of producing the kind of complex entity whose existence we are trying to explain.

To give an illustrative example: imagine that Mozart had lived a creative life into his sixties. What is the probability of one of us being capable of replicating exactly one of the new symphonies Mozart would have composed in those extra three or four decades? Surely, it is as close to zero as we like. But Mozart himself *would have*! Is this due to some ‘non-mechanistic’ influence of Mozart’s mind on the physical universe (a universe that would not have ‘by itself’ the capacity to produce that piece of information, the new symphony)? Or is it simply because Mozart’s *brain* (taking into account both its amazing and unique structural peculiarities and the information it gained from Mozart’s education and experience) contained the information needed to give birth to those marvellous (and now inexistent) works? Obviously, it is for this second reason, and this leads us to the scientifically relevant question of *how it was possible* for a human brain to develop those capabilities, a question that the ‘information-always-comes-from-a-mind’ theory is not even able to address.

In conclusion, Dembski’s use of the NFL theorems wrongly assumes the axiom that ‘information always comes from some mind’. In combination with what we saw in the previous entries, we can affirm that this is again a fallacy of *petitio principii*: the argument uses as a premise precisely what it was supposed to prove, or at least an essential part of it.

**REFERENCES**

Dembski, W., 2002, *No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence*, Lanham, Rowman & Littlefield.

Häggström, O., 2007, “Intelligent Design and the NFL Theorems”, *Biology and Philosophy*, 22, 217-230.