One last respect in which Dembski’s use of the ideas of information and probability is confusing is that it seems to ignore an inescapable consequence of a theory (T) being an explanation of certain phenomena or empirical laws (E).
Since a necessary requirement for T to explain E is that E logically derives from T (or, at least, from T together with certain initial conditions or limiting constraints), it follows that the prior probability of T can be at most as high as that of E. That is, the prior probability of the explanation is never higher than that of the explained phenomena (for it is an elementary truth of the probability calculus that, if A entails B, then p(A) ≤ p(B); the conclusion does not change, of course, if we replace prior probabilities with posterior ones). This means that the explanation (or the set of all causes) of E necessarily contains at least as much information as E itself.
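The elementary fact that entailment cannot raise probability can be checked mechanically on a toy sample space. In the sketch below, the set of ‘worlds’, the events A and B, and the uniform measure are purely illustrative assumptions, chosen only so that A entails B:

```python
from fractions import Fraction

# Toy sample space of equally likely "worlds" (an illustrative assumption).
worlds = range(12)

# Event A (the theory holds) entails event B (the phenomena occur):
# every world in A is also a world in B, i.e. A is a subset of B.
A = {w for w in worlds if w % 6 == 0}   # A = {0, 6}
B = {w for w in worlds if w % 2 == 0}   # B = {0, 2, 4, 6, 8, 10}

def p(event):
    """Probability of an event under the uniform measure on `worlds`."""
    return Fraction(len(event), len(worlds))

assert A <= B          # A entails B ...
assert p(A) <= p(B)    # ... hence p(A) <= p(B)
print(p(A), p(B))      # 1/6 1/2
```

The inequality holds for any measure, not just the uniform one, since A = A ∩ B whenever A ⊆ B; the toy example merely makes the abstract point concrete.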
This is completely analogous to the NFL theorem, but it can be interpreted in a way very different from Dembski’s: what it shows is that the universe is essentially unexplainable! Let me explain: a fact ‘demands’ an explanation more strongly the more improbable it is given the rest of our knowledge; and the ‘stranger’ a phenomenon (or combination of phenomena) is, the more improbable the theory that succeeds in explaining it must be. So, the more facts our theories happen to explain (the more successful they are), the more improbable it will be a priori that the real world happens to be one in which those theories are true, rather than a world governed by different laws (or by no laws at all); and hence, the more difficult it will be to find an explanation of why the world obeys precisely those laws.
Hence, recognising that the explanations we offer must already contain all the information we find in what we want to explain does not necessarily lead to the conclusion that this information comes from a ‘mind’, or from some supernatural entity; it is simply a reminder of our fate as travellers on an unended quest: we live in a universe that happens to be the way it is, while it could have been any of an incalculably large number of different ways, and we shall simply never know why our universe is the way it is. The more successful science is, the more we will be able to condense our description of the world into simple and elegant theories, but this will not bring us a single millimetre closer to the ‘metaphysical source’ of the ‘information’ our world contains, if the notion of such a ‘source’ has any sensible meaning at all.
Let me illustrate this point with a remake of the classic ‘clock-on-a-beach’ example (Paley, 1802). As defenders of ‘Intelligent Design’ have asserted for centuries, finding something as complex as a mechanical clock would lead us to infer the existence of someone who created that marvellous piece of engineering through an intelligent and conscious process. So, what are we to say about the existence of entities incredibly more complex than clocks, as living beings are? Well, we can reformulate the example in two opposite directions. First, imagine that what we discover on the beach is not a mechanical clock but a Palaeolithic arrow point. Second, imagine that what we discover is a PET scanner. In all three cases we shall be led to the conclusion that ‘somebody’ deliberately created our astonishing find, but who?
Reconsider the clock case: it was really created not by a single person but by a society capable of developing such a profession as that of clockmaker, with all the division of labour and the accumulation of technical, cultural and scientific knowledge that allows a person to become a clockmaker. The stone point, being much simpler than the clock, would have been created by a much less complex society (or, at least, we need not suppose its society to have been more complex). The PET scanner, instead, demands a society still more complex than the one that created the clock, with still more division of labour and more accumulated knowledge. So, the more complex (and less probable a priori) our find is, the more complex its ‘explanation’ will necessarily be. Hence, if discovering a living being led us to think it so complex that it must be the product of some intelligent mind, that mind would need to be incredibly more sophisticated than the society capable of producing the PET scanner. And lastly, this ‘creator of living beings’ (or, to be more precise, the fact that this entity arrived at the idea of creating something like a living being, formed the purpose of doing it, and gathered the means necessary to do it) would be much more difficult to explain than the creator of the scanner (or, again, than the fact that in our society the idea and the decision to create such a thing somehow arose, together with the gathering of all the means to build it). Or, to connect this again with the discussion of the ‘No Free Lunch’ theorem: the fact that all the information needed to successfully start the process of creating a living being actually existed within the entity that started it is a fact that demands at least as much explanation as the existence of living beings themselves, and this is true whatever our hypotheses about the nature of that process may be.
This is a point that Richard Dawkins has recently insisted upon in his discussions of religion. And, by the way, the fact that living beings are so much more complex than artificial things may also lead us to suspect that they are due to a completely different kind of process, one capable of producing not only things as complex as snails or sequoias, but also animal societies capable of designing arrow points, clocks or PET scanners.
Dawkins, R., 2006, The God Delusion, Black Swan.
Dembski, W., 2002, No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence, Lanham, Rowman & Littlefield.
Paley, W., 1802, Natural Theology.