Deconstructing intelligent design (2): Dembski’s “explanatory filter” is not a filter at all

Besides confusing what a scientific explanation is, as we saw in the previous entry, Dembski’s ‘explanatory filter’ (‘anything must be explained by law, by chance, or by design’) also commits the worst mistake one can make when using the logical rule known as ‘disjunctive syllogism’ (“either p or q; not p; ergo q”) as a method of inference: failing to ensure, first of all, that the proposed alternatives are mutually exclusive and jointly exhaustive. We shall see that the three horns of Dembski’s filter fail to satisfy both conditions.

Ignoring for a moment the question of deliberate purpose (the third horn of the filter), and also the question of whether the first two horns are exhaustive, which we shall examine in the next entries, it is simply false that explanations in science respond to some fundamental alternative between ‘law’ and ‘chance’. To begin with, there is nothing like ‘explanation from mere law’. Quite the contrary: a typical scientific explanation of a fact (or set of facts) always contains both elements (‘law’ and ‘chance’), usually in a very well integrated way, in what is customarily known as a model. A scientific model usually consists of a number of deterministic equations or other constraints, together with some assumptions about the statistical distribution of the errors (or, if the model is indeterministic, of the variables’ values themselves); to this we add some empirical information (e.g., measurements) about concrete entities or systems, information that, combined with those equations and statistical assumptions, allows us to infer other items of information (e.g., predictions). What serves to explain the facts we want to explain is the peculiar combination of our deterministic equations and our statistical assumptions about the deviations from the solutions of those equations. This means that there is simply no example in empirical science of ‘explanation from (mere) laws’, even in the case of deterministic theories, for there is always a stochastic element (due, e.g., to measurement or specification errors) in the application of the models to the empirical facts.
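
To make this concrete, here is a minimal sketch in Python (a toy free-fall ‘experiment’ with invented numbers, not a reconstruction of any actual study): the deterministic equation and the statistical assumption about measurement errors together form the model, and both are needed to connect the law to the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic 'law': free fall, d = (1/2) * g * t^2
g_true = 9.81
t = np.linspace(0.5, 3.0, 20)        # measurement times (seconds)
d_exact = 0.5 * g_true * t**2        # what the equation alone predicts

# Statistical assumption: measurement errors ~ Normal(0, sigma)
sigma = 0.05
d_obs = d_exact + rng.normal(0.0, sigma, size=t.size)

# The model = equation + error distribution. Estimating g from the
# data uses both components at once (here, by least squares):
g_hat = 2 * np.sum(d_obs * t**2) / np.sum(t**4)
print(f"estimated g = {g_hat:.3f} m/s^2")
```

Strip out the error distribution and the equation can no longer be confronted with real, noisy measurements; strip out the equation and there is nothing for the errors to deviate from.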

In a similar way, there is nothing in science like explanation ‘from mere chance’. When scientists infer that some data are ‘random’, what they are saying is that it has been possible to show that the data conform to a particular statistical distribution, or, more exactly, to what might be expected from some specific stochastic process. This means that scientists have in this case discovered a particular regularity; it just happens to be a statistical regularity rather than a deterministic one, and hence it becomes possible to calculate the probability that single data or sets of data exhibit such and such properties. Obviously, different assumptions about the stochastic process that is actually generating the observed data will lead to different predictions, and the statistical success or failure of these predictions will make scientists accept or reject those assumptions. Alternatively, when scientists reach the conclusion that no known stochastic process can lead to the statistical distribution of events they empirically observe, they do not assert that ‘these events are explained by chance’; rather, such a situation indicates that they do not know the explanation of those events, for they have been able to offer neither a theoretical model of the mechanism by which the events are produced, nor even a stochastic model of how they are generated, i.e., they have not been able to reduce the phenomena to any known regularity, whether deterministic or statistical.
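
As an illustrative sketch (assuming, purely for the example, that the hypothesized stochastic process is a fair six-sided die), this is roughly what ‘inferring randomness’ amounts to in practice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothesis: the data come from a fair six-sided die -- a SPECIFIC
# stochastic process, not 'chance' in the abstract.
rolls = rng.integers(1, 7, size=600)
observed = np.bincount(rolls, minlength=7)[1:]    # counts of faces 1..6
expected = np.full(6, rolls.size / 6)

# Chi-square goodness-of-fit: does the statistical regularity hold?
chi2, p = stats.chisquare(observed, expected)
print(f"chi2 = {chi2:.2f}, p-value = {p:.3f}")
# A tiny p-value would lead us to reject THIS stochastic model, not to
# conclude that the data are 'explained by chance' in some absolute sense.
```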

So, when Intelligent Design (ID) theorists talk about ‘explanation from chance’, they should make explicit what particular mathematical assumptions about the stochastic process they are referring to, and check whether the scientific models that are actually used to try to explain what they claim cannot be explained ‘by chance’ fulfil those assumptions or not. The fact is that they usually do nothing of the kind: for example, when ID theorists ‘calculate’ the probability of a particular protein being formed by counting the possible sequences of DNA, they are assuming that the stochastic process leading to the existence of the protein is mathematically equivalent to having an urn with infinitely many balls for each of the four DNA bases, from which we extract a number of balls equal to the length of the sequence needed for our protein. Of course, the causal process leading to the existence of a given protein is not mathematically (and hence probabilistically) equivalent to such a ‘bingo-like’ stochastic fiction, and the inferences that can be derived from this absurd model about the probability that, in the real world (e.g., in a world subject to the stochastic processes associated with Darwinian replication), such and such a protein is formed are patent nonsense.
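
The contrast can be shown with a toy computation (all parameters invented; the selection loop is in the spirit of Dawkins’s well-known ‘weasel’ demonstration, not a model of real molecular evolution): the urn calculation versus a crude replicate-mutate-select process aiming at the same 50-base sequence.

```python
import numpy as np

rng = np.random.default_rng(2)
BASES = "ACGT"
target = "".join(rng.choice(list(BASES), size=50))  # toy 50-base 'needed' sequence

# 'Urn' model: the whole sequence drawn at random in one shot.
p_urn = 0.25 ** len(target)
print(f"urn-model probability: {p_urn:.1e}")        # about 8e-31

# Crude replication with mutation and selection: keep the best copy.
def matches(seq):
    return sum(a == b for a, b in zip(seq, target))

def mutate(seq, rate=0.01):
    return "".join(c if rng.random() > rate else rng.choice(list(BASES))
                   for c in seq)

current = "".join(rng.choice(list(BASES), size=len(target)))
generations = 0
while matches(current) < len(target):
    current = max((mutate(current) for _ in range(100)), key=matches)
    generations += 1
print(f"replication + selection reached the target in {generations} generations")
```

The point is not that this loop models protein formation; it is that the probability of reaching an outcome depends entirely on which stochastic process is assumed, so the urn figure says nothing about processes that include replication and selection.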

Hence, contrary to what Dembski’s ‘explanatory filter’ says, scientific models do not explain ‘either’ through laws ‘or’ through chance, but always through some specific combination of deterministic equations and statistical regularities (the latter can affect the measurement processes, the ‘real’ variables, or both). Perhaps ID theorists would not consider this very relevant, for, in any case, they think that biological phenomena cannot be explained by any ‘combination’ of deterministic laws and statistical regularities. But, in fact, Dembski treats the first two horns of his three-horned dilemma as separate: he talks of ‘explaining by laws’ as if it were something like ‘finding a natural law stating that every time life emerges, it must always have such and such a type of protein’, which is patently absurd (scientific models employ ‘general’ laws, but the specific combination of laws a model employs is assumed to apply to a particular type of situation, so the model itself is not a ‘universal law’), and he talks of ‘explaining through chance’ as if it simply consisted in the ‘bingo-like’ model I have just criticized. No space is given in Dembski’s rhetoric to the mathematical possibilities of a combination of several universal laws and several statistical regularities applied to specific circumstances with specific constraints, which is how scientific models proceed when they try to explain anything.
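
A last minimal sketch (numbers invented) of the point in parentheses above: one and the same model can combine a deterministic component with statistical regularities affecting both the ‘real’ variables and their measurement.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# Deterministic component: a constant drift in the 'real' variable.
drift = 0.3
process_noise = rng.normal(0.0, 0.5, size=n)      # chance in the variable itself
x = np.cumsum(drift + process_noise)              # the 'real' trajectory

measurement_noise = rng.normal(0.0, 1.0, size=n)  # chance in the measuring process
y = x + measurement_noise                         # what is actually observed

# One model: a deterministic law plus two distinct statistical
# regularities, all jointly responsible for the observed series y.
print(y[:5])
```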

REFERENCES

Dembski, W., 1998, The Design Inference, Cambridge: Cambridge University Press.

Dembski, W., 2002, No Free Lunch: Why Specified Complexity Cannot Be Purchased Without Intelligence, Lanham: Rowman & Littlefield.
