In the two previous entries of this series I argued that, contrary to what Dembski’s filter suggests and needs, ‘law’ and ‘chance’ are not different types of explanation, but necessary and complementary elements of basically all explanatory models. Here I will try to show that ‘explanation from purpose’ is not as different from ‘explanation from laws’ (or from models, to be more faithful to real scientific explanatory activity) as defenders of ‘Intelligent Design’ want us to accept; certainly not different enough to entitle intelligent purpose to a separate, independent horn of the EF.
Remember that ‘Dembski’s filter’ consists in the claim that the explanation of any phenomenon P must be an explanation by (a) ‘laws’, (b) ‘chance’, or (c) ‘purpose’. My argument in this entry is that our empirical knowledge of the world shows us that ‘purpose’ and ‘design’ are just a particular type of capability that some empirically given entities or physical systems manifest, and hence that ‘explanation from purpose’ is just a specific type of ‘explanation by laws (plus chance)’. Diamonds have the capacity of cutting crystal, stars have the capacity of transmuting chemical elements, tree leaves have the capacity of photosynthesising sugar, and certain types of animals (including us) have the capacity of conceiving plans and acting by making and following purposes. Empirically, there is hence no reason at all to separate intelligent action as an ontologically different type of ‘causal force’ or ‘explanation’. Of course, many things that can be done through the action of an intelligent animal (which, besides its intelligence, has a musculo-skeletal system capable of physically interacting with its environment in order to carry out the plans and goals it has conceived) cannot be done through any other known (or even conceivable) physical process. But this does not legitimise the conclusion that the claim ‘this is due to intelligent action’ is a scientifically valid explanation per se. What we would need to add to such a claim to transform it into a real explanation is information about how the existence of the intelligent system initiated an empirically testable process whose final step was the result to be explained, for it is the theory about the causal process (not just about the causal ‘principle’ in which ‘intelligence’ would consist) that has explanatory power.
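The logical shape of the filter can be made explicit with a minimal sketch (this is my paraphrase, not Dembski’s own formalism); the two boolean inputs are hypothetical placeholders, and the sketch shows the tacit assumption at issue: the three verdicts are treated as mutually exclusive and jointly exhaustive.

```python
def dembski_filter(explained_by_law: bool, explained_by_chance: bool) -> str:
    """Return the filter's verdict for a phenomenon P."""
    if explained_by_law:
        return "law"
    if explained_by_chance:
        return "chance"
    # 'Design' is reached purely by elimination: no positive account
    # of the designing process is ever required.
    return "design"
```

Note that the ‘design’ verdict is purely residual: it is asserted whenever the other two checks fail, without any model of the designing process being offered.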
This means that ‘intelligence’, if it is to have any meaning as a legitimate concept within empirical science, refers to the peculiarities of the processes that take place within some empirically given entities or systems. For example, we say that intelligence refers to the characteristic way in which our organisms solve the problem of finding food, but not to the way in which they solve the problem of keeping blood in circulation. Of course, we do not know all the details about how our bodies solve either of these problems, and it is true that we know much less about the former than about the latter; but we do know at least that altering such and such parts of our brains severely distorts our capability of behaving intelligently, and that having certain types of nervous system allows an organism to display a wider range of intelligent behaviours. So we are able to attribute a causal power to ‘intelligence’ in the case of animals because there is an empirically testable causal link between the assumed cognitive states of their brains, on the one hand, and their physical, observable behaviour, on the other. If there were absolutely no way of inferring that a physical system contains something like ‘cognitive states’, then, no matter how complex and apparently purposeful its behaviour was, our natural response would be that this behaviour is not caused by a cognitive process, but by other properties of the system that have nothing to do with ‘intelligence’.
The scientific attitude towards intelligence and purpose is, hence, to take them as empirical phenomena, and to limit our claims about how they are linked to other events to the regular connections that we can empirically discover (through the common toolbox of scientific research strategies, from experimentation to model building to statistical analysis) between those psychological properties and those other facts (see, e.g., Narby (2005)). From this point of view, the regularities we happen to discover about physical systems that manifest intelligent and purposeful behaviour will be included in a natural way in the scientific models with which we try to explain whatever empirical facts we like. So, in a nutshell, to separate ‘explanation by intelligent design’ from ‘explanation by natural laws’ is exactly as absurd as to claim that ‘being the result of a digestion’, or ‘being the result of nucleosynthesis’, deserve to be considered as types of scientific explanation essentially different from ‘being the result of the operation of natural laws’. Explanation in science is always explanation by laws (cum statistical assumptions), independently of whether those laws are the laws of chemistry, geology, physics, or psychology.
To sum up, Dembski’s explanatory filter fails to fulfil the minimal demands of a ‘disjunctive syllogism’ argument because, in the first place, the three options he presents are not independent alternatives. The real options (i.e., the ones that are relevant in science) are not ‘this is explained by laws’, ‘this is explained by chance’, and ‘this is explained by design’, but rather something like the following: ‘this is explained by this specific model, or by this second specific model, or by this third specific model…, or by this n-th specific model’, where each model is a particular combination of laws and statistical assumptions, and each law can belong to any possible branch of science. How many horns does such a filter contain? Obviously, at least as many as we are able to invent. But, in the second place, even such a magnified filter still fails to fulfil one of the essential requisites of the type of logical argument it exemplifies: exhaustiveness. For we are not able, of course, to imagine all the possible explanations of a fact, and it can be the case (and in many cases it is!) that we fail to find any acceptable explanation for the phenomena we are trying to understand: perhaps many unknown types of causal process are responsible for those facts, i.e., our list of explanatory models can be (and usually is!) incomplete, simply because of our ignorance of the full space of possible explanations. So a complete version of the ‘real explanatory filter’ would be: ‘this fact is explained by this specific model…, or by this n-th specific model…, or the fact must remain, for the moment, unexplained’.
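The ‘real explanatory filter’ just described can also be sketched as a procedure: an open-ended list of candidate models (each standing for some combination of laws plus statistical assumptions), with ‘still unexplained’ as the honest fallback. The model names and tests below are illustrative inventions of mine, not real scientific models.

```python
def real_explanatory_filter(phenomenon, candidate_models):
    """candidate_models: a list of (name, fits) pairs, where fits is an
    empirically motivated test of whether that model explains the
    phenomenon. The list can always be extended as new models are
    invented, which is why no elimination step yields 'design'."""
    for name, fits in candidate_models:
        if fits(phenomenon):
            return name
    # Exhaustiveness cannot be guaranteed: failure of every known
    # model licenses no positive verdict at all.
    return "still unexplained"

# Illustrative (invented) candidate models:
models = [
    ("geological model", lambda p: p == "sedimentary strata"),
    ("psychological model", lambda p: p == "crow tool use"),
]

print(real_explanatory_filter("crow tool use", models))  # psychological model
print(real_explanatory_filter("dark energy", models))    # still unexplained
```

The crucial design difference with respect to Dembski’s version is the fallback: when no model fits, the procedure returns ‘still unexplained’ rather than silently converting our ignorance into a positive conclusion.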
Dembski, W. (1998), The Design Inference, Cambridge: Cambridge University Press.
Narby, J. (2005), Intelligence in Nature, New York: Penguin.