# Searching for the lost causes

The notion of ‘cause’ has a bipolar personality within the sciences. On the one hand, it is difficult to resist the old Aristotelian dictum that real scientific knowledge (*episteme*) is knowledge of the causes; after all, one of the goals of scientific theories is to give explanations, and most people assume that to have an explanation of something is just to understand its causes (see Zamora Bonilla (2013) ^{1} for a brief discussion of explanation, causation and understanding). But, on the other hand, the concept of causality has been subjected to some of the most demolishing philosophical attacks, at least since David Hume argued in the 18th century that we can never observe a ‘necessary connection’ between two correlated events, nor infer one from previously known premises. Hume probably did not attempt to show that it is nonsense to say that some events (e.g., water heating) are the causes of some effects (evaporation); he simply denied that, when we assert this, we are entitled to claim a necessary connection between the two events over and above a merely empirically determinable correlation, adding that even the truth of this correlation is just a hypothesis that cannot be logically or mathematically proved from the data we happen to have. However, the stronger claim, i.e., that there is no causality in the natural world, and that ‘mature’ scientific theories do not contain (and are even incompatible with) the notion of cause-and-effect, has been defended by prominent scientists and philosophers at least over the last century.

One of the main reasons to be sceptical about the existence of causal relations (i.e., of an essential asymmetry between cause and effect) is, of course, the well-known fact that most physical theories are *time-invariant*. This is a concept many people find hard to grasp, though it is really very simple. Most fundamental physical theories (like Newtonian mechanics, Maxwell’s electrodynamics, or quantum mechanics) basically consist of a set of equations that allow us to specify the state of a system at any arbitrary moment of time. For example, given the positions and masses of two bodies at moment t, and given Newton’s law of gravity and his second law (F = m·a), we can use these equations to calculate both objects’ positions at a different moment t’. Given a theory, *possible descriptions* of physical systems (e.g., trajectories of bodies together with their masses) can be divided into two groups: those that satisfy the theory’s equations, and those that do not. Take now the description of a particular system (real or merely imaginable), and construct out of it a new system description which is exactly equal to the former, save for the fact that in it time goes backwards (i.e., the final state of the original system is now the initial state of the new system, and vice versa). The property of time-invariance simply means that the first description satisfies the equations of the theory *if and only if* the second one also satisfies them. Stated differently, a physical theory is time-invariant if, given that a system obeys its laws, a ‘time-reversed projection’ of that system would also obey them.
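Time-invariance can even be checked numerically. The following is a minimal sketch (not from the article; all numerical values are invented for illustration): a two-body system is integrated forward under Newtonian gravity, all velocities are then flipped, and integrating again returns the system to its initial state, showing that the ‘time-reversed film’ satisfies the very same equations.

```python
# Sketch (not from the article): checking time-reversal invariance of
# Newtonian gravity for a two-body system. All values are invented.

def accelerations(pos, masses, G=1.0):
    """Gravitational acceleration on each body (Newton's law of gravity)."""
    (x1, y1), (x2, y2) = pos
    dx, dy = x2 - x1, y2 - y1
    r3 = (dx * dx + dy * dy) ** 1.5
    return [(G * masses[1] * dx / r3, G * masses[1] * dy / r3),
            (-G * masses[0] * dx / r3, -G * masses[0] * dy / r3)]

def step(pos, vel, masses, dt):
    """One velocity-Verlet step (a scheme that is itself time-symmetric)."""
    acc = accelerations(pos, masses)
    pos = [(x + vx * dt + 0.5 * ax * dt * dt, y + vy * dt + 0.5 * ay * dt * dt)
           for (x, y), (vx, vy), (ax, ay) in zip(pos, vel, acc)]
    new_acc = accelerations(pos, masses)
    vel = [(vx + 0.5 * (ax + bx) * dt, vy + 0.5 * (ay + by) * dt)
           for (vx, vy), (ax, ay), (bx, by) in zip(vel, acc, new_acc)]
    return pos, vel

masses = [1.0, 1.0]
pos0 = [(-0.5, 0.0), (0.5, 0.0)]
vel0 = [(0.0, -0.6), (0.0, 0.6)]

# Run the system forward, then flip every velocity ("play the film
# backwards") and run it again: the initial positions are recovered,
# because the time-reversed trajectory obeys the same equations.
pos, vel = pos0, vel0
for _ in range(1000):
    pos, vel = step(pos, vel, masses, dt=0.001)
vel = [(-vx, -vy) for vx, vy in vel]
for _ in range(1000):
    pos, vel = step(pos, vel, masses, dt=0.001)

err = max(abs(a - b) for p, q in zip(pos, pos0) for a, b in zip(p, q))
print(err < 1e-9)  # prints True: the reversed run retraces the original
```

Nothing in the equations themselves marks which of the two runs is ‘forwards’ and which is ‘backwards’; that is precisely the point made in the next paragraph.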

Why can the time-invariance of fundamental physical theories constitute a problem for the notion of causality? The main reason is that the equations of a time-invariant theory cannot distinguish the future from the past: both *directions of time* are equivalent from their ‘point of view’. But the relation of cause-and-effect is essentially asymmetric: the boiling of water is the effect of its heating, not vice versa. Though time reversibility had been noticed during the 19th century (for example, by Laplace in his famous image of the ‘demon’ for whom the future and the past would be as real as the present, as well as in the discussion of Boltzmann’s entropy law in thermodynamics), it was Bertrand Russell (1918) ^{2} who pointed out that nothing in the equations of fundamental physical theories can be taken as a representation of the (time-asymmetric) causality relation, and hence that we have no reason to assert that such a relation exists in physical reality. The problems that quantum mechanics posed for the classical notion of causality added to the general scepticism amongst physicists and philosophers of physics towards that traditional concept.

But causal notions can still be shown to play an important role in physical knowledge, as Mathias Frisch has recently argued ^{3}. Frisch starts by noticing that a physical theory cannot be reduced to its equations (or to the set of its abstract models, as some philosophers of the ‘semantic’ approach would have it), i.e., it is not identical to its *mathematical* part. The theory is rather the claim that that piece of mathematics can be *applied* to some empirically real systems. So perhaps it is in the *empirical interpretation* of the formulae, or in the kind of *processes we have to perform in order to make empirical predictions* out of those equations, that we can find the ‘niche’ of causal notions within our theories. The latter, in particular the way in which we employ empirical data and theory to make inferences about non-observed phenomena, is Frisch’s preferred locus. He takes the example of a radiating electromagnetic wave: how do we *infer*, from the observation of certain waves across a very limited region of space and time, the existence of the source of that radiation (e.g., a star)? Because of time reversibility, Maxwell’s equations alone do not allow us to distinguish between a process in which the star emits a light wave that spreads out towards the rest of the universe, and the opposite process, in which a wave comes from everywhere in space in a co-ordinated way and collapses simultaneously into, and is absorbed by, the star. So what is the *premise* that allows us to infer that we are observing (e.g., when examining spectra taken simultaneously from different telescopes) several light waves *caused by a star*, instead of several identical waves emitted by our telescopes that will collapse in the future at the same place within our galaxy?
Frisch points to Reichenbach’s principle of common cause ^{4}: two events that are statistically correlated but causally unrelated must have a common cause. There is a correlation between the observations from different and causally unrelated telescopes (they observe the same spectrum). To explain it, we must choose between the hypothesis that those spectra come from the same source, and the hypothesis that they are caused by some coincidence in the particular histories of each individual telescope, the astronomers, etc., a coincidence that makes them emit independent waves that will hit exactly the same point in the galaxy many years later. The first hypothesis demands that we assume a statistical distribution of events that is much simpler:

“(the first process) appears to be normal and entirely to be expected, since the correlations among the disturbances can be explained by their common cause. (The reversed) process, by contrast, seems ‘contrived’, ‘mysterious’ or ‘improbable’, since the correlations do not have a common cause. A causal representation of the phenomena, thus, can explain why we observe diverging waves in nature but not their temporal inverse (perfectly converging waves) even though both kinds of process are compatible with the dynamical laws” (Frisch, 2012, p. 331).

So it is the *embedding* of our time-reversible equations within a *causal framework* not contained in the equations themselves that allows us to use our statistical data to make predictions that can later be empirically confirmed. Causes, after all, seem to have been lost only for a while.
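Reichenbach’s common-cause reasoning can be illustrated with a toy simulation (a sketch invented for this post’s purposes, not taken from Frisch’s paper; all numbers are arbitrary): the records of two ‘telescopes’ driven by one common source are strongly correlated, while two causally unrelated sources produce essentially uncorrelated records.

```python
# Toy illustration (not from Frisch's paper): Reichenbach's common-cause
# principle. Two "telescopes" share a common source in scenario A and are
# causally unrelated in scenario B; all numbers are invented.
import random

random.seed(42)

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

N = 10_000
# Scenario A: one common cause (the star's fluctuating signal) plus
# independent instrument noise in each telescope.
star = [random.gauss(0, 1) for _ in range(N)]
tele1 = [s + random.gauss(0, 0.3) for s in star]
tele2 = [s + random.gauss(0, 0.3) for s in star]

# Scenario B: two causally unrelated sources, one per telescope.
tele3 = [random.gauss(0, 1) for _ in range(N)]
tele4 = [random.gauss(0, 1) for _ in range(N)]

print(correlation(tele1, tele2))  # close to 0.9: a common cause at work
print(correlation(tele3, tele4))  # close to 0: no common cause
```

The ‘coincidence’ hypothesis would require the unrelated records of scenario B to line up exactly, which is why the common-cause hypothesis is the simpler statistical assumption.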

## References

1. Zamora Bonilla, Jesús (2013), “¿Puede la ciencia explicarlo todo?”, *Investigación y Ciencia*, 436, January, 50-51. http://www.investigacionyciencia.es/investigacion-y-ciencia/numeros/2013/1/puede-la-ciencia-explicarlo-todo-10700
2. Russell, Bertrand (1918), “On the notion of cause”, in *Mysticism and logic, and other essays*, New York, Longmans, Green & Co.
3. Frisch, Mathias (2012), “No place for causes? Causal skepticism in physics”, *European Journal for Philosophy of Science*, 2, 313-336.
4. Reichenbach, Hans (1949), *The Theory of Probability*, Berkeley: University of California Press.

## Comments

Nice post. The problem of reversibility, causality and the arrow of time is indeed very interesting.

Even in classical mechanics, microscopically reversible theories such as Newton’s give rise to an irreversible dynamics when the number of particles is taken to infinity. That is the phenomenology that arises from Boltzmann’s equation.

The problem is more complicated in quantum mechanics. The time evolution of an isolated system is given by Schrödinger’s equation, which is reversible. But if the system is measured, the evolution changes, producing a quantum jump. These jumps are not easy to understand, and are usually modelled by stochastic differential equations, which are irreversible. As an example, the dynamics of an excited atom is reversible until it decays, which is a random process; after that, things are more complicated.
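The random decay mentioned in the comment can be sketched as a memoryless stochastic process (a toy model invented for illustration; the decay rate and time step are arbitrary values, not data about any real atom):

```python
# Toy model (invented for illustration): the decay of an excited atom as a
# memoryless random process. In each small time step the atom decays with
# probability GAMMA * DT; survival times then follow an exponential
# distribution with mean 1 / GAMMA, and the process has no time-reversed
# counterpart (atoms do not spontaneously "un-decay").
import random

random.seed(0)
GAMMA = 0.5   # decay rate (arbitrary value)
DT = 0.01     # time step (arbitrary value)

def decay_time():
    """Simulate one atom until its (random) decay; return the elapsed time."""
    t = 0.0
    while random.random() > GAMMA * DT:  # the atom survives this step
        t += DT
    return t

times = [decay_time() for _ in range(20_000)]
mean_lifetime = sum(times) / len(times)
print(abs(mean_lifetime - 1 / GAMMA) < 0.1)  # mean lifetime ≈ 1/GAMMA
```

The ensemble behaves deterministically (an exponential decay law), even though each individual jump is random and irreversible.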

Of course, in quantum mechanics the concept of causality is also weirder. What makes the atom decay? Why, if I measure a spin-1/2 system, will I obtain +1/2 or -1/2?

“…the first hypothesis demands to assume a statistical distribution of events that is much simpler…”

OK, that’s true, but we are still talking about probability, about which belief is more probable.

And that is David Hume’s point: human beings seem unable to reach any certain knowledge of the world, only beliefs that are more or less probable.

Frisch’s cause is just a belief based on the habit of empirical observation. We have only a more probable relation, not certainty about the real cause.

It seems that the notion of cause has remained lost ever since Hume.