Closer to the truth (1)

Probably the most common assumption about scientific knowledge amongst the general public, and certainly amongst a substantial percentage of philosophers of science, is that science attempts to find out the truth about the world (just as many other institutions or professions do, like the courts, the police, or anyone else who needs to look for information). This is one way of describing what we can call scientific realism (though not the only one). Philosophers, however, are famous for their scepticism, and this naïve or intuitive realism has been the subject of much criticism from many different angles; we have recently seen here, for example, one of the most recent proposals to replace ‘truth’ as the goal of science with other values, such as ‘understanding’, though there are other approaches even less congenial to the claim that science pursues something like ‘an objective truth’.

Photo: Michael Carruth / Unsplash

One of the most evident reasons (at least since about a century ago) why this could reasonably be doubted is the fact that scientists have very often been wrong in their theories about the world. For example, by 1900 it was perhaps possible to assume that Newtonian physics was ‘the last word’ in the realm of material bodies, a perfect universal description of how matter, forces and energy behave at all scales. But just a few decades later, scientists had discovered that at very high speeds, at very small distances, or with very big masses, Newton’s laws were just more or less good approximations. Newtonian dynamics was hence replaced by Einstein’s theory of relativity and by the still more mysterious quantum mechanics, which, in their turn, were mutually incompatible, so that very likely at least one of them was as false as classical physics. A view of the history of science as a succession of untrue theories became the standard creed. This was even more dramatic for those (and there were many, both in philosophy and in science) who accepted Karl Popper’s proposal (known as ‘falsificationism’): not only are all scientific theories very probably false and bound to be superseded by other theories, but real scientists have the methodological obligation of trying to prove that their theories are false, deriving from them empirical predictions that have the chance of not being fulfilled, thereby showing that the theories are incorrect.

Popper himself suggested, almost thirty years after the original publication in German of his Logik der Forschung (1934, published in English as The Logic of Scientific Discovery), an interesting way of ‘having the falsificationist cake and eating its realist filling’. As other authors later expressed it, modifying a quote from Orwell’s famous novel Animal Farm, the point is that “all scientific theories are false, but some of them are falser than others”. Or, stated otherwise, false theories can perhaps be ordered according to how far from the truth they are. (One important technical point: the idea is not ‘how far from being true’ a theory is, but rather ‘how far from being the whole truth’ it is; this move would even allow us to describe some true theories as closer to, or further from, the whole truth than others). Popper baptized this concept of ‘closeness to the whole truth’ with an existing term, verisimilitude, which already possessed a more or less clear meaning (something like ‘credibility’, or ‘appearance of truth’) different from the new, more technical one; hence other authors renamed it in the following decades with the more appropriate neologism truthlikeness.

Popper offered a quantitative definition of verisimilitude (which I shall set aside for now), based on a more primitive notion of probability, and also a simpler, qualitative one, which is the one I will discuss here (by the way, the numerical definition ended up facing problems as dire as those we shall see the qualitative one also has). We have to recognise, be that as it may, that Popper’s qualitative (or comparative) definition has a lot of logical and commonsensical appeal. In the first place, Popper invites us to interpret a theory A as a conjunction of statements about the world, together with all other statements that logically follow from it (or, as logicians would say, A would be a set of propositions closed under the relation of logical consequence). In the second place, he also distinguishes two particular sets of propositions or statements (say, about one particular scientific field, which is the object of the theories we want to compare): T, which is the set of all true statements, and F, which obviously is the set of all the false ones. (By the way, T is a ‘theory’ in the sense just defined, but F is not: from the statements in F many true propositions logically follow, and hence F does not contain all the consequences of its statements and only those consequences).

Having these concepts in mind, we say that of two scientific theories, A and B, the former is more verisimilar (or ‘closer to the -whole- truth’) than the latter if and only if all the false consequences (‘mistakes’) of A are also consequences of B, and all the true consequences of B (‘successes’) are also consequences of A (and the same is not true of B in relation to A). More technically, A is more verisimilar than B if and only if A∩F ⊆ B∩F and B∩T ⊆ A∩T (and at least one of the two inclusions is strict).
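The definition can be made concrete with a minimal sketch in Python, under purely illustrative assumptions: a toy language of two atoms, p and q, where a proposition is identified with the set of possible worlds at which it holds, and the ‘actual’ world is stipulated arbitrarily (the names `consequences` and `more_verisimilar` are my own choices, not Popper’s).

```python
from itertools import combinations

# Toy model: a world is a truth assignment to the atoms p and q,
# represented as the set of atoms true in it.
WORLDS = [frozenset(w) for w in (set(), {"p"}, {"q"}, {"p", "q"})]
ACTUAL = frozenset({"p", "q"})  # stipulation: in fact p and q are both true

# Identify each proposition with the set of worlds at which it holds.
PROPS = [frozenset(c) for r in range(len(WORLDS) + 1)
         for c in combinations(WORLDS, r)]

T = {p for p in PROPS if ACTUAL in p}      # all true propositions
F = {p for p in PROPS if ACTUAL not in p}  # all false ones

def consequences(worlds):
    """The deductive closure of a theory whose models are `worlds`:
    it entails exactly the propositions holding at all of its models."""
    return {p for p in PROPS if worlds <= p}

def more_verisimilar(A, B):
    """Popper's comparison: A's mistakes are among B's, B's successes
    are among A's, and at least one of the inclusions is strict."""
    return (A & F <= B & F and B & T <= A & T
            and (A & F < B & F or B & T < A & T))

# A asserts the whole truth (p and q); B asserts only p.
A = consequences(frozenset([ACTUAL]))
B = consequences(frozenset([frozenset({"p"}), ACTUAL]))
print(more_verisimilar(A, B))  # → True
```

Here both toy theories are true, which is exactly the kind of case the definition handles well: A entails every true consequence of B plus some more, and neither entails any falsehood.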

This definition, besides being rather intuitive, had the additional advantage of providing some kind of logical support to Popper’s falsificationist methodology, in a clever and elegant way, as we already saw in this entry. In a nutshell, even if both theories A and B have been empirically falsified, if all experiments or observations that falsify A also falsify B, but A still makes some correct predictions that B does not make, then this can be taken as a confirmed prediction of the meta-hypothesis according to which A is closer to the truth than B.

Unfortunately for Popper, just one decade after he published these ideas it was independently and almost simultaneously demonstrated by two authors (David Miller and Pavel Tichý) that no false theories A and B can stand in the logical relation of one of them being closer to the truth than the other (in Popper’s qualitative sense). Here is the proof:

Suppose that A and B are both false, and that A contains some true proposition (a) not contained by B. Let f be any falsehood entailed by A. Since A entails both a and f, their conjunction, a&f, is a falsehood entailed by A, and so part of A∩F. If a&f were also part of B∩F, then B would entail both a and f, and in particular a, contrary to the assumption. Hence, a&f is in A’s falsity content and not in B’s. So A’s truth content cannot exceed B’s without A’s falsity content also exceeding B’s.

Suppose now that B’s falsity content (B∩F) exceeds A’s (i.e., A∩F). Let g be some falsehood entailed by B but not by A, and let f, as before, be some falsehood entailed by A. The sentence f→g is a truth (since f is false), and since it is entailed by g, it is in B’s truth content (i.e., B∩T). If it were also in A∩T, then both f and f→g would be consequences of A, and hence so would g, contrary to the assumption. Thus, A’s truth content lacks a sentence, f→g, which is in B’s truth content. So B’s falsity content cannot exceed A’s falsity content without B’s truth content also exceeding A’s truth content.
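The Miller–Tichý result can also be checked exhaustively in a tiny propositional model, as a sketch under illustrative assumptions (a two-atom language and an arbitrarily stipulated actual world; all names are my own): no pair of consistent false theories ever satisfies Popper’s comparison.

```python
from itertools import combinations

# A world is a truth assignment to atoms p, q; a proposition is the
# set of worlds where it holds; the actual world is stipulated.
WORLDS = [frozenset(w) for w in (set(), {"p"}, {"q"}, {"p", "q"})]
ACTUAL = frozenset({"p", "q"})
PROPS = [frozenset(c) for r in range(len(WORLDS) + 1)
         for c in combinations(WORLDS, r)]
T = {p for p in PROPS if ACTUAL in p}      # true propositions
F = {p for p in PROPS if ACTUAL not in p}  # false propositions

def consequences(worlds):
    # A theory's deductive closure: all propositions true at its models.
    return {p for p in PROPS if worlds <= p}

def more_verisimilar(A, B):
    # Popper's qualitative definition, with the strictness condition.
    return (A & F <= B & F and B & T <= A & T
            and (A & F < B & F or B & T < A & T))

# Every consistent theory corresponds to a non-empty set of worlds;
# it is false exactly when the actual world is not among its models.
false_theories = [consequences(frozenset(ws))
                  for r in range(1, len(WORLDS) + 1)
                  for ws in combinations(WORLDS, r)
                  if ACTUAL not in ws]

# Exhaustive check: no false theory is more verisimilar than another.
print(any(more_verisimilar(A, B)
          for A in false_theories for B in false_theories))  # → False
```

Of course, the exhaustive check only covers this toy model; the proof above shows the result holds for any language closed under conjunction and the conditional.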

In the next entries we shall examine some of the ideas other authors came up with since the late 70s in order to rescue (at least) the logical coherence of the idea of some false theories being closer to the truth than others.

References

Miller, D., 1974a, “Popper’s Qualitative Theory of Verisimilitude”, The British Journal for the Philosophy of Science, 25: 166–177.

Popper, K. R., 1963, Conjectures and Refutations, London: Routledge.

Tichý, P., 1974, “On Popper’s Definitions of Verisimilitude”, The British Journal for the Philosophy of Science, 25: 155–160.
