One of the most obvious differences between modern science and other kinds of knowledge, past and present, is its massive use of mathematics, and in particular its reliance on calculations based on numerical formulae (for there can be mathematics without numbers, as in set theory, topology, or many branches of algebra, but those parts of mathematical knowledge are less patently applied in science than calculus, analysis, statistics, or algebraic relations between equations, matrices, etc., all of them essentially connected to numbers).
But when we directly experience the world in front of us (or, for that matter, our inner world), we hardly encounter anything resembling numbers at all; so the highly sophisticated mathematical nature of modern science seems not to fit well with another of its most precious properties: its empirical character. In fact, the first (and few) philosophers who in ancient times preached the importance of numbers or geometry for our knowledge of the world (the Pythagoreans and some Platonists) were all of them strong metaphysicians, who tended to distrust the information captured by our senses, which basically reduces to qualitative data and observations.
So one of the deepest and most important philosophical problems related to modern science has always been to understand how it manages to combine, in such an efficient and productive way, the qualitative information provided by our senses and the mathematical information only understandable by our intellect. Until the nineteenth century, and probably beyond, philosophers and scientists of a rationalist or Platonic affiliation started from the idea that our mind has some more or less mysterious capacity for grasping the hidden numerical laws of the universe; but this idea has frequently proved an elusive one, since most presumed 'Platonic visions' ended up in the wastepaper bin as conjectures falsified by the data (though some histories of science tend to select only the successful 'visions', forgetting about the myriad of unsuccessful ones). Philosophers and scientists of the most staunchly empiricist lineage, instead, had systematically failed to make the jump from the sensorial to the mathematical, always waiting for someone to devise, at some time in the future, a purely empirical proof of the applicability of mathematics to the world.
The Representational Theory of Measurement (RTM), developed around the middle of the twentieth century, was, from the point of view of many empiricists, the long-awaited El Dorado, the solution to this centuries-old philosophical problem. Though not a very abstruse branch of mathematics in itself, RTM is nevertheless a considerably technical branch of philosophy, which has kept familiarity with it rather marginal among aficionados of the history of philosophy. One of the dreams of the positivists since the last decades of the nineteenth century (the generation of Ernst Mach and his 'empirio-criticism') had been that of 'reconstructing' the complete edifice of scientific knowledge from the most elementary items of knowledge of the world, which, according to them, consisted in what Bertrand Russell had called 'sense data'.
By the turn of the century, some German physicists and mathematicians, like Helmholtz and Hölder, as well as the British physicist Norman Campbell, had established a few formal conditions for the applicability of quantitative concepts to empirical data, but it was clear that those conditions were still too demanding compared with what could really be derived from the data.
The members of the most famous branch of twentieth-century positivism (the so-called 'Vienna Circle') did not deal much with this question, apart from Carl Hempel's efforts to offer a classification of scientific concepts according to their degree of 'quantitativeness', so to speak. There are, according to Hempel, qualitative concepts (represented by predicates like "…is green" or "…is female"), comparative concepts (like "…is warmer than…" or "…is denser than…"), and finally quantitative concepts (those representable by numbers that can be subjected to algebraic operations, like "weighs x kg" or "is at x ºC"). Amongst the latter, different measurement scales can be distinguished, according to which operations make sense with the numbers involved: e.g., we can say that something that weighs 100 kg is twice as heavy as something that weighs 50 kg, but it makes no sense to say that something at 10 ºC is twice as hot as something at 5 ºC; what does make sense is to say that the difference between something at 100 ºC and something at 80 ºC is twice the difference between something at 30 ºC and something at 20 ºC.
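Hempel's distinction between scale types can be made vivid with a small numerical sketch (my own illustration; only the standard kg-to-pound and Celsius-to-Fahrenheit conversions are assumed). A statement is meaningful on a scale only if it survives every admissible change of units:

```python
# Sketch: why "twice as heavy" is meaningful but "twice as hot" (in ºC) is not.

def kg_to_lb(x):                 # ratio scale: admissible map g(x) = k·f(x), k > 0
    return x * 2.2046226218

def c_to_f(c):                   # interval scale: admissible map g(x) = a·f(x) + b
    return 1.8 * c + 32

# Ratios survive a change of units on a ratio scale:
assert abs(kg_to_lb(100) / kg_to_lb(50) - 100 / 50) < 1e-9

# ...but not on an interval scale: 10 ºC / 5 ºC = 2, while the same two
# temperatures in Fahrenheit give 50 / 41 ≈ 1.22:
assert c_to_f(10) / c_to_f(5) != 2

# Ratios of *differences*, however, are preserved on an interval scale,
# because the additive constant b cancels out:
r_celsius = (100 - 80) / (30 - 20)
r_fahrenheit = (c_to_f(100) - c_to_f(80)) / (c_to_f(30) - c_to_f(20))
assert abs(r_celsius - r_fahrenheit) < 1e-9
```

The difference between the two cases is exactly Hempel's point: which numerical statements are meaningful depends on which transformations the scale allows.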
But what was still missing was some theory that allowed passing 'directly' from data that might be described in a purely qualitative way (including qualitative, i.e. non-metrical, comparisons) to numerical, non-arbitrary assignments: a 'philosopher's stone' that could transform quality into quantity without resorting to some kind of 'transcendental' or 'metaphysical' intuition of a 'world written in the language of mathematics' (to use Galileo's expression). This step (or something that many philosophers of science have taken as such) was offered by a young Patrick Suppes in 1951, with his paper "A set of independent axioms for extensive quantities", a paper that initiated a boom of work on 'measurement theory' in the following decades and turned Suppes into the intellectual leader of the 'semantic approach' to philosophy of science.
The problem Suppes tackles is the following: suppose we have a collection of empirically given entities (they can be objects, processes, or whatever), on which two physical operations can be defined: comparison and combination. Comparison consists in determining whether one of the objects has a certain quality in a higher, lower, or equal degree than another; for example, it can consist in putting two rods side by side and seeing whether one is longer or shorter, or whether their ends coincide. The results of this comparison can be described by a binary predicate Q, so that "aQb" means, for example, "a is at least as long as b"; if it happens that aQb and bQa, this amounts to saying that a and b are equally long. Combination consists in the physical operation of creating a new object out of two given ones, for example adjoining two rods in order to create a longer one, or placing two weights on the same plate of a weighing scale; if a and b are two such objects, a*b will represent their combination. The question is: what set of facts about these two operations is sufficient to prove that a number can be assigned to each object, so that this number represents a quantitative property (an 'extensive quantity') that can be subjected to the usual mathematical operations we perform with length, weight, electric charge, etc.? Suppes presents the following seven facts as the axioms of a theory that allows us to do just that:
1) Q is transitive (i.e., if xQy and yQz, then xQz, for every x, y and z in the empirical domain we are considering)
2) Any two elements x and y of the domain can be combined through *
3) The operation * is associative relative to Q: (x*y)*zQx*(y*z), for any x, y and z.
4) For any x, y and z, if xQy, then x*zQz*y
5) For any x and y, if it is not the case that xQy, then there is a z such that yQx*z and x*zQy (i.e., x can be 'completed' with some z so as to match y)
6) For any x and y, it is not the case that xQx*y (combining always yields something strictly greater)
7) For any x and y, if xQy, then there is a natural number n such that nyQx
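Before seeing what these axioms buy us, it may help to watch them at work in a toy model (my own illustration, not from Suppes's paper): objects are 'rods' carrying a hidden numerical length that merely simulates the physics, while the checks themselves use nothing but the qualitative Q and *. The sketch spot-checks sampled instances of axioms 1, 3, 4 and 7 (the existential axioms 2 and 5 are not sampled):

```python
# Toy model: rods with a hidden integer length; the axiom checks below use
# only the qualitative comparison Q and the combination *, never the number
# itself. A sanity check on samples, not a proof.
import itertools
import random

random.seed(0)

class Rod:
    def __init__(self, length):
        self._length = length                 # hidden "physical" length
    def Q(self, other):                       # "self is at least as long as other"
        return self._length >= other._length
    def __mul__(self, other):                 # combination: lay rods end to end
        return Rod(self._length + other._length)

rods = [Rod(random.randint(1, 100)) for _ in range(5)]

for x, y, z in itertools.product(rods, repeat=3):
    if x.Q(y) and y.Q(z):                     # axiom 1: transitivity of Q
        assert x.Q(z)
    assert ((x * y) * z).Q(x * (y * z))       # axiom 3: associativity via Q
    if x.Q(y):                                # axiom 4: monotone commutativity
        assert (x * z).Q(z * y)

# Axiom 7 (Archimedean): if xQy, some n-fold combination of copies of y is
# at least as long as x; n is found by counting copies, never by measuring.
x, y = (rods[0], rods[1]) if rods[0].Q(rods[1]) else (rods[1], rods[0])
n, ny = 1, y
while not ny.Q(x):
    ny, n = ny * y, n + 1
print("Archimedean witness: n =", n)
```

Note that the loop at the end does exactly what axiom 7 demands and no more: it counts equal copies of an object until the combination overtakes x.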
As is easy to check, nothing in these axioms entails the use of numbers… save in the case of the last one, where the expression 'ny' simply means 'the iterated combination of n objects y1, y2, …, yn such that for every i between 1 and n, yiQy and yQyi'. I.e., the only 'mathematical' operation this system of axioms demands is that we be able to count (not to 'measure') equal copies of an object. What Suppes was able to prove from these assumptions is the following:
Representation theorem: If a system of objects together with the relation Q and the operation * satisfies axioms 1 to 7, then there exists a function f such that:
A) f assigns a positive real number to each object
B) for every x and y, xQy if and only if f(x) ≥ f(y)
C) for every x and y, f(x*y) = f(x)+f(y)
D) for every other function g that has the properties B and C, there is a real number k such that, for every x, g(x)=kf(x)
This is called a representation theorem because it allows a function like f to represent with numbers the qualitative relations condensed in Q and * (or, in a more technical sense, to represent the system <D, Q, *>, in which D is the domain of objects, by means of a homomorphism, f, into the system <R, ≥, +>). The last of these four properties (D) establishes the kind of scale f is: in this case, a ratio scale, which means that it is equivalent to represent, say, lengths through units like meters, kilometers, feet or miles: all that is needed to pass from one scale to another is a number that establishes the ratio between the units of measurement.
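The idea behind the theorem can also be gestured at computationally. In this sketch (my own construction for illustration, not Suppes's actual proof), a representing function is approximated by pure counting: fix a small 'unit' object u and let f(x) be the largest n such that n concatenated copies of u are still Q-below x. Order (clause B) is then preserved exactly, and additivity (clause C) holds up to one counting step, with the error shrinking as the unit gets finer:

```python
# Sketch: approximating the representing function f by counting copies of a
# unit object, using only the qualitative operations Q and *.

class Rod:
    def __init__(self, length):
        self._length = length                 # hidden; only Q and * consult it
    def Q(self, other):                       # "self is at least as long as other"
        return self._length >= other._length
    def __mul__(self, other):                 # combination: lay rods end to end
        return Rod(self._length + other._length)

def f(x, unit):
    """Largest n such that n concatenated copies of `unit` are Q-below x."""
    n, acc = 0, unit                          # acc always holds n+1 copies
    while x.Q(acc):
        n += 1
        acc = acc * unit
    return n

u = Rod(10)                                   # the chosen unit
a, b = Rod(1255), Rod(2508)

# Clause B (order): f preserves Q.
assert b.Q(a) and f(b, u) >= f(a, u)

# Clause C (additivity): exact up to one counting step.
assert abs(f(a * b, u) - (f(a, u) + f(b, u))) <= 1
print(f(a, u), f(b, u), f(a * b, u))          # prints: 125 250 376
```

Choosing a different unit rescales every count by (roughly) the same factor, which is clause D's sense in which f is unique only up to a ratio.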
Suppes' result seemed to open the door to the empiricist paradise: we finally had a totally clear sense in which it could be said that numbers in science (magnitudes) do nothing but represent, in a mathematically tractable way, information that can be defined in merely empirical, qualitative terms. But this was only the rise of the representational theory of measurement. As happens with everything in philosophy, the fall was waiting just around the corner.
Diez, J.A., 1997, “A Hundred Years of Numbers. An Historical Introduction to Measurement Theory 1887–1990—Part 1”, Studies in History and Philosophy of Science, 28(1): 167–185.
Hempel, C.G., 1952, Fundamentals of concept formation in empirical science, International Encyclopedia of Unified Science, Vol. II. No. 7, Chicago and London: University of Chicago Press.
Suppes, P., 1951, “A set of independent axioms for extensive quantities”, Portugaliae Mathematica, 10(4): 163–172.