The rise and fall of the representational theory of measurement (part 3)

As we saw in the previous entries (1, 2), the representational theory of measurement (RTM), developed mainly around the middle of the 20th century, was one of the main warhorses of the then-vigorous positivist ideal of scientific knowledge. According to that theory, the application (and the applicability) of numbers and other mathematical concepts to the world did not depend on any kind of Platonic intuition of eternal forms, but was simply a way of summarising, in an economical fashion, a large set of purely empirical data. Ideas such as mass, temperature or length could, in this way, be reduced to a collection of fundamentally qualitative measurement procedures (a slightly older, but tightly related, vision of scientific concepts had received the name of operationalism). The fact is, however, that nowadays almost nobody accepts RTM as the right theory about the nature of measurement. Still, since it basically consists in a collection of mathematical theorems (remember: the ‘representation’ and ‘uniqueness’ theorems mentioned in the previous entries), and these theorems are correct in themselves, they remain useful for certain analytical purposes; for example, to offer a classification of measurement scales and of the differences and relationships between them.
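To illustrate what the uniqueness theorems deliver, here is a compact (and somewhat simplified) summary of the standard classification of scale types by the transformations that leave their empirical content untouched; the particular examples in the last column are mine, not drawn from any specific RTM text:

```latex
\begin{tabular}{lll}
Scale type & Admissible transformations & Typical example \\
\hline
Ordinal  & any strictly increasing $f$: $x \mapsto f(x)$     & Mohs hardness \\
Interval & positive affine maps: $x \mapsto ax + b$, $a > 0$ & Celsius temperature \\
Ratio    & similarity maps: $x \mapsto ax$, $a > 0$          & mass, length \\
Absolute & the identity only: $x \mapsto x$                  & counting \\
\end{tabular}
```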


Why did RTM start to be rejected as the proper philosophical understanding of measurement? I guess that the main reason was that it tried to offer a definition of measurement as a deeply a-theoretical operation, whereas by the 1960s and 1970s almost every relevant philosopher of science had accepted that no scientific concept, or at least no important scientific concept, is ‘theory-free’. All concepts, even those that allow us to describe our empirical observations in the most ‘neutral’ way, are to some extent ‘theory-laden’. Patrick Suppes himself, who was the main figure in the history of RTM, ended up accepting that what we have in science is a kind of open-ended stratification of models, the most basic being what he called ‘data models’, i.e., something that is already a kind of ‘written-out’ description of observations (e.g., showing a collection of points as a continuous curve), expressed in terms to which increasingly more abstract ‘theoretical models’ can be applied. Some hypotheses about the formal connections, homomorphisms and ‘translations’ between the concepts employed at the level of each stratum are necessary, but usually the concepts employed in a data model must already be strong and abstract enough to support statistical manipulation (e.g., assuming that the points are observations of a statistical variable that obeys a precise distribution).
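As a minimal, purely illustrative sketch of the kind of step involved in building a data model, one may think of turning a finite set of observed points into a continuous curve plus an explicit noise assumption; the dataset, the quadratic form of the curve and the Gaussian error model below are arbitrary choices of mine, not anything taken from Suppes’ own work:

```python
import numpy as np

# Raw observations: a finite collection of (x, y) points -- the 'data'.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.1, 0.9, 4.2, 8.8, 16.3, 24.9])

# Data model: treat the points as samples of a smooth curve (here a
# quadratic, an arbitrary choice) plus Gaussian observational noise,
# and summarise them by the least-squares fit.
coeffs = np.polyfit(x, y, deg=2)        # fitted curve parameters
fitted = np.polyval(coeffs, x)          # the continuous-curve summary at x
residuals = y - fitted                  # what the curve leaves unexplained
sigma_hat = residuals.std(ddof=3)       # estimated spread of the noise

print("curve coefficients:", coeffs)
print("estimated noise level:", sigma_hat)
```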

A similar reason to doubt that RTM could be taken as a deep philosophical understanding of the relation between experience and mathematics was the fact that, after all, the very axioms from which the theorems were derived could not be identified with ‘collections of data’ in any realistic way, but were, instead, very strong idealisations. For example, the Archimedean axiom (i.e., that for any x and y, if xQy, then there is a natural number n such that nyQx, where Q represents ‘being at least as big as’ and ‘ny’ stands for ‘a concatenation of n exact copies of y’) is obviously not empirically confirmable by any finite amount of data. It is, hence, a conjecture, a theoretical assumption. In a nutshell, what the Archimedean axiom does is deny the existence of objects that are infinitely small, or that contain a property in some infinitesimal but positive amount. Of course, if the axiom were false, calculations with the relevant variable would have to be made with the help of some kind of non-standard analysis.
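In symbols, the axiom as stated above (keeping the Q notation) reads:

```latex
\forall x \, \forall y \; \bigl( \, xQy \;\Rightarrow\; \exists n \in \mathbb{N} : (ny)\,Q\,x \, \bigr)
```

So, if some object y carried only an infinitesimal amount of the relevant property, no finite number of concatenated copies of it would ever match x, and the axiom would fail.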

A related problem, one that has plagued the efforts of representational theorists until now, is the recalcitrant inability of the theory to offer a natural place for the obvious fact that almost no measurement is error-free. For example, we can measure object A against object B and observe that they are equal in length, but if we repeat the observation a number of times, it will usually happen that sometimes B seems slightly longer, and sometimes slightly shorter. Which ‘data’ should we include in the empirical structure that represents our measurements? One possible approach to this problem would be to interpret the empirical structure as something that ‘emerges’ out of a specific probability space, but, as some of the main champions of RTM recognized,

“from a fundamental measurement perspective, this approach is not fully satisfactory because it assumes as primitive a numerical structure of probabilities and thus places the description of randomness at a numerical, rather than qualitative, level.” (Luce and Narens, 1994, 227)1.

That is, introducing error as a fundamental concept required assuming that there is a quantitative concept (probability) which is not reducible to qualitative, purely empirical operations, thereby ruining along the way the very goal of RTM.
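A toy simulation of the difficulty (the specific lengths and the Gaussian noise model are arbitrary illustrations of mine, not anything proposed by Luce and Narens):

```python
import random

random.seed(0)

# Two rods with exactly the same 'true' length.
LENGTH_A = LENGTH_B = 10.0

def compare(noise=0.05):
    """One noisy pairwise comparison: which rod *appears* longer?"""
    observed_a = LENGTH_A + random.gauss(0, noise)
    observed_b = LENGTH_B + random.gauss(0, noise)
    if observed_a > observed_b:
        return "A looks longer"
    if observed_b > observed_a:
        return "B looks longer"
    return "they look equal"

trials = [compare() for _ in range(1000)]
print({verdict: trials.count(verdict) for verdict in set(trials)})
# Roughly half the comparisons favour A and half favour B, so no single
# qualitative relation 'at least as long as' fits all the data at once --
# the pressure that pushed RTM theorists towards a probabilistic primitive.
```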

Curiously, the dethronement of the operational understanding of measurement has been a kind of ‘joint venture’ in which two of the most important, but conflicting, contemporary views of the nature of scientific knowledge have shared a common interest. I am referring to the realist and the constructivist interpretations of scientific theories, respectively. According to the former, theories are conjectures that try to capture the true structure of the world, and physical (or any other kind of) magnitudes would just be the real properties that real systems have, or at least the ones our own provisional concepts would tend to reflect in the long run. The progress of science would consist, hence, in the discovery of those real magnitudes and their true properties. Of course, these properties have a mathematical structure (by the way, the expression ‘mathematical structure’ would be redundant), but, at least according to most contemporary realists, it is not something we can capture by a kind of a priori intuition; rather, our best theories are just our best conjectures about what the mathematical structure of the world is. One good example of this realist approach to measurement is Michell (2005)2. Constructivism, instead, denies that the world has any predefined structure that we are disentangling little by little; concepts and theories are our own creations, which allow us to systematise, better and better, a radically heterogeneous set of experiences, observations, experiments, technologies, etc. Constructivism shares a big portion of the anti-Platonist flavour of operationalism, but sees the progression of knowledge as theory-guided more than experience-guided. One of the best recent illustrations of this view of measurement, applied to the history of the concept of temperature, is Chang (2004)3.

References

  1. Luce, R.D., and L. Narens, 1994, “Fifteen Problems Concerning the Representational Theory of Measurement”, in P. Humphreys (ed.), Patrick Suppes: Scientific Philosopher, Vol. 2, 219–249.
  2. Michell, J., 2005, “The logic of measurement: A realist overview”, Measurement, 38(4): 285–294.
  3. Chang, H., 2004, Inventing Temperature: Measurement and Scientific Progress, Oxford: Oxford University Press.
