The rise and fall of the representational theory of measurement (2)


In the previous entry we saw how the so-called ‘Representational Theory of Measurement’ appeared to solve one of the deepest problems in the empiricist account of scientific knowledge: how to justify the use of numbers in science (and the calculations that use them), given that all knowledge was supposed to be grounded in empirical data, and these were supposed to be purely qualitative. The solution consisted in the proof of a ‘representation theorem’, which was basically divided into a proof of existence (i.e., that there exists some function –a homomorphism– assigning a number to every object in a domain that fulfils some purely qualitative conditions), and a proof of uniqueness (i.e., that any other function satisfying the same formal properties is related to the former by some formula –for example, a proportional change of scale, like the one transforming measures in kilograms into measures in pounds). In particular, in the case of ‘extensive measurements’ (those for which the sum is a well defined operation), it must be the case that:

– (B) for every x and y, xQy if and only if f(x) ≥ f(y)

– (C) for every x and y, f(x*y) = f(x) + f(y)

– for every other function g that has properties B and C, there is a real number k such that, for every x, g(x) = kf(x)

where Q means ‘…is at least as big as…’ and * is a combination operation. Hence, according to this radically empiricist account, to ‘measure’ would simply consist in representing (or ‘summarising’) some qualitative relations with the help of numbers.
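These conditions can be made concrete with a minimal sketch. Assume, purely for illustration, a small domain of rods identified by their lengths in centimetres, with * as physical concatenation (all the names and numbers here are hypothetical, not from the text):

```python
# Hypothetical example: rods modelled by their lengths, f being the
# assignment of a length (in cm) to each rod. The qualitative relation Q
# is what direct comparison of the rods would deliver.

f = {"a": 5.0, "b": 3.0, "c": 2.0}

def Q(x, y):
    """Qualitative relation: x is at least as big as y."""
    return f[x] >= f[y]

def combined_length(x, y):
    """Length of the object obtained by concatenating rods x and y."""
    return f[x] + f[y]

# Condition B: xQy iff f(x) >= f(y)
assert all(Q(x, y) == (f[x] >= f[y]) for x in f for y in f)

# Condition C: f(x*y) = f(x) + f(y)
assert combined_length("a", "b") == f["a"] + f["b"]

# Uniqueness: any other admissible assignment g is a rescaling g = k*f,
# e.g. the change of units from centimetres to inches.
g = {x: v / 2.54 for x, v in f.items()}
k = g["a"] / f["a"]
assert all(abs(g[x] - k * f[x]) < 1e-9 for x in f)
```

The sketch is of course circular (Q is defined from the numbers it is supposed to represent); its only point is to display what the existence and uniqueness clauses assert.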

The representational theory was received with hope and joy in many sciences (particularly psychology and economics) that wished to become as ‘scientific’, ‘objective’ and ‘successful’ as those in which scientists were able to perform astonishing prodigies of precision and prediction, like physics. For example, economists dreamed of establishing a ‘measure’ of their old concept of ‘utility’: a numerical representation of individual preferences and their intensity. In principle, there seemed to be no great difficulty in translating preference relations into a numerical order: just assign two numbers U(x) > U(y) to options x and y whenever an agent prefers x to y, i.e., if she always chooses x when offered a choice between x and y. But the function U is totally arbitrary save for having to respect this ‘monotonicity’ condition: if a function U represents the preference of an agent for x over y and y over z (such that, for example, U(x)=10, U(y)=9 and U(z)=2), the functions that assign the values 100, 3 and 2, or 100, 99 and 75, to x, y and z respectively, work equally well, and so none of them can be called a ‘measure’.
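The arbitrariness of such an ordinal representation is easy to check mechanically. A small sketch, using exactly the three assignments mentioned above:

```python
# The three candidate utility functions from the text: all rank x > y > z,
# so all represent the same preferences equally well.
U1 = {"x": 10, "y": 9, "z": 2}
U2 = {"x": 100, "y": 3, "z": 2}
U3 = {"x": 100, "y": 99, "z": 75}

def same_ordering(U, V):
    """True if U and V rank every pair of options in the same way."""
    opts = list(U)
    return all((U[a] > U[b]) == (V[a] > V[b]) for a in opts for b in opts)

assert same_ordering(U1, U2)
assert same_ordering(U1, U3)
```

Any strictly increasing relabelling of the numbers passes the same check, which is why none of them deserves the name ‘measure’.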

Things are different when we consider the choices an agent makes under uncertainty, i.e., when she does not directly choose between options, but between ‘lotteries’, or combinations of options in which each option has a definite probability. Suppose, for example, that the agent prefers lottery A to lottery B, where A consists in option x occurring with a 50% probability, y with a 25% probability, and z with a 25% probability, and B consists in option x occurring with a 10% probability, y with an 80% probability, and z with a 10% probability. Perhaps there is some set of conditions that can be verified among these more complex preference comparisons, conditions that are purely qualitative (save for the trivial fact that they are preferences over entities whose description may contain quantities –probabilities, in this case–; but it is the concept of preference that we want to measure quantitatively, not the concepts of the things that are preferred), but that guarantee that the functions U we assign to the agent’s preferences are not as arbitrary as we saw in the previous paragraph, i.e., that they are ‘measurable’. Perhaps the greatest success of the representational theory of measurement was the proof, published in 1954 by the mathematician Leonard Savage (a former assistant of von Neumann), that this is exactly the case [1].
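To see what is at stake, here is a minimal sketch of the two lotteries above, under a purely hypothetical utility assignment (the numbers for U are assumptions, not anything determined by the text):

```python
def expected_utility(lottery, U):
    """Probability-weighted sum of the utilities of a lottery's prizes."""
    return sum(p * U[prize] for prize, p in lottery.items())

U = {"x": 10.0, "y": 5.0, "z": 1.0}    # hypothetical utility assignment
A = {"x": 0.50, "y": 0.25, "z": 0.25}  # lottery A from the text
B = {"x": 0.10, "y": 0.80, "z": 0.10}  # lottery B from the text

# The agent's preference for A over B is consistent with this U:
assert expected_utility(A, U) > expected_utility(B, U)
```

Savage’s question is the converse one: which qualitative conditions on the agent’s preferences over such lotteries guarantee that an (almost) unique U of this kind exists at all.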

The conditions are slightly more complicated than the ones for extensive properties we saw in the previous entries (see [2] for a brief summary), but the idea is the same, with an important difference: now the numerical function representing preferences (‘utility’) is not unique up to scalar transformations, but only up to (positive) linear transformations, i.e., if assignments U and V both satisfy the qualitative conditions of Savage’s theorem, then there are numbers a > 0 and b such that for every x it happens that U(x) = aV(x) + b (in the case of ‘ratio scales’, like those of mass and length, b is necessarily 0; now it can be any real number), and such that the agent’s choices (whose systematic mutual relations are what the theorem’s axioms describe) are consistent with the maximisation of what is called ‘expected utility’, i.e., the expected value of the function U or V, given a probability function that is also uniquely determined by these same axioms. This made ‘subjective utility’ a concept with exactly the same mathematical properties as the usual scales of temperature (though not absolute temperature): while it does not make sense to say that, if the utility you get from something is 10 and the one you get from something else is 20, the latter gives you ‘twice’ the satisfaction of the former, it is correct to say that the difference between the satisfactions you get from those two things is twice the difference between something that gives you 8 units of utility and something that gives you 3. The logic of the expected utility function already suggests this last mathematical property: if you are indifferent between option x and a lottery in which you win prize y with probability p and prize z with probability (1-p), where you prefer y to x and x to z, this can be represented as:

EU(x) = EU(lottery) = pU(y) + (1-p)U(z)

but, since

EU(x) = U(x) = pU(x) + (1-p)U(x),

it follows that

pU(x) + (1-p)U(x) = pU(y) + (1-p)U(z)

pU(x) – pU(y) = (1-p)U(z) – (1-p)U(x)

p[U(x) – U(y)] = (1-p)[U(z) – U(x)]

And hence,

[U(x) – U(y)]/[U(z) – U(x)] = (1-p)/p

I.e., the ratio of the differences between the utilities of the three options has to be equal to the ratio of probabilities in the lottery, no matter what function U you have used to represent the agent’s preferences for x, y and z.
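The derivation above can be verified numerically. A sketch, with an arbitrary utility assignment satisfying y preferred to x and x to z, and an arbitrary positive linear transformation (all the numbers here are illustrative assumptions):

```python
# Any utility assignment with U(y) > U(x) > U(z) will do for the check.
U = {"x": 4.0, "y": 6.0, "z": 1.0}

# A positive linear (affine) transformation V = a*U + b, a > 0:
a, b = 3.0, 7.0
V = {opt: a * u + b for opt, u in U.items()}

def diff_ratio(W):
    """The ratio [W(x) - W(y)] / [W(z) - W(x)] from the derivation."""
    return (W["x"] - W["y"]) / (W["z"] - W["x"])

# The ratio of utility differences is invariant under the transformation:
assert abs(diff_ratio(U) - diff_ratio(V)) < 1e-9

# And it fixes the indifference probability p via (1-p)/p = ratio:
r = diff_ratio(U)      # here (4-6)/(1-4) = 2/3
p = 1.0 / (1.0 + r)    # solves (1-p)/p = r
assert abs(p * U["y"] + (1 - p) * U["z"] - U["x"]) < 1e-9
```

This is exactly the sense in which utility differences, unlike utility levels, carry measurable information on this kind of scale.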

An additional prize of Savage’s theorem is that the qualitative conditions stated in its axioms (remember, conditions about what an agent prefers to what) can be interpreted as conditions of rationality: it would be irrational to behave, i.e., to choose, in a way contrary to what the axioms state. For example, if you violate the axiom of transitivity (you prefer x to y, and y to z, but z to x), then you have irrational preferences. In this case, you can be turned into a ‘money pump’: suppose you start with y; since you prefer x, you will be willing to pay some small amount of money for the right to exchange y for x. Now you have x, but since you prefer z to x, you will be happy to pay a little bit to exchange x for z; and once you have z, since you prefer y to z, you will pay something more to make that exchange too. So you end up with y, as at the beginning, but having given away three amounts of money. No matter how small these amounts are, the process can be repeated until you lose your entire fortune. Something similar happens if any of the other axioms of Savage’s theorem is violated. In conclusion, rationality (in the sense of having ‘consistent’ preferences and choices) not only can be summarised in a simple list of mathematical axioms, but these axioms seem to entail that a necessary condition of being rational is that you choose as if maximising a mathematical function that can be as well defined as the temperature of your room is definable in Celsius degrees, for example.
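The money pump can be simulated in a few lines. A sketch, where the fee charged per trade and the agent’s starting wealth are arbitrary assumptions:

```python
# The intransitive cycle from the text: x over y, y over z, z over x.
prefers = {("x", "y"), ("y", "z"), ("z", "x")}

def money_pump(holding, wealth, fee=1.0, rounds=3):
    """Repeatedly offer the agent a trade she prefers, charging a fee each time."""
    for _ in range(rounds):
        # Find something the agent prefers to what she currently holds.
        better = next(a for (a, b) in prefers if b == holding)
        holding, wealth = better, wealth - fee
    return holding, wealth

holding, wealth = money_pump("y", 100.0)
# After three trades the agent holds y again but is three fees poorer.
assert holding == "y" and wealth == 97.0
```

Raising `rounds` drains the agent’s wealth without limit, which is the point of the argument: cyclic preferences make you a guaranteed source of free money.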

This result raised great expectations in the social sciences and in psychology, during a time dominated by a radically empiricist, behaviourist paradigm in most of those areas, with the hope that many other concepts and properties could be subjected to a similar treatment. Measurement theory was then practised over the next decades mainly by mathematical psychologists and social scientists in cooperation with philosophers and logicians (e.g., [3]), aspiring to put on firm foundations what can be objectively asserted and measured in those fields. Of course, if you know a little about the state of those disciplines, you will know that successful measurement remains a very distant objective. One of the reasons for that failure is that the representational theory of measurement was not capable of delivering what was expected from it, but we will see this in more detail in the next entry.

References

  1. Savage, Leonard J., 1954, The foundations of statistics. New York, John Wiley and Sons.
  2. Karni, Edi, 2005, “Savage’s Subjective Expected Utility Model”.
  3. Krantz, David H., R. Duncan Luce, Amos Tversky, and Patrick Suppes. 1971. Foundations of Measurement. Vol. 1, Additive and Polynomial Representations. Mineola: Dover.
