The Grand Bazaar of Wisdom (3): Epistemic utility approaches

Bowls with a utility function

Another route that has been followed in applying economic thinking to scientific methodology consists in trying to define a specific (‘cognitive’, or ‘epistemic’) utility function that rational scientific research should maximise. This has been the strategy of what is usually called cognitive decision theory, which is basically an adaptation of the Bayesian theory of rational choice to the case in which the decisions to be made are those of accepting some propositions or hypotheses instead of others. Hence, in the case of scientific research, it is assumed that scientists decide (or should decide, if we give this approach a normative interpretation) to accept a particular solution to a scientific problem, instead of an alternative solution, if and only if the expected utility they derive from accepting the former is higher than the expected utility they would attain from accepting any other solution to that problem. The expected utility of accepting the hypothesis h given the ‘evidence’ e is defined as:

(1) EU(h,e) = Σs∈X u(h,s)·p(s,e)

where the s’s are the possible states of the world, u(h,s) is the epistemic utility of accepting h if the true state of the world is s, and p(s,e) is the probability of s being the true state given the evidence e. One fundamental problem for a cognitive utility theory is, of course, that of defining an ‘appropriate’ epistemic utility function u; but, before discussing this problem, a still more basic conceptual difficulty has to be mentioned: standard decision theory is a theory about what actions an agent will perform, given her options, her preferences, and the knowledge, beliefs, or information she has about how the relevant things are. It may even sound absurd to say that one can choose what to know, or what to believe. Of course, one can do things in order to gain more or less information, and one can also allocate more effort to looking for information about some topics than about others; but, once the results of this search are in front of you, you usually do not ‘choose’ what to believe: you just happen to have certain beliefs. Indeed, the fact that a person’s beliefs have been ‘chosen’ by her is frequently a very strong reason to doubt their truth, or at least, to doubt the epistemic rationality of that person. Cognitive decision theorists counter that the object of an epistemic utility function is not really an agent’s system of beliefs: these are represented in (1) by the (subjective) probability function p. The ‘acts’ whose cognitive utility is relevant are, rather, those of accepting or rejecting (or suspending judgement on) a given proposition (the hypothesis h). As Patrick Maher has cogently argued (1, pp. 133 ff.), the acceptance of a scientific hypothesis is logically independent of our belief in its truth: attaching probability 1, or any other ‘high’ level of probability, to a theory is neither a sufficient nor a necessary condition for its acceptance (for example, most scientific theories are accepted even though scientists actually believe they are not literally true). We may add that, as will become evident in the next sections, scientists usually have (‘social’) reasons to accept a hypothesis that have nothing to do with how confident they are about its truth.
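Formula (1) is just an expectation taken state by state. As a minimal sketch, the snippet below computes it for a toy case; the states, utilities, and posterior probabilities are invented for illustration, not drawn from any of the authors discussed.

```python
# Illustrative sketch of formula (1): EU(h,e) = sum over s in X of u(h,s)*p(s,e).
# All numbers below are invented for illustration.

def expected_utility(u, p, states):
    """Expected epistemic utility of accepting h, given utilities u(h,s)
    and posterior probabilities p(s,e) over the possible states."""
    return sum(u[s] * p[s] for s in states)

states = ["s1", "s2", "s3"]                      # possible states of the world
p_given_e = {"s1": 0.6, "s2": 0.3, "s3": 0.1}    # p(s,e): posterior over states

# u(h,s): epistemic utility of accepting h in each state
# (here h is imagined to be true in s1 and s2, false in s3)
u_h = {"s1": 1.0, "s2": 1.0, "s3": -1.0}

print(expected_utility(u_h, p_given_e, states))  # 0.6 + 0.3 - 0.1 = 0.8
```

The scientist, on this picture, would compute such a value for each candidate hypothesis and accept the one with the highest expectation.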

Another possible objection is that, even assuming that acceptance and belief are not the same thing, the only relevant thing from the point of view of a sound epistemology is the latter, and not the former; for example, van Fraassen 2 made precisely this point in discussing the ‘inference to the best explanation’ approach: once you have concluded that h has a higher probability (though less than 1) than any rival theory, accepting h would mean going further than your evidence allows. This criticism, however, seems to be based on the assumption that accepting a theory is identical with attaching probability 1 to it, which, as Maher has argued, is not the case. Nevertheless, the idea that once you have subjective probabilities you don’t need acceptance may still have a point, particularly for Bayesian epistemologists. Maher’s answer is to point to the fact that scientists (and ordinary people as well) do actually accept and reject theories and other types of propositions (an empirical phenomenon that calls for some explanation), and, even more importantly:

Much of what is recorded in the history of science is categorical assertions by scientists of one or another hypothesis, together with reasons adduced in support of those hypotheses and against competing hypotheses. It is much less common for history to record scientists’ probabilities. Thus philosophers of science without a theory of acceptance lack the theoretical resources to discuss the rationality (or irrationality) of most of the judgements recorded in the history of science (…) Without a theory of acceptance, it is also impossible to infer anything about scientists’ subjective probabilities from their categorical assertions. Thus for a philosophy of science without a theory of acceptance, the subjective probabilities of most scientists must be largely inscrutable. This severely restricts the degree to which Bayesian confirmation theory can be shown to agree with pretheoretically correct judgements of confirmation that scientists have made.

(1, pp. 162 f.)

Once we have seen some of the reasons to take acceptance as an act scientists can perform, we can turn to the question of what utility function they are assumed to be maximising when they decide to accept some propositions instead of others. Cognitive decision theory is grounded on the idea that this utility function is of an epistemic nature, i.e., that the utility of accepting h depends only on the ‘epistemic virtues’ h may have. Or, as the first author to use the concept of epistemic utility put it:

the utilities should reflect the value or disvalue which the outcomes have from the point of view of pure scientific research, rather than the practical advantages or disadvantages that might result from the application of an accepted hypothesis, according as the latter is true or false. Let me refer to the kind of utilities thus vaguely characterized as purely scientific, or epistemic, utilities.

(Hempel, 3, p. 465).

Of course, it was not assumed by Hempel, nor by other cognitive decision theorists, that a real scientist’s utility function is affected only by epistemic factors; after all, researchers are human beings with preferences over a very wide range of things and events. But most of these authors assume that scientists, qua scientists, should base their decisions on purely epistemic considerations (and perhaps often do so). So, what are the cognitive virtues an epistemic utility function must contain as its arguments? One obvious answer is ‘truth’: ceteris paribus, it is better to accept a theory if it is true than the same theory if it is false. This does not necessarily entail that accepting a true proposition is always better than accepting a false one, for other qualities, which some false theories may have to a higher degree than some true theories, are also valuable for scientists, e.g., the informative content of a proposition. So, one sensible proposal for defining the expected epistemic utility of h is to take it as a weighted average of the probability h has of being true, given the evidence e, and the amount of information h provides. This leads to a measure of expected cognitive utility like the following (Levi 4):

(2) EU(h,e) = p(h,e) – q·p(h)

where the parameter q is a measure of the scientist’s attitude towards risk: the lower q is in the epistemic utility function of a researcher, the more risk averse she is, for she will prefer theories with a higher degree of confirmation (p(h,e)) to theories with a high degree of content (1 – p(h)). If formula (2) reflects the real cognitive preferences of scientists, it entails that, in order to be accepted, a theory must be strongly confirmed by the empirical evidence, but must also be highly informative. Scientific research is a difficult task because, usually, contentful propositions become disconfirmed sooner rather than later, whereas it is easy to verify statements that convey little information. One may doubt, however, that these are the only two cognitive requirements of ‘good’ scientific theories. For example, (2) leads to undesirable conclusions when all the theories scientists must choose among have been empirically falsified (and hence p(h,e) is zero): in this case, the cognitive value of a theory will be proportional to its content, which means that, in order to find a theory better than the already refuted h, you can simply conjoin to it any proposition (no matter whether true or false) that does not follow from h. For example, Newtonian mechanics conjoined with the story of the Greek gods would have a higher scientific value than Newtonian mechanics alone.
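The difficulty just described can be made concrete with a few numbers. The sketch below evaluates Levi’s measure (2) for a refuted theory h and for h conjoined with an irrelevant proposition; all the probability values are invented for illustration.

```python
# Sketch of Levi's measure (2), EU(h,e) = p(h,e) - q*p(h), and of the
# difficulty noted above: once h is refuted (p(h,e) = 0), its cognitive
# value grows with its content. All numbers are invented for illustration.

def levi_eu(p_h_given_e, p_h, q):
    """Expected cognitive utility: confirmation minus q times prior probability."""
    return p_h_given_e - q * p_h

q = 0.5                                 # risk parameter (illustrative value)

# A refuted theory h: p(h,e) = 0, prior p(h) = 0.4
eu_h = levi_eu(0.0, 0.4, q)             # = -0.2

# h conjoined with an irrelevant proposition g: still refuted, but the
# conjunction has a lower prior (hence higher content), so it scores better.
eu_h_and_g = levi_eu(0.0, 0.4 * 0.5, q)  # = -0.1

print(eu_h, eu_h_and_g)
assert eu_h_and_g > eu_h                # the padded theory "wins"
```

The conjunction improves its score purely by shrinking its prior probability, which is exactly the Newtonian-mechanics-plus-Greek-gods problem in numerical form.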

In order to solve this difficulty, one interesting suggestion has been to introduce as an additional epistemic virtue the notion of closeness to the truth, or verisimilitude (cf. Niiniluoto 5 and 6, Maher 1), a notion that was introduced into the philosophy of science as a technical concept in Popper 7: amongst false or falsified theories (and perhaps also amongst true ones), epistemic value does not depend only on the theories’ content, but also on how ‘far from the full truth’ they are. The main difference between Niiniluoto’s and Maher’s approaches is that the former is ‘objective’, in the sense that it assumes that there exists some objective measure of ‘distance’ or ‘(dis)similarity’ between the different possible states of nature, and the value of accepting a theory is then defined as an inverse function of the distance between those states of nature that make the theory true and the state which is actually the true one. Maher’s proposal, instead, is ‘subjective’ in the sense that it starts by assuming that there is an undefined epistemic utility function of the form u(h,s), perhaps a different one for each individual scientist, and the verisimilitude of a hypothesis is then introduced as a normalised difference between the utility of accepting h given what the true state is and the utility of accepting a tautology. In Maher’s approach, then, epistemic utility is a primitive notion, which is only assumed to obey a short list of simple axioms: (i) accepting a theory is better when it is true than when it is false; (ii) the utility of accepting a given true theory does not depend on what the true state is; (iii) accepting a true theory is better than accepting any proposition derivable from it; (iv) there is at least one true theory whose acceptance is better than accepting a tautology; and (v) the utility of accepting a full description of the true state of nature is constant and higher than the utility of accepting a logical contradiction.
Maher assumes that different scientists may have different cognitive utility functions, and hence that they can assign different verisimilitude values to the same theories, even if the principles listed above are fulfilled. Actually, Niiniluoto’s approach is not completely objective either, because the definitions of distance between states depend on which factors of similarity each scientist values more or less. This is not a bad thing: after all, cognitive preferences are preferences, and these are always the preferences of some particular agent.
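One way to render Maher’s ‘normalised difference’ informally described above is as a simple rescaling of the primitive utility function. The sketch below is an assumption about how that normalisation might look, not Maher’s own formal definition, and the utility numbers are invented.

```python
# Rough sketch of reading verisimilitude off a primitive epistemic utility
# function, in the spirit of Maher's proposal: normalise u(h,s) between the
# utility of accepting a tautology and the utility of accepting the full
# true description of the state. Both the normalisation and the numbers
# are illustrative assumptions.

def verisimilitude(u_h, u_tautology, u_full_truth):
    """Normalised position of u(h,s) between tautology and full truth."""
    return (u_h - u_tautology) / (u_full_truth - u_tautology)

u_tautology = 0.0    # accepting a tautology: safe but uninformative
u_full_truth = 1.0   # accepting the complete true description of the state
u_h = 0.7            # this scientist's utility for accepting h, given the true state

print(verisimilitude(u_h, u_tautology, u_full_truth))  # 0.7
```

Because u(h,s) is allowed to differ from scientist to scientist, the same hypothesis can come out with different verisimilitude values for different researchers, which is exactly the subjectivity noted in the text.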

References

  1. Maher, P., 1993, Betting on Theories, Cambridge, Cambridge University Press.
  2. van Fraassen, B., 1980, The Scientific Image, Oxford, Clarendon Press.
  3. Hempel, C. G., 1960, ‘Inductive Inconsistencies’, Synthese, 12:439-69 (reprinted in Aspects of Scientific Explanation, New York, The Free Press, pp. 53-79).
  4. Levi, I., 1967, Gambling with Truth, New York, Knopf.
  5. Niiniluoto, I., 1987, Truthlikeness, Dordrecht, D. Reidel.
  6. Niiniluoto, I., 1998, ‘Verisimilitude: The Third Period’, British Journal for the Philosophy of Science, 49:1-29.
  7. Popper, K. R., 1963, Conjectures and Refutations: The Growth of Scientific Knowledge, London, Routledge and Kegan Paul.
