The Grand Bazaar of Wisdom (and 6): Mathematical models in the economics of science

The most distinctive feature of modern economics is probably its reliance on the methodology of mathematical model building. The ultimate aim of scientific model building is to illuminate real phenomena. Models are, basically, logical arguments, whose main virtue is that they allow us to see very clearly what follows, and what does not follow, from a definite set of premises. These premises describe an imaginary world, and mathematical analysis allows us to decide unambiguously what would happen in that world under certain conceivable circumstances. The most important question, then, is to what extent that imaginary world represents well enough the relevant aspects of the way things actually are. Mathematical models of scientific knowledge production are not common, however: sociologists of science tend to dismiss them as ‘economics imperialism’; philosophers usually do not know enough economics even to consider seriously the possibility of engaging in an economics of scientific knowledge; economists do not want to waste their time on such a ‘minor’ question; and methodologists of economics, perhaps the scholars in whom the right interests and the right resources are best combined, are mostly either too critical of standard economic theory to consider the effort worthwhile, or fear that an economics of scientific knowledge would come dangerously close to social constructivism and relativism.

One of the first applications of an economic model to a problem clearly falling within the philosophy of science was Cristina Bicchieri’s “Methodological Rules as Conventions” (1988) [1], which used David Lewis’ theory of conventions. On this view, the main element in a scientist’s decision to adopt a method or procedure is the expectation that her colleagues will also follow it. In this sense, the choice of one rule or system of rules over another is arbitrary, for individual scientists would have been equally happy had some different norms been collectively adopted. That methodological rules are equilibria of a coordination game explains their (relative) stability: once individuals expect the others to comply, it is costly for each agent to follow a different rule. I think, however, that the choice of a scientific procedure is not a game of pure coordination: different rules may have different value for different scientists (in many cases the game may rather be of the ‘Battle of the Sexes’ type).

A more sophisticated model of a similar choice situation has been developed by Paul David [2]. Researchers have to adopt or reject a theory, T, which in the long run will be either accepted as right by the community or rejected as wrong, but which is now under discussion, so that some researchers currently accept it and others reject it. Suppose a scientist believes, with probability p (given her own private knowledge and the opinions expressed by her neighbouring colleagues), that the theory will be collectively adopted in the end. The utility she expects to get also depends on whether a majority or a minority of her colleagues currently accept T, in the following way: let a be the utility of adopting T if it is now rejected by the majority but collectively adopted in the future (‘being right with the few’); let b be the utility of rejecting T under the same conditions (‘being wrong with the crowd’); let c be the utility of adopting T if it is now accepted by the majority and collectively adopted in the future (‘being right with the crowd’); and let d be the utility of rejecting T under these last conditions (‘being wrong with the few’). Assume, lastly, that a > c > b > d. It follows that the scientist will adopt the majority opinion if and only if (1 − p)/p < (c − d)/(a − b) (this is the condition when the majority currently accepts T; when it currently rejects T, the analogous condition results from exchanging p and 1 − p). This entails that, if the difference in reputation terms between being eventually right having defended a minority opinion (a) and being eventually wrong having defended the majority opinion (b) is small enough, conformity with the majority opinion will be the scientist’s best strategy, as the short sketch below illustrates. Around the same time, Brock and Durlauf (1999) [3] and Zamora Bonilla (1999, 2006a, 2006b, 2007) [4-7] offered models in which the scientist’s decision depends both on the researcher’s ‘private’ assessment of the theory and on a ‘conformity effect’ that takes into account the (expected) choices of her colleagues.
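To make the decision rule concrete, here is a minimal Python sketch of the comparison just described. The payoff numbers are invented for illustration, and p is read as the probability that the currently majoritarian opinion is eventually vindicated, so that the single inequality covers both cases.

```python
# Minimal sketch of the conformity condition in David's model described
# above. Payoff numbers are invented; p is read here as the probability
# that the currently majoritarian opinion is eventually vindicated.

def follows_majority(p, a, b, c, d):
    """Conforming yields c if the majority proves right and b otherwise;
    dissenting yields a if the minority proves right and d otherwise.
    Conform iff p*c + (1-p)*b > (1-p)*a + p*d, i.e. iff
    (1-p)/p < (c-d)/(a-b)."""
    assert a > c > b > d, "the model assumes a > c > b > d"
    return (1 - p) / p < (c - d) / (a - b)

# A heavy penalty for lonely error (large c - d) relative to the bonus for
# lonely rightness (small a - b) makes conformity pay at even odds:
print(follows_majority(p=0.5, a=10, b=8, c=9, d=3))    # True: conform
# When that penalty shrinks, dissent becomes worthwhile:
print(follows_majority(p=0.5, a=10, b=8, c=9, d=7.5))  # False: dissent
```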
The main conclusions of all these models are the following. First, more than one social equilibrium (i.e., a distribution of individual choices such that nobody has an interest in choosing differently, given the choices of her colleagues) is possible. Second, path dependence is a significant factor in the attainment of an equilibrium: e.g., two scientific communities having the same empirical evidence about a pair of alternative theories might end up making different choices if their data had simply accumulated in a different order. Third, contrary to what happened in Bicchieri’s and David’s models, some equilibrium states can correspond to a non-unanimous choice (i.e., diversity of individual judgements can persist in equilibrium). Another important conclusion is that, as the factors influencing the individual assessments change (e.g., through new empirical or theoretical arguments affecting the assessment each scientist makes), the number of scientists accepting a theory can change abruptly at some point, even though those influencing factors have accumulated in small marginal increments: the dynamics of scientific consensus is not necessarily linear (see the simulation sketch below). The last two papers additionally consider the possible effects of collective choices, i.e., the formation of (not necessarily universal) coalitions in which every member would be interested in adopting the theory if and only if the other members did the same. In this case, when there are two stable equilibria, only one of them remains stable under collective choice (i.e., no coalition can force a move to the other equilibrium), and if one of the equilibria is Pareto-superior to the other, it is the former that will be coalition-proof. This last conclusion suggests that there is a middle ground between ‘free market’ and ‘social planning’ approaches to the economics of scientific knowledge: the epistemic efficiency of science would perhaps come neither mainly from the unintended coordination of individual choices, nor from the calculations of a single planner, but from the free constitution of groups.
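The multiplicity of equilibria, the role of history, and the abrupt tipping of consensus can all be illustrated with a small simulation. The following sketch is a deliberately simple toy, not any one of the cited models, and all its parameters (the conformity weight, the evidence scale, the distribution of private assessments) are invented.

```python
# Toy simulation of theory acceptance with a private assessment plus a
# conformity effect. Not any one of the cited models; all numbers invented.
import random

random.seed(1)
N = 200                                                # community size
private = [random.gauss(0.0, 1.0) for _ in range(N)]   # private assessments
CONFORMITY = 4.0                                       # weight on colleagues

def equilibrium(evidence, initial_share):
    """Iterate best responses until the share accepting T stabilizes.
    Scientist i accepts T iff
    private[i] + evidence + CONFORMITY * (share - 0.5) > 0."""
    share = initial_share
    for _ in range(1000):
        new = sum(1 for s in private
                  if s + evidence + CONFORMITY * (share - 0.5) > 0) / N
        if new == share:
            break
        share = new
    return share

# Same evidence, different histories: two stable equilibria.
print(equilibrium(evidence=0.0, initial_share=0.1))  # low-acceptance state
print(equilibrium(evidence=0.0, initial_share=0.9))  # high-acceptance state

# Evidence accumulating in small steps, starting each time from a
# low-acceptance history: consensus barely moves, then tips abruptly.
for e in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):
    print(e, equilibrium(evidence=e, initial_share=0.1))
```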

Another important contributor to the economic modelling of epistemological problems has been the philosopher Philip Kitcher, who has tried to develop a ‘social epistemology’ based on a methodologically individualist conception of social processes and on a reliabilist conception of knowledge (one which takes progress towards the truth as the epistemic goal of science). The role of social epistemologists would be “to identify the properties of epistemically well-designed social systems, that is, to specify the conditions under which a group of individuals, operating according to various rules for modifying their individual practices, succeed, through their interactions, in generating a progressive sequence of consensus practices” (Kitcher, 1993 [8], p. 303). His strategy has two stages. First, he discusses how individual scientists act when taking their colleagues’ actions into account: in particular, how they decide how much authority to confer on those colleagues, and how to compete or cooperate with them. Second, Kitcher analyses the epistemic consequences that different distributions of researchers’ efforts may have. To carry out this strategy, Kitcher employs models from standard and evolutionary game theory, as well as from Bayesian decision theory. Although the strategy is grounded on methodological individualism, when it turns to normative problems it ultimately rests on the idea that there is some kind of collective (or ‘objective’) standard of epistemic value against which to measure the actual performance of a scientific community, a standard that would correspond to the impartial preferences of something like a ‘philosopher monarch’.

One important topic in Kitcher’s models is ‘the division of cognitive labour’. The question that chiefly concerns him is the difference between the distribution of research efforts that is optimal from a cognitive point of view and the distribution that would arise if each researcher were individually pursuing her own interest; the problem is hence basically one of coordination. Kitcher considers several cases, according to whether individual scientists are motivated just by the pursuit of truth, just by the pursuit of professional success, or by a mix of both goals, and also according to whether all scientists are assumed to have the same preferences and the same estimations of the probability of each theory being right, or there is some motivational or cognitive diversity. The most relevant conclusion is that in a community whose members were motivated only by professional glory (i.e., who did not care whether the finally accepted theory is true or false), everyone would choose the theory with the highest probability of being finally accepted, and so no one would pursue alternative theories or methods; but with just a slight weight attached to the goal of truth in the scientists’ utility function, a distribution of efforts close to the optimum will be attained. A toy version of the underlying trade-off is sketched below.
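The contrast between the communally optimal distribution and glory-driven herding can be shown with a deliberately crude computation. The functional forms below (a success probability with diminishing returns in the number of workers, and fixed priors on which programme is right) are invented for this sketch, not Kitcher’s actual formalism.

```python
# Toy division-of-cognitive-labour computation (invented functional forms,
# not Kitcher's own): N scientists split between two rival programmes;
# programme i, pursued by n workers, delivers the truth with probability
# PI[i] * f(n), where f exhibits diminishing returns.

N = 20
PI = (0.7, 0.3)   # prior probability that each programme is the right one

def f(n, h=4):
    """Chance that a programme succeeds when n workers pursue it."""
    return n / (n + h)

def community_payoff(n1):
    """Probability the community ends up with the truth when n1 workers
    pursue programme 1 and N - n1 pursue programme 2."""
    return PI[0] * f(n1) + PI[1] * f(N - n1)

best = max(range(N + 1), key=community_payoff)
print(best, N - best, round(community_payoff(best), 3))  # hedged split: 13 / 7
print(round(community_payoff(N), 3))                     # all herd on programme 1
```

In this toy setting the hedged split does markedly better than herding; a small weight on truth in each scientist’s utility would pull some of them toward the less popular programme and recover most of that gap, which is the gist of Kitcher’s conclusion.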

A couple of mathematical models of scientific knowledge production have been developed by Philip Mirowski [9, 10], one of them in collaboration with Steve Sklivas. They attempt to explain an observation made by sociologists of science: researchers almost never perform exact ‘replications’ of the experiments made by others (contrary to what positivist expositions of the scientific method prescribed), even though the results of those experiments are actually employed in ensuing research practice (and are, in this sense, ‘reproduced’). Mirowski and Sklivas develop a game-theoretic account of this behaviour: the first performers of an experiment gain nothing from independent replications if these are successful, and lose if they fail, but they do gain from the further use of the experiment by others; use (or ‘reproduction’) is costly for the researchers who undertake it, though exact replication is costlier still; on the other hand, only a failed replication gives a positive payoff to replicators; lastly, the more information is conveyed in the report of the original experiment, the less costly both use and replication become. From these assumptions, Mirowski and Sklivas derive the conclusion that the optimal strategy for the scientist performing an original experiment is to provide just enough information to encourage use, but not enough to make replication worthwhile; replication will only have a chance if journal editors demand that still more information be provided in experimental reports. In the second of the models referred to above, Mirowski compares the process of measuring a physical constant to the process that determines prices in markets: just as differences in the price of the same good at different places create an opportunity for arbitrage, inconsistencies among the measured values of a constant (derivable from the use of accepted formulae, e.g. physical laws, together with the values of other constants) create an opportunity to make further relevant measurements. Graph theory is employed at this point to describe the interconnections between the measured values of several constants (the ‘nodes’ of the graph) and the formulae connecting them (the ‘edges’), and to propose an index measuring the degree of mutual inconsistency that the existing values display (a minimal sketch of such an index is given below). Interestingly enough, the application of this index to several branches of science shows that economics has been much less efficient in the construction of consistent sets of measurements, a fact Mirowski explains by the reluctance of neoclassical economists to create an institutional mechanism capable of recognising and confronting this shortcoming (an explanation that could be tested by comparing the measurements assembled by economic agencies of, say, a neoclassical and a Keynesian orientation).
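In the same spirit, a minimal version of such an inconsistency index might look as follows. The constants, formulae, and measured values are invented purely for illustration; Mirowski’s own index is of course constructed differently in its details.

```python
# Minimal sketch in the spirit of Mirowski's graph-theoretic index:
# measured constants are nodes, accepted formulae are edges, and the index
# aggregates how badly the measured values violate the formulae.
# Constants, formulae, and values below are invented for illustration.

measured = {"A": 2.00, "B": 3.10, "C": 6.05, "D": 12.3}

# Each formula relates some constants; it should evaluate to zero exactly.
formulae = [
    ("A*B = C",   lambda v: v["A"] * v["B"] - v["C"]),
    ("2*C = D",   lambda v: 2 * v["C"] - v["D"]),
    ("2*A*B = D", lambda v: 2 * v["A"] * v["B"] - v["D"]),
]

def inconsistency_index(values, formulae):
    """Root-mean-square residual of the formulae at the measured values:
    zero would mean a perfectly consistent web of measurements."""
    residuals = [f(values) for _, f in formulae]
    return (sum(r * r for r in residuals) / len(residuals)) ** 0.5

print(inconsistency_index(measured, formulae))
# Any nonzero value flags an 'arbitrage opportunity': some constant is
# worth re-measuring, just as price discrepancies attract arbitrageurs.
```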

I have personally developed other models of scientific activity in recent years. Zamora Bonilla (2002) [11] presents a model in which the members of a scientific community can choose the ‘confirmation level’ (or some other measure of scientific quality) that a theory must surpass in order to become acceptable, under the assumption that scientists are motivated not only by the quality of the theories, but mainly by being recognised as proponents of an accepted theory. The chances of getting recognition are small both if the chosen level is very low (for then there will be too many successful theories to compete with) and if it is very high (for then it will be very difficult to discover an acceptable theory), so the chosen level tends to lie in between; the toy computation below illustrates this trade-off. Zamora Bonilla (2006a) [5] offers a game-theoretic analysis of how the interpretation of an experimental result is chosen, whereas his (2006b) [6] describes the general features of the process of scientific research as a ‘persuasion game’, a framework which is applied in his (2014) [12] to the analysis of co-authorship.
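The trade-off can be made vivid with a toy computation. The functional forms below are invented (they are not the model’s actual equations), chosen only so that stricter standards lower the chance of producing an acceptable theory while raising the credit a success earns.

```python
# Toy illustration (invented functional forms, not the model's actual
# equations) of the trade-off in Zamora Bonilla (2002): expected
# recognition = (chance of producing a theory that clears the chosen
# confirmation level) x (credit per accepted theory, which grows with
# the level because fewer rivals clear it).
import math

def expected_recognition(level):
    p_success = math.exp(-level)   # harder standards: fewer discoveries
    credit = level                 # harder standards: more credit each
    return p_success * credit

levels = [i / 10 for i in range(1, 51)]
best = max(levels, key=expected_recognition)
print(best, round(expected_recognition(best), 3))
# The optimum is interior: both very lax and very demanding confirmation
# levels leave scientists with little expected recognition.
```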

References

  1. Bicchieri, C., 1988, ‘Methodological Rules as Conventions’, Philosophy of the Social Sciences, 18:477-95.
  2. David, P., 1998, ‘Communication Norms and the Collective Cognitive Performance of “Invisible Colleges”’, in G. Barba et al. (eds.), Creation and Transfer of Knowledge: Institutions and Incentives, Berlin, Springer, pp. 115-63.
  3. Brock, W. A., and S. N. Durlauf, 1999, ‘A Formal Model of Theory Choice in Science’, Economic Theory, 14:113-30.
  4. Zamora Bonilla, J. P., 1999, ‘The Elementary Economics of Scientific Consensus’, Theoria, 14:461-88.
  5. Zamora Bonilla, J. P., 2006a, ‘Rhetoric, Induction, and the Free Speech Dilemma’, Philosophy of Science, 73:175-93.
  6. Zamora Bonilla, J. P., 2006b, ‘Science as a Persuasion Game: An Inferentialist Approach’, Episteme, 2:189-201.
  7. Zamora Bonilla, J. P., 2007, ‘Science Studies and the Theory of Games’, Perspectives on Science, 14:639-71.
  8. Kitcher, Ph., 1993, The Advancement of Science: Science without Legend, Objectivity without Illusions, Oxford, Oxford University Press.
  9. Mirowski, Ph., 2004, The Effortless Economy of Science?, Durham, Duke University Press.
  10. Mirowski, Ph., and S. Sklivas, 1991, ‘Why Econometricians Don’t Replicate (Although They Do Reproduce)?’, Review of Political Economy, 3:146-63.
  11. Zamora Bonilla, J. P., 2002, ‘Scientific Inference and the Pursuit of Fame: A Contractarian Approach’, Philosophy of Science, 69:300-23.
  12. Zamora Bonilla, J. P., 2014, ‘The Nature of Co-Authorship’, Synthese, 191:97-108.
