The progress that physics experienced during the 20^{th} century was probably one of the greatest and most enduring successes of humankind. Discovering the hidden and minute composition of matter and energy, and realising that the rules they obey are as far from common sense as quantum theory has revealed, are amongst the things that make one proud of being human. The amount of talent, cooperation, and (mostly public) investment needed to climb so towering a summit has been enormous by every measure, and though the contributions of some exceptional individuals (Einstein, Heisenberg, Schrödinger, Gell-Mann, etc.) were extraordinarily important, in the end the construction of what is now called the ‘Standard Model of Particle Physics’ (SM) has been a multitudinous, collaborative process to a much greater extent than all the previous breakthroughs in the history of the discipline. However, after all this terrific success, we might now dedicate to theoretical physics those famous verses of Rubén Darío: *“The princess girl is sad, what will the princess ail?”*.

A very good description and diagnosis of the situation is offered in a recent book by physicist Sabine Hossenfelder: *Lost in Math: How Beauty Leads Physics Astray* ^{1}. Not being part of the physics community myself, I have no direct knowledge of how it has been received by her colleagues, but the topic it deals with has also been of much interest amongst philosophers of science in recent years, so I feel I have something to contribute. Actually, one of the events that, according to Hossenfelder, triggered the writing of the book was a meeting held at one of the most important research centres in philosophy of science, to which I have some personal connections (though not in that area): the Center for Mathematical Philosophy at the Ludwig Maximilians University. The topic of the meeting, held in December 2015, was “Why trust a theory?”. One might wonder why physicists are asking this question now, after having created, over the last three centuries, the most trustworthy theories ever invented.

Of course, the answer to this last question (the question of why theoretical physicists are asking the other question) is well known to everyone who has followed, even superficially, the developments of the field during the last three decades or so: the golden criterion that helped to cement physics as the queen of the sciences has always been *predictive success*. Surely, it is not the only criterion, and it is far from being a one-dimensional one, but, oh brother, when your theory predicts something totally unexpected and experiments show it was right almost to the last decimal, then everybody *must* accept that the theory is *good*. Similarly, as all of us have learned from good old uncle Richard Feynman, no matter how beautiful your theory is, if the experiments falsify it, then you *have* to eat it. But, though predictive success had been paving physics’ way to the throne of science since the astounding success of Kepler’s prediction of the transit of Venus and of Galilei’s prediction of the rhythm at which the bells attached to an inclined plane would be made to sound by a falling ball, and though every triumph of physics had since helped to discover an increasingly simpler and more beautiful order behind the mess of empirical data and experimental laws, the sad truth is that the SM has become a fortified wall that has stubbornly resisted all attempts to get beyond it.

As Hossenfelder says, nobody in theoretical physics *likes* the Standard Model (the most empirically successful theory ever created by humans), with its twenty-five different particles (nobody understands why they are the ones they are, nor why their masses are as large or as small as they are), with its strange numerical ratios between constants, and with its apparent incapacity to account for the existence of gravitation, of dark matter, and of dark energy. The SM is, besides many other things, remarkably *ugly* (or, to use another word that appears hundreds of times in the book, ‘*unnatural*’), and to a guild educated, generation after generation, in the idea that deep truth in physics is always accompanied by mathematical beauty, the suspicion that we might simply be unable to uncover a more symmetric and mathematically pleasing set of laws explaining why the SM has the features and shortcomings it has is one that many of them just cannot swallow.

Hossenfelder describes in her lovely book her many trips to visit some of the most important theoretical physicists of our time, asking them about their views on the current situation of the field, about the role they think mathematical beauty should have in the development of physics, and about the promise of each candidate theory to overcome the Standard Model, particularly as gigantic experiments like those at the LHC have found no traces of supersymmetric particles, nor of other events that might offer some empirical hint about what kind of theory we could find beyond the reign of the SM (and, instead, have gifted us almost the only remaining confirmation that the SM, this ugly monster, was right after all: the existence of the Higgs boson).

There is much in the book I cannot spoil here; I especially enjoyed (because of my previous total ignorance of the topic) Hossenfelder’s interview with Garrett Lisi, a maverick scientist who a decade ago proposed a purely geometric unification of all forces and particles, which created a nasty polemic amongst some people in the profession. The most important aspect of the book is, however, the author’s clear explanation of why she thinks the pursuit of mathematical beauty is spoiling theoretical physicists’ efforts to go beyond the Standard Model, and the recommendations she offers to overcome the situation. But, again, I shall not give you a summary of those here: better to go to the book, which makes for fascinating reading. In the remainder of this entry, I will instead provide a glimpse of my own diagnosis of the situation.

In a paper ^{2} whose title resonated in the Munich conference mentioned above, I proposed a simple formula to ‘measure’ how good a theory or hypothesis looks to a scientist. Let H be the hypothesis, E the total empirical evidence we have with which to assess the theory’s epistemic value, and let T stand for the (still, and probably forever, unknown) complete description of the *whole truth* about the aspects of reality H is trying to describe. For two propositions A and B, let us define the similarity or coherence between A and B as p(A&B)/p(AvB), where p is the scientist’s subjective (i.e., Bayesian) prior probability function; intuitively, this is the ratio between the measure of the set of possible worlds in which A and B are *both* true and that of the set of possible worlds in which *at least one* of A or B is true. Hence, I defined the empirical verisimilitude of H in the light of E as the product of the similarity between H and E and the similarity between E and T (i.e., the product of ‘how close’ H seems to E, and ‘how close’ E seems to the whole truth); formally:

Vs(H,E) = [p(H&E)/p(HvE)] · [p(E&T)/p(EvT)]

Since we assume that E is true, E is entailed by T (the whole truth includes the evidence), so that p(E&T) = p(T) and p(EvT) = p(E), which reduces the formula to the following:

Vs(H,E) = [p(H&E)/p(HvE)] · [p(T)/p(E)]

= [p(H&E)/p(E)] · [p(T)/p(HvE)]

∝ p(H|E)/p(HvE)

(since p(T) is a constant, equal for all possible H’s and E’s, we can dispense with it; what remains is the posterior p(H|E) = p(H&E)/p(E) divided by p(HvE)).

When our hypothesis H has correctly explained or predicted all the experimental findings in E, H entails E, so that p(H&E) = p(H) and p(HvE) = p(E), and the formula reduces to this:

Vs(H,E) = p(H)/p(E)^{2}
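The definitions above are easy to play with numerically. The following is a minimal sketch (the worlds, propositions, and prior probabilities are all invented for illustration) that computes Vs in a toy possible-worlds model and checks that, for an H that entails E, it agrees, up to the constant factor p(T), with the reduced form p(H)/p(E)^{2}:

```python
# Toy possible-worlds model: propositions are sets of worlds,
# p is the agent's subjective prior over worlds.
worlds = ["w1", "w2", "w3", "w4", "w5"]
prior = {"w1": 0.1, "w2": 0.2, "w3": 0.3, "w4": 0.25, "w5": 0.15}

def p(prop):
    """Prior probability of a proposition (a set of worlds)."""
    return sum(prior[w] for w in prop)

def coherence(a, b):
    """Similarity p(A&B)/p(AvB) between two propositions."""
    return p(a & b) / p(a | b)

def vs(h, e, t):
    """Empirical verisimilitude: coherence(H,E) * coherence(E,T)."""
    return coherence(h, e) * coherence(e, t)

# Invented example propositions:
T = {"w1"}                # the whole truth singles out one world
E = {"w1", "w2", "w3"}    # the evidence; entailed by T
H = {"w1", "w2"}          # a hypothesis that entails E

full = vs(H, E, T)                  # general formula
reduced = p(H) * p(T) / p(E) ** 2   # p(H)/p(E)^2, times the constant p(T)
print(full, reduced)                # the two values coincide
```

Nothing hinges on the particular numbers; the point is only that the general product of coherences and the reduced formula give the same ranking of hypotheses once H entails E.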

This entails that there are just three ways of enhancing the epistemic value of H. First, discover more empirical data and hope they are correctly predicted or explained by H. This is the ‘golden criterion’ we mentioned above, and it is, by far, the most expedient way of getting a high value of our function Vs. The second and third methods are completely different and have to do with convincing yourself of something (which is why I have termed these procedures ‘rhetorical’ in other writings, though they are not limited to rhetoric properly speaking). Remember that p is a *subjective* probability function; hence, if you manage to persuade yourself that, for example, one of the empirical predictions made by H that belonged to E was much more unexpected than you and your colleagues had previously thought, this will make the denominator of the formula a little smaller than it was. That is, magnifying the *surprisingness* of the empirical successes of a hypothesis contributes to attributing it a higher epistemic value, a higher impression of how good it looks to you (or to the people you have managed to persuade of this). This is the second strategy. As for the third and last one, any other manipulation of your subjective probabilities that has the effect of attributing a higher *prior* probability to H will also enhance the value of Vs. Or, alternatively, amongst competing theories all of which perfectly explain the available data (for example, theories that happen to explain the SM, but are not capable of predicting more), one will tend to attach a higher epistemic value to those that, for him or her, have a higher prior probability.
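The asymmetry between the three strategies can be made vivid with a rough numerical illustration (all the probabilities below are invented), using the reduced form Vs ∝ p(H)/p(E)² for a hypothesis H that entails E:

```python
def vs_reduced(p_h, p_e):
    """Vs up to the constant p(T), for an H that entails E."""
    assert p_h <= p_e  # entailment: H's worlds are among E's worlds
    return p_h / p_e ** 2

baseline = vs_reduced(p_h=0.01, p_e=0.2)         # ~0.25

# Strategy 2 (rhetorical): persuade yourself E was more surprising,
# i.e. lower p(E); Vs grows quadratically in 1/p(E).
more_surprising = vs_reduced(p_h=0.01, p_e=0.1)  # ~1.0

# Strategy 3 (rhetorical): a 'prettier' H gets a higher prior, but
# p(H) can never exceed p(E), so the possible gain is capped.
prettier = vs_reduced(p_h=0.02, p_e=0.2)         # ~0.5

# Strategy 1 (empirical): a genuinely new, unexpected prediction of H
# turning out true shrinks p(E) dramatically; Vs can grow without bound.
new_discovery = vs_reduced(p_h=0.01, p_e=0.02)   # ~25.0
```

Whatever numbers one plugs in, raising the prior of H yields only a bounded improvement, whereas genuinely surprising empirical success, by squaring a small p(E) in the denominator, dominates everything else.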

I suggest that the ‘search for beauty’ has to do with this last property of Vs. Theories that are perceived as mathematically more beautiful, or as based on assumptions with a higher formal symmetry, or the like, tend to be perceived by most scientists as more probably true than less ‘pretty’ hypotheses. The common psychological illusion that a theory is ‘too beautiful not to be true’ would be a sign of this phenomenon. It is logical to expect, then, that when all the efforts to improve the verisimilitude of a theory through manipulations of the denominator of our formula (i.e., through everything that has to do with ‘*empirical virtues*’) have become unfruitful, scientists start considering with more hope the strategy of assessing theories by non-empirical means, i.e., by resorting to ‘*theoretical virtues*’. But my impression, as well as, I guess, Sabine Hossenfelder’s, is that the gains in epistemic value that can be reached by means of theoretical tinkering are necessarily more limited than those obtained through empirical success: after all, the maximum value p(H) can have is just 1, whereas the value of Vs may be any positive real number. Hence, a new, unexpected empirical discovery might make the epistemic value of some theory jump to a level that would render the effects of such theoretical tinkering negligible. The kiss of the empirical facts is, after all, the only thing that can awaken the dormant princess from her all too enduring sleep.

## References

- Hossenfelder, S. (2018), *Lost in Math: How Beauty Leads Physics Astray*. New York: Basic Books. ↩
- Zamora Bonilla, J. (2013), “Why are good theories good? Reflections on epistemic values, confirmation, and formal epistemology”, *Synthese*, 190(9), 1533–1553. doi: 10.1007/s11229-011-9893-9 ↩