The brief life of longtermism (& 2)

We saw in the previous entry how the moral philosophy known as longtermism is rooted in the historical trend towards ‘the expansion of the circle of empathy’ so as to include all future human beings (or ‘persons’, we might say, including non-human or post-human persons), and we also commented on some flaws of the theory. We shall now conclude our review with some further criticisms.

To begin with, I also find troubling William MacAskill’s inference from the premise (itself dubious, as we have seen) that it is morally preferable for there to be more not-too-unhappy people in the future than fewer, to the conclusion that we are under an absolute moral obligation to help bring about the largest possible number of future people. One problem is that, in principle, the premise would seem to apply just as well to non-human animals: a few more happy puppies in the world also seems morally preferable to their not existing, since total happiness would increase as a result. MacAskill himself confesses that becoming a vegan at eighteen was one of the best moral decisions of his life, presumably because he thinks it improves animal welfare. Following his argument, then, we would also have to conclude that we should help bring about the greatest possible number of non-human animals in the world from now until the end of the universe. But that conclusion does not seem very appealing, not even to the author himself, who gives no sign of having considered it. So the argument on which it rests cannot be very solid either.

Photo: Fey Marin on Unsplash

Moral judgments as arithmetic propositions

A more fundamental problem with this argument is that the whole philosophical speculation about the hypothetical moral value of future lives rests, at bottom, on the illusion that our moral judgments can be treated as arithmetic propositions (what is known, in more technical terms, as utilitarianism; the arithmetic is spelled out below). But even more serious, to my mind, is the second part of MacAskill’s argument: the thesis that not only does the very, very long-term future possess deep moral relevance for us, but that we can currently do something to ensure, with considerable certainty, that this very, very long-term future will be better rather than worse. According to this author, the two most important things we can supposedly do in that regard are, first, to prevent as far as possible humanity’s complete disappearance (to avert what longtermists call existential risks), and second, to help ensure that in coming generations the ‘right’ moral values (that is, MacAskill’s values) become, so to speak, entrenched in society, so that a reversal of moral ideas would be practically impossible, much as it is now almost impossible for societies to come to regard slavery as acceptable again.
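To make the ‘arithmetic’ point concrete: in its simplest total-utilitarian rendering (a standard textbook formula, not one MacAskill himself writes down), the value of an outcome is just the sum of the welfare levels of everyone who exists in it,

W = \sum_{i=1}^{n} u_i ,

so adding one more individual with positive welfare u_{n+1} > 0 raises W and thereby makes the world ‘better’ by definition, whether that extra individual is a future person or a happy puppy. The puppy objection above simply takes this addition rule at face value.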

To infinity and beyond

In this regard, one of the main flaws of longtermism is that its fixation on “existential risks” is so strong that its adherents consider it far less important to devote resources to other grave and more pressing social problems. Its defenders usually counter that they do in fact also care about many other issues (poverty, school shootings…), and so they are not proposing that we stop trying to solve them. But it is hard to escape the conclusion that, insofar as the resources we devote to these “short-term” problems could instead be redirected to “guaranteeing humanity’s existence for ages to come,” the theory implies that we should stop allocating them to those other ends, however embarrassing longtermists may find it to admit as much. The criticism is similar to the one Peter Singer received when it became known that he was spending a large amount of money on the care of his elderly mother, money with which the lives of hundreds of people in the developing world, or of thousands of animals slaughtered on factory farms, could have been saved. In short: the idea of extending the circle of empathy “to infinity and beyond” is little more than words on a page, with no psychological force when it comes to dealing with the real problems of everyday life.

Intellectual arrogance

Finally, and as I have also suggested, longtermists are guilty of a kind of intellectual arrogance (what the ancient Greeks called hubris) almost as boundless as the imagined expanse of their circle of empathy. The sad truth is that, however much our science, technology, and politics have advanced beyond the limited level they could reach centuries ago, to think that we will be able to steer the future evolution of the human species from some sort of central console in the basements of Oxford is simply to have taken a few science-fiction films far too seriously. In reality, the reason longtermism strikes me as a particularly absurd point of view is that, over a time horizon as long as several thousand years (not to mention several million), the level of uncertainty about the actual consequences of our decisions is so high that, in practice, all our current decisions have virtually the same degree of what utilitarians call expected utility (that is, the average of the ‘utility’, or total welfare, that would arise in each possible scenario, weighted by the probability that the scenario would occur if we made the decision in question; a formula spelled out below).

Keynes said that in economic policy one should not worry excessively about the long term, because “in the long run we are all dead.” Similarly, I think we should not worry much about the very, very long term because, in the very, very long term, we cannot have the faintest idea of what will happen, whatever we do. History, both that of humans and that of living species, is above all the domain of the unpredictable. Designing and successfully carrying out actions whose effects unfold within just a few months or years already demands considerable effort, and far too often we fail even then; that alone should keep us from taking seriously the speculations of a handful of armchair engineers of history. And if, on top of that, the gang proposing such ideas invites you to donate money to their cause (which, surprise!, is exactly what happens in this case: they call it “effective altruism”), the best response is to slam the door in their faces.
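In standard decision-theoretic notation (a textbook formulation, not one the author writes out), the expected utility of a decision d is

EU(d) = \sum_{s} P(s \mid d) \, U(s) ,

where s ranges over the possible scenarios, U(s) is the total welfare realised in scenario s, and P(s \mid d) is the probability that s occurs if decision d is taken. The argument above is that, over horizons of thousands or millions of years, these probabilities are so ill-defined that EU(d) comes out practically the same for every available decision d.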

References

MacAskill, W., 2022, What We Owe the Future, Oneworld Publications.
