Longtermism. Take II

Political discussions are normally centered around the here and now, focused on the latest scandal or the next election. When a pundit takes a “long-term” view, they talk about the next five or ten years. With the exceptions of climate change and nuclear waste, and setting aside science fiction writers and futurists, we essentially never think about how our actions today might influence civilisation hundreds or thousands of years hence.

Longtermists believe (and so do I) that this neglect of the very long-run future is a serious mistake.

Longtermism versus strong longtermism

Longtermism is the view that positively influencing the longterm future is a key moral priority of our time. Strong longtermism is the view that impact on the far future is the most important feature of our actions today.

The distinction is relevant, but it is being blurred, as usually happens when ideas reach the “mainstream”.

The modifier in the title of the 2021 paper—“strong longtermism”—is significant. It positioned MacAskill within the longtermist camp that sees protecting the distant future as “the” key moral priority of our time, rather than one of many. It’s noteworthy, too, that MacAskill relegates this distinction to an appendix in his new book, calling longtermism “a” moral priority in the main body. “Strong” longtermism is a tougher sell than EA, or caring about the future as most people understand it. In EA forums, MacAskill has been explicit that “strong longtermism” should be downplayed for marketing reasons, and a soft-serve version offered up instead as a kind of gateway drug.

Alexander Zaitchik, October 24, 2022, The Heavy Price of Longtermism

Existential Risks

With the advent of nuclear weapons, humanity entered a new age in which we face existential catastrophes, those from which we could never come back. Since then, these dangers have multiplied: climate change, engineered pathogens, nanotechnology (and its gray-goo scenario) and artificial intelligence. The study of existential risk has been gradually taking shape for years, with interest growing since the turn of this century, driven by figures like Nick Bostrom.

Our future, and whether we will have a future at all, may well be determined by how we deal with these challenges. In the case of radically transforming technologies, a better understanding of the transition dynamics from a human to a “posthuman” society is needed. Of particular importance is to know where the pitfalls are: the ways in which things could go terminally wrong.

Nick Bostrom, Existential Risks, 2001

Existential risks are those that threaten the entire future of humanity. Many theories of value imply that even relatively small reductions in net existential risk have enormous expected value. Despite their importance, issues surrounding human-extinction risks and related hazards remain poorly understood. In this article, I clarify the concept of existential risk and develop an improved classification scheme. I discuss the relation between existential risks and basic issues in axiology, and show how existential risk reduction (via the maxipok rule) can serve as a strongly action-guiding principle for utilitarian concerns. I also show how the notion of existential risk suggests a new way of thinking about the ideal of sustainability.

Nick Bostrom, Existential risk prevention as the most important task for humanity, 2011
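To get a feel for the claim that “even relatively small reductions in net existential risk have enormous expected value”, here is a minimal back-of-the-envelope sketch. The figures are assumptions chosen only to illustrate the shape of the argument; they are not taken from Bostrom’s paper:

```python
# Illustrative expected-value arithmetic behind the "maxipok" intuition.
# The numbers are assumptions for the sake of the example, not Bostrom's own.

future_value_in_lives = 1e16   # assumed value of a surviving long-term future, in life-equivalents
risk_reduction = 1e-6          # a "relatively small" reduction in extinction risk: one millionth

# Expected value gained by the tiny reduction in risk
expected_gain = risk_reduction * future_value_in_lives
print(f"Expected gain: {expected_gain:,.0f} life-equivalents")  # 10,000,000,000
```

On assumptions like these, shaving one millionth off the probability of extinction is “worth” ten billion life-equivalents in expectation, which is the intuition the maxipok rule trades on.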

According to Bostrom, these are the policy implications:

  • Existential risk is a concept that can focus long-term global efforts and sustainability concerns.
  • The biggest existential risks are anthropogenic and related to potential future technologies.
  • A moral case can be made that existential risk reduction is strictly more important than any other global public good.
  • Sustainability should be reconceptualised in dynamic terms, as aiming for a sustainable trajectory rather than a sustainable state.
  • Some small existential risks can be mitigated today directly (e.g. asteroids) or indirectly (by building resilience and reserves to increase survivability in a range of extreme scenarios) but it is more important to build capacity to improve humanity’s ability to deal with the larger existential risks that will arise later in this century. This will require collective wisdom, technology foresight, and the ability when necessary to mobilise a strong global coordinated response to anticipated existential risks.
  • Perhaps the most cost-effective way to reduce existential risks today is to fund analysis of a wide range of existential risks and potential mitigation strategies, with a long-term perspective.

There are good reasons to think that Bostrom has been, and remains, a strong longtermist.

The risk of extinction, and putting a value on such a potential event, has become a productive area of speculation. This estimate of $600 trillion (an opportunity cost) is from Judge Richard Posner (made in 2004; not sure how recent inflation enters here ;))

I make the extremely conservative estimate, which biases the analysis in favor of RHIC’s passing a cost-benefit test, that the cost of the extinction of the human race would be $600 trillion and that the annual probability of a strangelet disaster at RHIC is 1 in 10 million. (Posner 2006)
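Just to make the arithmetic explicit, the two figures in the quote combine into a simple expected annual cost, the quantity a cost-benefit test like Posner’s would weigh against the benefits of running RHIC. The sketch below only multiplies the numbers quoted above:

```python
# Posner's figures combined into an expected annual cost of a strangelet disaster at RHIC.
cost_of_extinction = 600e12          # $600 trillion
annual_probability = 1 / 10_000_000  # 1 in 10 million per year

expected_annual_cost = annual_probability * cost_of_extinction
print(f"Expected annual cost: ${expected_annual_cost:,.0f}")  # $60,000,000
```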

New academic institutions focused on existential risk have appeared, interestingly clustered around Oxford and Cambridge.

Nick Bostrom is a professor and the Director of the Future of Humanity Institute at the University of Oxford.

The central focus of the Global Priorities Institute, at the University of Oxford, is ‘global priorities research’, i.e. research into issues that arise in response to the question, ‘What should we do with a given amount of limited resources if our aim is to do the most good?’ This question naturally draws upon central themes in the fields of economics and philosophy.

The Centre for the Study of Existential Risk (CSER) is a research centre at the University of Cambridge, founded in 2012 to study possible extinction-level threats posed by present or future technology. The co-founders of the centre are Huw Price (Bertrand Russell Professor of Philosophy at Cambridge), Martin Rees (the Astronomer Royal and former President of the Royal Society) and Jaan Tallinn (co-founder of Skype).

Toby Ord and the Precipice

In the first month of the pandemic, Toby Ord published a book called “The Precipice.” Drawing on over a decade of research, The Precipice explores the cutting-edge science behind the risks we face. It puts them in the context of the greater story of humanity: showing how ending these risks is among the most pressing moral issues of our time.

One of my principal aims in writing this book is to end our neglect of existential risk—to establish the pivotal importance of safeguarding humanity, and to place this among the pantheon of causes to which the world devotes substantial attention and resources.

These are three definitions of existential risk proposed by Owen Cotton-Barratt and Toby Ord in 2015:

  • Definition (i): An existential catastrophe is an event which causes the end of existence of our descendants.
  • Definition (ii): An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.
  • Definition (iii): An existential catastrophe is an event which causes the loss of a large fraction of expected value.

This is his (mostly speculative) concise summary of the risk landscape, clearly dominated by anthropogenic risks:

I’d say Toby Ord is only a longtermist (not a strong one), and this is the fundamental reason why I respect and endorse him.

Protection from existential risk is a public good: protection would benefit us all and my protection doesn’t come at the expense of yours. So we’d expect existential risk to be neglected by the market. But worse, protection from existential risk is a global public good—one where the pool of beneficiaries spans the globe. This means that even nation states will neglect it.

Some other key ideas

Traversing the Garden of Forking Paths More Wisely

A practical implication of longtermist thinking, the very-short-term priority within the very-long-term planning horizon, and related to my fundamental reason for endorsing Toby Ord, is the following:

The challenge is to devise forms of politics and institutions that embed long-term considerations into our everyday comings and goings.

You can find a sensible exposition here:

This chapter discusses the challenges of inserting long-term thinking into the way we deal with current affairs and decision-making. The fact that our decision-making remains mired in short-term considerations is not an accident. Human individuals and societies have built-in constraints — biological, psychological, political and institutional — that make operating in a fully rational and far-sighted manner very difficult. But taking the long term into account is not only preferable, it is imperative due to increasingly mounting catastrophic and existential risks. This chapter considers various political reforms as solutions to short-termism, and argues that while such reforms are imperative, they are also insufficient. What is ultimately needed is a deep cultural change that empowers and even compels us to factor the long term into the very fabric of our lives. To this end, a new cultural tradition is proposed for the whole of humanity, one which reminds us to be custodians of this planet and its long-term potential so that we can together ensure a viable, open, aspirational future.

Hiski Haukkala, in Cargill, Natalie; John, Tyler M. (eds.). The Long View

The Long Reflection

Toby Ord thinks that at the highest level we should adopt a strategy proceeding in three phases:

  1. Reaching Existential Security
  2. The Long Reflection
  3. Achieving Our Potential

If we steer humanity to a place of safety, we will have time to think. Time to ensure that our choices are wisely made; that we will do the very best we can with our piece of the cosmos. We rarely reflect on what that might be. On what we might achieve should humanity’s entire will be focused on gaining it, freed from material scarcity and internal conflict. Moral philosophy has been focused on the more pressing issues of treating each other decently in a world of scarce resources. But there may come a time, not too far away, when we mostly have our house in order and can look in earnest at where we might go from here. Where we might address this vast question about our ultimate values. This is the Long Reflection.

In a way, the Long Reflection is both complement and counterpoint to the current focus on the acceleration of change, exponentiality, the singularity, and the idea that we are living through not just a pivotal moment but the century that, according to Holden Karnofsky, could determine the entire future of the galaxy for tens of billions of years or more (which to me is the mother of all anthropocentrisms).

What not to do. Fanaticism

  • Don’t regulate prematurely
  • Don’t take irreversible actions unilaterally
  • Don’t spread dangerous information
  • Don’t exaggerate the risks
  • Don’t be fanatical
  • Don’t be tribal
  • Don’t act without integrity
  • Don’t despair
  • Don’t ignore the positive

Among those who argue in favor of quantitative comparisons of options and strategies, some think that we should maximize expected value relative to our uncertainty over normative theories. It is natural to object that this kind of strategy gives too much weight to extreme, low-probability theories. This is how Nick Beckstead defines fanaticism:

Fanaticism: Any non-zero probability of an infinitely good outcome, no matter how small, is better than any probability of a finitely good outcome.

Beckstead, Nicholas. ‘On the Overwhelming Importance of Shaping the Far Future’. Rutgers University – Graduate School – New Brunswick, 2013. https://doi.org/10.7282/T35M649T.
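To see why the objection about extreme, low-probability theories bites, here is a toy sketch under straightforward expected-value maximization. The probabilities and values are invented purely to show the structure of the problem; the “infinitely good” case in Beckstead’s definition only makes it worse:

```python
# Toy illustration of the objection: a wildly implausible theory promising an
# astronomically large payoff swamps a near-certain, merely very good option.
# All numbers are invented purely for illustration.

p_modest, v_modest = 0.99, 1e6       # near-certain, modestly good outcome
p_extreme, v_extreme = 1e-12, 1e30   # negligible probability, astronomical value

ev_modest = p_modest * v_modest      # 990,000
ev_extreme = p_extreme * v_extreme   # 1e18

print(ev_extreme > ev_modest)        # True: expected value favours the extreme option
```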

Don’t be fanatical!

Safeguarding our future is extremely important, but it is not the only priority for humanity. We must be good citizens within the world of doing good. Boring others with endless talk about this cause is counterproductive. Cajoling them about why it is more important than a cause they hold dear is even worse.

Toby Ord, op. cit.

MacAskill & Sam Bankman-Fried. Earning to Give & Longtermism

William David MacAskill is a Scottish philosopher and ethicist, and one of the originators of the effective altruism movement. You can find all you wanted to know about MacAskill in a (by now) incredible number of articles and posts published this year alone, on the occasion of the publication of his “What We Owe the Future”, a marketing campaign comparable only to the launch of a big Hollywood movie.

Let me pick just one of them to introduce the Bankman-Fried / MacAskill affair:

When Ord first mentioned existential risk, MacAskill thought that it was a totally crackpot idea. He was uneasy about how it related to his own priorities…

In 2012, while MacAskill was in Cambridge, Massachusetts, delivering his earning-to-give spiel, he heard of a promising M.I.T. undergraduate named Sam Bankman-Fried and invited him to lunch. Bankman-Fried’s parents are scholars at Stanford Law School, and he had been raised as a card-carrying consequentialist.

Gideon Lewis-Kraus, 8 August 2022, The Reluctant Prophet of Effective Altruism

Defining ‘effective altruism’ is a matter of engineering, according to MacAskill.

Effective altruism is about using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis

After graduating from the Massachusetts Institute of Technology in 2014, Sam Bankman-Fried(1) (SBF) decided to try out earning to give. He has made an all-in commitment to longtermism. He is the founder and CEO of FTX, a cryptocurrency exchange, and in July 2021 the Financial Times estimated that he was the world’s richest person under 30.

SBF is one of the most active funders of the longtermist movement and its many different initiatives, through his Future Fund.

Effective altruism, which used to be a loose, Internet-enabled affiliation of the like-minded, is now a broadly influential faction, especially in Silicon Valley, and controls philanthropic resources on the order of thirty billion dollars.

All in all, I would say MacAskill is above all a long opportunist.

At the end of the day, it’s all about money.

Growth at All Costs?

Very interestingly, the strong longtermism (opportunism) of MacAskill aligns with strong growth (growth at all costs).

“The future could be very big,” according to MacAskill. “It could also be very good—or very bad.” The good version, he argues, requires us to maintain and accelerate economic growth and technological progress, even at great cost, to facilitate the emergence of artificial intelligence that can, in turn, scale growth exponentially to fuel cosmic conquest by hyperintelligent beings who will possess only a remote ancestral relationship to homo sapiens. https://newrepublic.com/article/168047/longtermism-future-humanity-william-macaskill

Alexander Zaitchik, October 24, 2022, The Heavy Price of Longtermism

I hope to have the opportunity to write in detail about the question of degrowth in the not-too-distant future 😉

Putting it all together. Longtermism, Existential Risk and Effective Altruism.

This is from Haydn Belfield at the Effective Altruism Forum:

Stay tuned. This story will go on; it’s a very long one!

____________________

(1) Bankman-Fried is one of those names that make you think twice.

Featured Image: Pieter Bruegel the Elder, The Conversion of Paul (1567)
