
Longtermism is the view that positively influencing the long-term future is a key moral priority of our time. The case for longtermism rests on three simple, related ideas.
- Future people matter.
- The vast majority of people who will ever exist, if Earth-originating intelligence is not prematurely extinguished, will exist in the future.
- People alive today can predictably influence whether these people exist, and how well their lives go.

Max Roser depicted the idea last March in an Our World in Data post, "Longtermism: The future is vast – what does this mean for our own life?": if we manage to avoid a large catastrophe, we are living at the very beginning of human history.
Longtermism has emerged from the Effective Altruism movement over the last decade, and might well turn out to be one of its most important findings so far. Many of the key advances have been made by philosophers who have spent time in Oxford, like Derek Parfit, Nick Bostrom, Nick Beckstead, Hilary Greaves and Toby Ord. Toby Ord's recent book, The Precipice, is an interesting introduction, and the Global Priorities Institute is one of the leading academic references.
This August longtermism seems to be making its general-public debut, with the launch (or the excuse) of a new book, What We Owe the Future by William MacAskill, one of the promoters and current leading figures of the movement. He has been very active, publishing a number of articles in big media outlets such as The New York Times and the BBC. And the critical response has been immediate.
The arguments for and against longtermism are a fascinating new area of research, but what is even more interesting is that it is becoming an area of heated debate (google "against longtermism"). It is especially curious to me because, in my humble experience, ideas about the future, and futures studies in general, are usually regarded by most people as an area of speculation well beyond the priorities and concerns of day-to-day life. However, the idea of looking to the potentially vast (yet nonexistent) population of the future and our responsibilities toward them is feeding a conflict not too different from the one between two neighbouring countries.
I have been following with great interest the Effective Altruism movement and the rise of longtermism for the last seven years, and it is difficult for me to make a short statement on the question. But let me share three preliminary ideas.
- A very long-term vision of possible futures is more necessary than ever. The challenges we face and the solutions we adopt today will have very long-term implications, and it is impossible to make smart decisions without considering the long term.
- Creating a (to a large extent fictitious) conflict between present and future generations may be, judging by the heated debate, a provocative way to ignite the necessary debate and consciousness in the general public and society at large. Beyond that, declaring a new war between us and our future would-be selves is the most preposterous idea I can imagine. As if the autocrats and fools that populate the present were not enough!
- In my particular case, let me be very clear: empathizing with my own alleged species, humans, is not a strong motivation. I do not have any particular reason to believe that a universe populated by (intelligent, evolved) humans will be much better or more fun than one populated by (unconscious) planets and stars, or any other kind of more or less sophisticated being.
Therefore, I have to say I am close to Peter Singer's view. To be as smart as we can and do our best, we have to look into the future as far as possible, but:
> When taking steps to reduce the risk that we will become extinct, we should focus on means that also further the interests of present and near-future people. If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do; and if we are not at that critical point, it will have been a good thing to do anyway.
I suspect, or perhaps want to believe, that the most virtuous path for a species made of extremely short-lived, self-aware "individuals" is one we can only craft with a greedy algorithm.
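For readers unfamiliar with the term: a greedy algorithm takes, at each step, the locally best option without modelling the distant future. A minimal sketch, using the classic coin-change example (the example and values are mine, chosen only to illustrate the metaphor, not drawn from the longtermism literature):

```python
def greedy_coin_change(amount, denominations):
    """Greedy strategy: at every step take the largest coin that still fits.
    Each choice is locally optimal; there is no global, long-range plan."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

# With these denominations the myopic strategy happens to be globally optimal:
print(greedy_coin_change(87, [1, 5, 10, 25]))  # [25, 25, 25, 10, 1, 1]
```

The fit with the metaphor is that a greedy strategy only guarantees a good global outcome under favourable conditions; for other denomination sets (e.g. {1, 3, 4} making 6) the step-by-step best choice is not the best overall, which is exactly the worry about short-lived agents optimizing locally.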
____________________
Featured Image: Stephan’s Quintet from Webb, Hubble, and Subaru. NASA
[…] Longtermists believe (and so do I) that this neglect of the very long-run future is a serious mistake. […]