The Precipice by Toby Ord

A few weeks ago I was walking around campus at Brandeis University (where I teach) and a couple of guys were standing outside with a table full of copies of this book. I stopped for a second to take a look, and they asked me if I was into “effective altruism”, or if I had read either 80,000 Hours or Doing Good Better. I had in fact read Doing Good Better a number of years ago and very much enjoyed it, and I was informed that Toby Ord was one of the founders of the “effective altruism” movement and had also helped with the other books. They were handing out free copies of The Precipice, so I figured I would give it a shot.

For starters, I had to remind myself what effective altruism was: as Ord defines it, it is when people use evidence and reason to do as much good as they possibly can with their lives. The Precipice isn’t directly about effective altruism, but it does explore the science behind the risks that we humans face, both now and into the future, and Ord gives his thoughts on the actions and strategies we can use to better safeguard the future of humanity as we know it. The precipice, as Ord defines it, is the period of unsustainably high risk we have already entered. He thinks we are unwilling to confront issues like climate change, nuclear weapons, environmental degradation, and pandemics (to name a few) because we don’t understand the probability and stakes involved if we fail to confront these challenges head-on. Ord brings a good mix of science, data, research, common sense, and human history to his narrative, and, being the philosopher he is (he teaches the subject at Oxford University), he offers his rationale for why understanding these risks matters. He makes clear that he is not a pessimist about humanity but quite the opposite, IF we get our acts together quickly.
Some of the book is a bit too philosophical for my tastes, but looking back at my notes, there are some interesting viewpoints. I think he does a good job of letting the facts speak for themselves, and he is far from preachy or finger-pointy in any way, which is very much a good thing. He is a believer in human goodness and wellbeing, and he simply thinks there is a better path for our future. I think his ideas are worth a look.

  • book argues that safeguarding humanity’s future is the defining challenge of our time
  • humanity lacks the maturity, coordination, and foresight necessary to avoid making mistakes from which we could never recover
  • greatest risks are caused by human action, and they can be addressed by human action
  • existential risks: threaten the destruction of humanity’s longterm potential; extinction is the most obvious way humanity’s entire potential could be destroyed
  • we live in the most important era of human history
  • major reorientation in the way we see the world, and our role in it
  • closing the gap between our wisdom and power
  • effective altruism: people aspire to use evidence and reason to do as much good as possible (author helped to create this term)
  • what set humans apart was not physical but mental — our intelligence, creativity, and language
  • each human’s ability to cooperate with the dozens of other people in their band was unique among large animals.
  • we learned from our ancestors, added minor innovations of our own, passed this all down to our children
  • Agricultural Revolution: reduced the amount of land needed to support each person by a factor of a hundred, allowing large permanent settlements to develop, which began to unite together into states
  • Scientific Revolution: replace a reliance on received authorities with careful observation of the natural world, seeking simple and testable explanations for what we saw
  • idea of progress: working together to build a better future
  • Industrial Revolution: modern era of sustained economic growth; income grew faster than population, ushering in an unprecedented rise in prosperity that continues to this day
  • that so many have so little is among the greatest problems of our time, and has been a major focus of the author’s work
  • every single country nowadays has a life expectancy above 50
  • our planet will remain habitable for roughly a billion years
  • with the detonation of the atom bomb, a new age of humanity began
  • humanity has risen to a position where we control the rest of the world precisely because of our unparalleled mental abilities
  • trend toward increase in the power of humanity which has reached a point where we pose a serious risk to our own existence
  • very fact that these risks stem from human action shows us that human action can address them
  • risks are complex (so not amenable to simple mathematical analysis) and unprecedented (so cannot be approximated by a longterm frequency)
  • unsustainable level of risk — Precipice
  • highest levels of risk, humanity opening its eyes, coming into maturity and guaranteeing its long and flourishing future
  • existential catastrophe: destruction of humanity’s longterm potential
  • existential risk: risk that threatens the destruction of humanity’s longterm potential
  • need to preserve our vast potential, protect it against the risk of future destruction
  • allow our descendants to fulfill our potential
  • our potential is a matter of what humanity can achieve through the combined actions of each and every human
  • even if civilization did collapse, it is likely that it could be re-established
  • civilization has already been independently established at least seven times by isolated peoples
  • absent catastrophe, most generations are future generations
  • recognizing that people matter equally, regardless of their geographic locations, is a crucial form of moral progress; and also recognizing that people matter equally, regardless of where they are in time
  • longtermism: especially concerned with the impacts of our actions upon the longterm future
  • partnership between those that are living, those who are dead, those who are to be born
  • paying it forwards: duties to future generations may thus be grounded in the work our ancestors did for us when we were future generations
  • insight into systematic strengths or weaknesses in humanity’s ability to achieve flourishing
  • humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us
  • economic theory tells us that existential risk will be undervalued by markets, nations, and even entire generations
  • protection from existential risk is a public good: would benefit all of us and my protection doesn’t come at the expense of yours — global public good
  • protection from existential risk is an intergenerational global public good
  • when citizens are empathetic and altruistic, identifying with the plight of others, they can be enlivened with the passion and determination needed to hold their leaders to account
  • availability heuristic: tendency for people to estimate the likelihood of events based on their ability to recall examples
  • with existential risk it fails completely
  • more expansive compassion: acts over the long term
  • scope neglect: lack of sensitivity to the scale of benefit or harm
  • almost all the large asteroids have been tracked
  • no other existential risk is as well handled as that of asteroids and comets
  • willingness to think seriously about imprecise probabilities of unprecedented events is crucial to grappling with risks to humanity’s future
  • supernova: explosion of a star, releases as much energy as our Sun will radiate over its ten-billion-year lifetime
  • premature to conclude that we have discovered all of the possible mechanisms of natural extinction while major mass extinction events remain unexplained
  • detailed fossil record starts 540M years ago with the Cambrian explosion: rapid diversification of complex life into most of the major categories we see today
  • risks are substantially greater now for humans than they were for early humans or related species
  • chief among these is pandemics
  • natural risks are dwarfed by those of our own creation
  • thousand times more anthropogenic risk over the next century than natural risk, so it is the anthropogenic risks that are the focus
  • CO2 release in the industrial period: more has been released since 1980 than in the entire industrial period before that
  • can climate change itself directly threaten our extinction or permanent collapse?
  • most extreme effect is runaway greenhouse effect — driven by the relationship between heat and humidity
  • more vapor in the atmosphere produces more warming, which produces more water vapor, an amplifying feedback
  • runaway greenhouse effect: warming continues until the oceans have mostly boiled off, leaving a planet incompatible with complex life
  • moist greenhouse effect: stops short of boiling the oceans
  • cause for more research on these two issues
  • warming could become unrecoverable if we trigger other feedback effects which release more carbon into the atmosphere, if we emit more carbon ourselves, or if a given amount of carbon causes much more warming than we thought
  • melting arctic permafrost and release of methane from the deep ocean are concerning
  • human activity has already released more than an entire biosphere’s worth of carbon into the atmosphere
  • climate sensitivity is the number of degrees of warming that would eventually occur if greenhouse gas concentrations were doubled from their pre-industrial baseline of 280 PPM
  • major effects don’t necessarily predict extinction or irrevocable collapse
  • when we look at many of the past cases of extremely high global temps or rapid warming we don’t see a corresponding loss of biodiversity
  • obvious is heat stress
  • geoengineering = carbon dioxide removal and solar radiation management — limiting the amount of sunlight absorbed by earth
  • massive unintended consequences
  • amount of food per person has risen over the last 50 years
  • Norman Borlaug invented new, high-yield varieties of wheat and may be responsible for saving more lives than anyone else in human history
  • population is now increasing in a linear manner, with a fixed number of people being added each year instead of a fixed proportion
  • fraction of species that have gone extinct is much lower than a mass extinction
  • toll on biodiversity within each region may be much higher, and this may be what matters most
  • loss of ecosystem services — purifying water and air, providing energy and resources, improving soil, that animals and plants currently do for us, but we may find costly or impossible to do ourselves
  • risks technology imposes on humans have been outweighed by the benefits it has brought
  • Black Death = between 1/4 and 1/2 of all Europeans were killed
  • 5 to 14 percent of the world
  • during the 20th century, fifteen countries are known to have developed bioweapons programs, including the US, UK, and France
  • 100 years of improvements to bioweaponry should alarm us
  • most dangerous escapes are not microbes, but information: information hazards
  • unilateralist’s curse: it takes just one overly optimistic estimate to lead to the information being released
  • information hazards are especially important for biorisk, due to its high ratio of misuse risk to accident risk
  • deep learning gives networks the ability to learn subtle concepts and distinctions
  • concerns about AI entrenching social discrimination, producing mass unemployment, supporting oppressive surveillance, violating norms of war
  • unmatched intelligence led to unmatched power and thus control of our own destiny (humans)
  • specification of which acts and states produce reward for the agent is known as its reward function
  • assumes, with AI, that the builders of the systems are striving to align it to human values
  • AI researchers don’t know how to make a system which, upon noticing this misalignment, updates its ultimate values to align with ours rather than updating its instrumental goals to overcome us
  • transition to a world where humans are no longer the most intelligent entities on earth could easily be the greatest ever change in humanity’s place in the universe
  • enforced dystopia: Orwell, totalitarian state achieving global dominance and absolute control, locking the world into a miserable condition
  • key aspects of the future of civilization are being locked in such that they are almost impossible to change
  • unforeseen risks are thus important to understanding the relative value of broad versus narrowly targeted efforts
  • unaligned AI = greatest risk, in the author’s view
  • Global Burden of Disease = major study, influenced the author’s work in the field; study of global public health
  • if something is a risk factor, its opposite will be a security factor
  • the more a problem is important, tractable or neglected, the more cost-effective it is to work on it, and thus the higher its priority
  • the importance of a problem is the value in solving it
  • focus on risks that are soon, sudden, and sharp
  • we should prioritize sharp risks — those that are less likely to be preceded by warning shots — for they are more likely to remain neglected over the long run
  • most existential risk comes from human action: from activities that we can choose to stop, or to govern effectively
  • humanity’s point of view: what all humans would do if we were sufficiently coordinated and had humanity’s longterm interests at heart
  • a place where existential risk is low and stays low = existential security
  • protect humanity’s potential, preserve it
  • existential security is about reducing total existential risk by as many percentage points as possible
  • visions for humanity would be best realization of our potential: Long Reflection
  • which is the best kind of future for humanity?
  • ultimate aim is fully achieving humanity’s potential
  • first get ourselves to safety — to achieve existential security
  • make decisions of grave importance without access to robust probabilities for the risks involved
  • each nation is inadequately incentivized to take actions that reduce risk and to avoid actions that produce risk, preferring instead to free-ride on others
  • current predicament stems from humanity’s power outstripping the slow and unsteady growth of our wisdom
  • responsible deployment and governance of new technology
  • Montreal Protocol = successful governance, phasing out chemicals that were depleting the ozone layer
  • differential technological development = speeding up the development of protective technologies relative to dangerous ones
  • state risks = asteroids, comets
  • transition risks = arise during a transition to a new technology or social regime
  • more transition than state risks
  • contraceptive pill made possible by a single philanthropist, Katharine McCormick
  • within 100K years, Earth’s natural systems will have scrubbed our atmosphere clean of over 90 percent of the carbon we have released, leaving the climate mostly restored and rebalanced
  • even if most of the intrinsic value of the world lies in the rest of our ecosystem, humanity’s instrumental value may yet be profound
  • planets are other Earths; stars are other suns, many with their own planets
  • 100B suns; our Earth is just one island amidst billions
  • sunlight hitting Earth’s surface each day carries 5,000 times more energy than modern civilization requires
  • gives in two hours what we use in a year
  • solar collectors in orbit around the Sun?
  • humanity’s potential — scope of what we might someday be able to achieve
  • pure time preference: preference for one benefit over another simply because it comes earlier, and it is a third reason for discounting future benefits (economists don’t like)
  • population ethics: increasing someone’s wellbeing is good, reducing it is bad
  • Total View: the moral value of future wellbeing is just the total amount of wellbeing in the future
  • value of having a thousand more generations is a thousand times the value of our generation
  • person-affecting restriction: that an outcome can’t be better than another (or at least not in terms of wellbeing) unless it is better for someone
  • when combining risks … total risk departs more and more from being the sum of the risks; there can be substantial increasing marginal returns if we eliminate more and more risks; much more important to work on the largest risks
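The note on pure time preference is easier to feel with numbers. A quick sketch (my own illustration, not figures from the book) of why even a small annual discount rate makes distant generations count for almost nothing, which is exactly why Ord rejects pure time preference for wellbeing:

```python
def discounted_value(value, rate, years):
    """Present value of a future benefit under a pure time preference rate."""
    return value / (1.0 + rate) ** years

# Even a modest 1% annual rate shrinks future benefits dramatically:
print(discounted_value(1.0, 0.01, 100))   # ~0.37 of face value after a century
print(discounted_value(1.0, 0.01, 1000))  # ~0.00005 after a millennium
```

On this accounting a thousand future generations would be worth a rounding error, which conflicts with the Total View note above that their value is the total of their wellbeing.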
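The last note, that combined risks depart from a simple sum and that eliminating risks shows increasing marginal returns, follows from basic probability. A small sketch (the numbers are my own illustration, not Ord’s estimates), assuming independent risks:

```python
def total_risk(risks):
    """Chance of at least one catastrophe, assuming the risks are independent."""
    survival = 1.0
    for r in risks:
        survival *= 1.0 - r
    return 1.0 - survival

risks = [0.2, 0.2, 0.2]

# Total risk is less than the naive sum of the individual risks.
print(round(total_risk(risks), 3))  # 0.488, not 0.6

# Eliminating risks one at a time: each removal cuts more total risk
# than the previous one did (increasing marginal returns).
remaining = list(risks)
while remaining:
    before = total_risk(remaining)
    remaining.pop()
    print(round(before - total_risk(remaining), 3))
# 0.128, then 0.16, then 0.2
```

The intuition: surviving one risk only matters if we also survive the others, so each risk we remove raises the value of removing the rest.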




I love books, I have a ton of them, and I take notes on all of them. I wanted to share all that I have learned and will continue to learn. I hope you enjoy.

Adam Marks