I was recently thinking about extinction risk and (extremely) small probabilities, and I came up with a concept I wanted to share. I tentatively call it ‘the problem of collective ruin’, after the gambler’s ruin. Maybe this idea already exists, but I haven’t seen it applied to extinction risk yet, although the unilateralist’s curse comes close.
In the gambler’s ruin, the problem is as follows (from Wikipedia):
a persistent gambler who raises his bet to a fixed fraction of bankroll when he wins, but does not reduce it when he loses, will eventually and inevitably go broke, even if he has a positive expected value on each bet.
For example, imagine someone continuously playing “triple or nothing” against an opponent who never has to stop.[1] What this means is that local expected-value maximisation can be the wrong choice because of the presence of an absorbing state: once you’re broke, you can’t continue gambling.
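As an illustration, here is a minimal simulation sketch in Python. The parameters are assumptions for the sake of the example, not from the post: each gambler stakes the entire bankroll on “triple or nothing” with a win probability of 0.4, so every bet has a positive expected value (0.4 × 3 = 1.2 times the stake), yet ruin is all but certain.

```python
import random

WIN_PROB = 0.4        # assumed per-bet win probability; 0.4 * 3 > 1, so EV > 0
N_GAMBLERS = 100_000  # independent gamblers to simulate
MAX_ROUNDS = 1_000    # safety cap on rounds per gambler

ruined = 0
for _ in range(N_GAMBLERS):
    bankroll = 1.0
    for _ in range(MAX_ROUNDS):
        if random.random() < WIN_PROB:
            bankroll *= 3.0  # triple ...
        else:
            bankroll = 0.0   # ... or nothing: the absorbing state
            break
    if bankroll == 0.0:
        ruined += 1

# Surviving k rounds has probability 0.4**k, which vanishes as k grows,
# so virtually every gambler is ruined despite the positive per-bet EV.
print(f"fraction ruined: {ruined / N_GAMBLERS:.4%}")
```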
In the gambler’s ruin, the decisions happen in sequence and there is perfect information between the decision-makers (there is only one). However, we can easily imagine this as a parallel decision for multiple decision-makers, where we replace “money” with utility and make the absorbing state existential catastrophe (which is by definition[2] an absorbing state).
Let’s consider the following stylized decision dilemma between two options:
- Option A: Yields a tangible benefit but has a 1-in-a-million probability of leading to existential catastrophe.
- Option B: Do nothing.
To an individual decision-maker, it’s tempting to discount the extremely low probability of A causing extinction. We know from behavioral economics that, in practice, humans tend to treat small probabilities as non-existent.[3] Furthermore, humans tend to treat extremely large values (like the value at stake in an existential catastrophe) as ‘merely large.’ However, if millions of decision-makers make this choice, option A is obviously not a good idea (exceptions notwithstanding[4]): as the number of such choices grows, the probability of extinction asymptotically approaches 1. The sketch below makes this concrete.
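The arithmetic behind that claim, as a quick sketch (the decision counts below are illustrative assumptions): if n decision-makers independently choose option A, each carrying a 1-in-a-million catastrophe probability p, the chance that at least one of them triggers the catastrophe is 1 − (1 − p)^n.

```python
# P(at least one of n independent option-A choices triggers the catastrophe)
# is 1 - (1 - p)**n, which approaches 1 as n grows.
p = 1e-6  # per-decision catastrophe probability from option A

for n in (1_000, 1_000_000, 10_000_000):  # illustrative decision counts
    print(f"n = {n:>10,}: P(catastrophe) = {1 - (1 - p) ** n:.4f}")

# n =      1,000: P(catastrophe) = 0.0010
# n =  1,000,000: P(catastrophe) = 0.6321  (about 1 - 1/e)
# n = 10,000,000: P(catastrophe) = 1.0000  (to four decimal places)
```

Note that with exactly one million such decisions the probability is “only” about 63%; “asymptotically approaches 1” refers to the limit as the number of decisions keeps growing.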
Of course, real-life options would also need to take into account the small probability of lowering extinction risk. Many near-term benefits have positive flow-through effects. For example, consider the tangible benefit of improving the happiness of a friend. Perhaps this will lead your friend to take actions that affect the probability of extinction. In that case, it’s unclear whether risking the 1-in-a-million chance of extinction was worth it.
Furthermore, this is essentially a Prisoner’s Dilemma/Tragedy of the Commons with many decision-makers, except that now we’ve explicitly added an absorbing state. Anyway, the lesson to learn from this seems rather difficult to apply, but here it is:
“Don’t do anything that could increase the risk of existential catastrophe, by however small a probability, if it has no obvious positive flow-through effects to outweigh the risk.”
Good luck with that!
1. This is similar to the St. Petersburg paradox.
2. Often defined as “risks of processes or events which could permanently curtail the potential of humanity” (Bostrom, 2016). See here. Its being an absorbing state is what makes people so worried about existential risk, as opposed to other long-term trajectory changes.
3. As demonstrated in Tversky and Kahneman’s work on prospect theory.
4. An exception would be if one sees ‘existential catastrophe’ as a better outcome than whatever else happens (suffering, et cetera).