If I had to describe the moral theory I have the most credence in, it would be close to classical utilitarianism. The only property I am confident has intrinsic value is subjective experience (thus, I approximate a moral hedonist). I believe pain and pleasure can be measured objectively, although we do not yet have the tools to do so. I also believe that suffering and happiness can be represented symmetrically on a single scale: there is no fundamental reason to prioritize preventing suffering over increasing happiness. Furthermore, I believe we can simply aggregate the value of all subjective experiences and that it is best to maximize this aggregate value.
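To make this aggregation claim concrete, here is a minimal formalization. It is only a sketch: the hedonic-intensity function $h_i$ and the single symmetric scale around zero are assumptions of my view, not established measures.

$$
V \;=\; \sum_{i \in S} \int h_i(t)\, dt
$$

Here $S$ is the set of all sentient beings and $h_i(t)$ is the hedonic intensity of being $i$ at time $t$, negative for suffering and positive for happiness. The classical-utilitarian claim is then that we should act so as to maximize $V$. The symmetry assumption is built in: one unit of suffering is exactly offset by one unit of happiness.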
Despite my skepticism towards it, I recently attended a retreat focused on suffering-focused ethics. This is the new “umbrella term for views that place primary or particular importance on the prevention of suffering,” which also encompasses negative utilitarianism and, by implication, anti-natalism. Although the retreat did not relieve my skepticism about these views, I found it very valuable: I learned a lot and made new allies in the mission to improve the long-term future. I think the suffering-focused ethics community has something to offer classical utilitarians, simply because it offers a different perspective from smart, altruistic people on a similar set of problems. This can help us uncover blind spots, make some ideas more salient, and clarify the reasons we have for our beliefs. [1] In addition, I think epistemic modesty and moral cooperation are two more reasons to pay attention to the ideas of this community. Let me share my key takeaways here.
In the modern world, extreme suffering (probably) outweighs extreme happiness in prevalence, duration, and intensity
I think the amounts of small and medium pleasures (e.g. enjoying a great meal) and displeasures (e.g. suffering from an injured shoulder) are probably within the same order of magnitude. However, as we move towards the extreme ends of the spectrum, I think the picture gets more skewed. States of extremely positive wellbeing, such as bliss, seem not only less common and shorter-lived than intense suffering, [2] but also unable to match the intensity of extreme suffering. Thus, currently there seems to be an empirical asymmetry between extreme suffering and extreme wellbeing.
Many people (within EA) appear to share this view: one poll asked for people’s tradeoffs between 1 day of drowning in lava and x duration of bliss. Although the poll has many methodological problems (and at least 10% of the respondents agree with me), it does illustrate the intuition that intense suffering is (currently) more intense than intense bliss.
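To see what such tradeoffs imply under the symmetric-scale view from the introduction, consider a worked example (the numbers here are hypothetical, not the poll’s actual results). If a respondent requires $x$ years of bliss to offset 1 day of drowning in lava, the implied intensity ratio is

$$
\frac{I_{\text{lava}}}{I_{\text{bliss}}} \;=\; \frac{x \text{ years}}{1 \text{ day}} \;=\; 365x
$$

so an answer of, say, $x = 10$ years implicitly rates the lava experience as roughly 3,650 times as intense as bliss.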
Given this, I think it is plausible that relieving intense suffering should be the highest altruistic priority for those focused on the near term (e.g. the next 100 years), and it is an open question what that implies for cause selection.
Nonetheless, this does not say much about the distribution of sentient states in the long-term future. If it is possible to ‘program’ consciousness, then such asymmetries can likely be removed from the source code of digital beings (whether they will be removed is another question). The empirical asymmetry would then be an example of the horrors of biological evolution best left behind. Yet, if this asymmetry is the result of competitive evolutionary pressures, that might be a reason to expect competitive multipolar scenarios [3] to be very negative in value.
The future is not guaranteed to be positive
Perhaps the simplest lesson is that the future is not necessarily positive in value. An overly simplistic view of x-risk reduction is ‘we should reduce extinction risk, because then there will be an awesome future ahead of us.’ Instead, we should state a more nuanced view of longtermism, for example in the following way:
‘the future can be enormously vast, we can probably influence the long-term future, and we should try to make it go as well as possible. Even if we expect the future to be negative, we should still try to influence it for the better.’
I think this framing also makes longtermism much more persuasive to people who know little about population ethics (basically everyone!). If we want to gather broad support for longtermism, I think this is a much more promising strategy than trying to convince people that the future will be close to utopian.
Furthermore, it gives rise to interesting questions like ‘how valuable do we think the future will be (in expectation)?’, ‘do we even think the future is net positive?’, and ‘how much of the expected value of the future is influenced by the value of “optimized” scenarios?’ (i.e. scenarios in which some process is optimizing for something that is (dis)valuable). But those are questions for another time.
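Although a full treatment is for another time, the last question can at least be made precise with standard expected-value notation (a sketch; the scenario set and probabilities are placeholders, not estimates):

$$
\mathbb{E}[V] \;=\; \sum_{s} P(s)\, V(s)
$$

where $s$ ranges over possible long-term scenarios, $P(s)$ is our credence in scenario $s$, and $V(s)$ is its value. If ‘optimized’ scenarios have $|V(s)|$ many orders of magnitude larger than that of other scenarios, they can dominate this sum even at small probabilities, which is what makes the question important.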
Our understanding of wellbeing and consciousness is very limited
It might go without saying, but we still know very little about consciousness and what wellbeing consists of. Sure, we have reliably identified many factors that make humans feel good and bad, but what are the dimensions of emotions? Are positive affect and negative affect even on the same scale? There is no scientific consensus about this. And isn’t happiness always about something? Should it be about something? What would this something be? How do we compare the value of ecstasy to the equanimity of a monk? How can we create a reliable measure of valence? What are the components of valence, and is valence the only thing that matters? The more I think about this, the more confused I get.
Given that I, like many other EAs, am trying to optimize for subjective experience, it is humbling to understand so little about it. Nonetheless, that doesn’t mean we don’t know what to do. In the case of longtermism, a good heuristic is still ‘prevent very bad irreversible things until we are wise enough to figure out what to actually promote’ (also known as ‘reduce existential risk’).
Staring into the darkness of this world can be both empowering and sobering
Lastly, during the retreat I noticed a more visible role for empathy than at other EA events. People seemed very aware of all the suffering in the world, and strongly focused on their goal of reducing and preventing it. Now, such a negative focus can be detrimental: it can feel demotivating, overwhelming, and depressing. However, I think there can also be a lot of value in acknowledging the darkness of this world, whether you believe it’s only a small patch or a vast expanse. It is empowering: there is a strong and urgent pull to act now, and I think a focus on the darkness in this world provides a solid defense against value drift. It is also sobering: the world ‘ain’t all sunshine and rainbows’, and it can get much worse yet.
In summary, classical utilitarians can learn quite a bit from the suffering-focused ethics community. Extreme suffering currently seems more prevalent and more intense than extreme happiness; the future is not necessarily positive; we still know very little about consciousness and wellbeing; and acknowledging the suffering in this world can be a powerful and reliable source of motivation. In our mission to maximally improve the world, we should welcome this diversity of perspectives.
Footnotes
1. It seems to have already made salient concepts such as suffering risks, wild-animal suffering, and pain relief – although the last might come from simply focusing on subjective wellbeing rather than from prioritizing the prevention of suffering.
2. My reasons for believing that states of extremely positive wellbeing are rare and short are based on intuitions about what people commonly experience, as well as on evolutionary considerations: positive states seem to be associated more with ‘obtaining something evolutionarily useful’, and once it is obtained, the emotion serves no further function. Negative states, on the other hand, seem to correspond to ‘something evolutionarily harmful to be relieved’, which can presumably persist much longer. However, I am still significantly uncertain and confused about these beliefs, so take them with a grain of salt.
3. By multipolar scenarios I here mean ‘stable scenarios in which we have achieved some form of superintelligence and power is distributed over multiple agents with different goals’.