What I, a classical utilitarian, learned from suffering-focused ethics

If I had to describe the moral theory I have the most credence in, it would be close to classical utilitarianism. The only property with intrinsic value I’m confident about is subjective experience (thus, I approximate a moral hedonist). I believe pain and pleasure can be measured objectively, although we do not yet have the tools to do so. I also believe that suffering and happiness can be represented symmetrically on a scale: there is no fundamental reason to prioritise preventing suffering over increasing happiness. Furthermore, I believe we can simply aggregate the value of all subjective experiences, and that it’s best to maximize this aggregate value.

Despite my skepticism towards it, I recently attended a retreat focused on suffering-focused ethics. This is an “umbrella term for views that place primary or particular importance on the prevention of suffering,” encompassing negative utilitarianism and, by implication, anti-natalism. Although my skepticism about these views was not relieved, I found the retreat very valuable: I learned a lot and I made new allies in the mission to improve the long-term future. I think the suffering-focused ethics community has something to offer to classical utilitarians, simply because it offers a different perspective from smart altruistic people on a similar set of problems. This can help us to uncover blind spots, make some ideas more salient, and clarify the reasons we have for our beliefs.1 In addition, I think epistemic modesty and moral cooperation are two more reasons to pay attention to the ideas of their community. Let me share my key takeaways here.

In the modern world, extreme suffering (probably) outweighs extreme happiness in prevalence, duration, and intensity

I think the amounts of small and medium pleasures (e.g. enjoying a great meal) and displeasures (e.g. suffering from an injured shoulder) are probably within the same order of magnitude. However, as we go towards the extreme ends of the spectrum, I think the picture gets more skewed. States of extremely positive wellbeing, such as bliss, seem not only less common and shorter than intense suffering,2 but also unable to outweigh the intensity of extreme suffering. Thus there currently seems to be an empirical asymmetry between extreme suffering and extreme wellbeing.

Many people (within EA) appear to share this view: this poll (results in figure) asked people’s tradeoffs for 1 day of drowning in lava vs. x duration of bliss. Although there are many methodological problems with this poll (and at least 10% of the respondents agree with me), it does show people’s intuitions that intense suffering is (currently) more intense than intense bliss.

Given this, I think it is plausible that relieving intense suffering should be the highest altruistic priority for those focused on the near-term (e.g. within the next 100 years), and it is an open question what that implies for cause selection.

Nonetheless, this does not say much about the distribution of sentient states in the long-term future. If it is possible to ‘program’ consciousness, then such asymmetries can likely be removed from the source code of digital beings (whether they will be removed is another question). The empirical asymmetry would then be an example of the horrors of biological evolution best left behind. Yet, if this asymmetry is the result of competitive evolutionary pressures, this might be a reason to expect competitive multipolar scenarios3 to be very negative in value.

The future is not guaranteed to be positive

One of the simplest lessons is perhaps making salient that the future is not necessarily positive in value. An overly simplistic view of x-risk reduction is ‘we should reduce extinction risk, because then there will be an awesome future ahead of us.’ Instead, we should state a more nuanced view of longtermism, for example in the following way:

‘the future can be enormously vast, we can probably influence the long-term future, and we should try to make it go as well as possible. Even if we expect the future to be negative, we should still try to influence it for the better.’

I think this framing also makes longtermism much more persuasive to people who know little about population ethics (basically everyone!). If we want to gather broad support for longtermism, I think this is a much more promising strategy than trying to convince people that the future will be close to utopian.

Furthermore, it gives rise to interesting questions like ‘how valuable do we think the future will be (in expectation)?’, ‘do we even think the future is positive?’ and ‘how much of the expected value of the future is influenced by the value of ‘optimized’ scenarios?’ (i.e. scenarios in which some process is optimizing for something that is (dis)valuable). But these questions are something to go into another time.

Our understanding of wellbeing and consciousness is very limited

It might go without saying, but we still know very little about consciousness and what well-being consists of. Sure, we have reliably identified a lot of factors that make humans feel good and bad, but what are the dimensions of emotions? Are positive affect and negative affect even on the same scale? There is no scientific consensus about this. And isn’t happiness always about something? Should it be about something? What would this something be? How do we compare the value of ecstasy to the equanimity of a monk? How can we create a reliable measure of valence? What are the components of valence, and is valence the only thing that matters? The more I think about this, the more confused I get.

Given that I, like many other EAs, am trying to optimize for subjective experience, it’s humbling to understand so little about it. Nonetheless, that doesn’t mean we don’t know what to do. In the case of longtermism, a good heuristic is still ‘prevent very bad irreversible things until we are wise enough to figure out what to actually promote’ (also known as ‘reduce existential risk’).

Staring into the darkness of this world can be both empowering and sobering

Lastly, during the retreat I noticed a more visible role of empathy than at other EA events. People seemed very aware of all the suffering in the world, and strongly focused on their goal of reducing and preventing suffering. Now, such a negative focus can be detrimental: it could feel demotivating, overwhelming, and depressing. However, I think there can also be a lot of value in acknowledging the darkness of this world, whether you believe it’s only a small patch or a vast expanse. It is empowering: there is a strong and urgent pull to act now, and I think a focus on the darkness in this world provides a solid defense against value drift. It is also sobering: the world ‘ain’t all sunshine and rainbows’ and it can be much worse yet.

 

In summary, classical utilitarians can learn quite a bit from the suffering-focused ethics community. Intense suffering seems more intense and prevalent than intense happiness; the future is not necessarily positive; we still know very little about consciousness and wellbeing; and acknowledging the suffering in this world can be a powerful and reliable motivation. In our mission to maximally improve the world, we should welcome this diversity of perspectives.

Doing Good Science: More Than Good Methods

Epistemic status: initial thoughts, but probably incomplete. But I prefer sharing incomplete but useful thoughts over not sharing them at all. I think these thoughts roughly generalize to non-scientific research (e.g. industry’s R&D or philosophy).

Recently somebody at a dinner asked me what I think is effective science (that is, science that is maximally good for the world). I had some initial thoughts, but the question kept simmering in my mind. Here I set out an initial model of what I think is good science. Numerous people before have thought about this question and I am probably ignoring a lot of their work. However, I hope this still adds something useful. This question is important for doing good science, evaluating good science, and funding good science.

I think this problem is important, neglected, and tractable. It is important because scientific developments compound: they will affect the trajectory of the future and obtaining new insights sooner improves decision making. I think it is neglected as most granting institutions seem to adopt a morally partial view (i.e. they favour a specific group such as Dutch citizens, or a specific topic such as history of philosophy). Impartial granting and evaluation is done, but only exceptionally. I think the problem is relatively tractable, although it is one of the most complex ways in which one can do good. Science has a long distance to actual impact (in comparison to e.g. planting trees or making policy), so there are many paths it can take towards impact. However, my impression is that we can evaluate the value of research much better than ‘total ignorance’, even if we cannot single out the best possible research.

I want to define ‘Good Research’ as good questions with good answers that are used to improve the world. In formula form, this looks as follows:

Value of research =
quality of the question * quality of the answer * quality of use

Note that the value of research is a product, not a sum. This means that scoring low on even one dimension (e.g. use) drastically reduces the total value of the research. This multiplicative structure (related to the Anna Karenina Principle) typically gives rise to a heavy-tailed distribution of outcomes, even if the values of the three factors themselves are normally distributed. This can be shown in two different ways, so I’ll just draw both. The point here is to get all relevant factors right.
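The heavy-tail effect of multiplying factors can be checked with a quick simulation. This is a sketch of my own (the factor distributions are made up for illustration, not taken from the post): each project gets three roughly normal factor scores, and their product ends up right-skewed, with a small fraction of projects accounting for an outsized share of total value.

```python
import random
import statistics

# Illustrative simulation: multiply three independent factor scores
# (question, answer, use) and look at the shape of the resulting values.
random.seed(0)

def project_value():
    # Each factor ~ Normal(1.0, 0.3), clipped at 0 so scores stay non-negative.
    question, answer, use = (max(0.0, random.gauss(1.0, 0.3)) for _ in range(3))
    return question * answer * use

values = sorted(project_value() for _ in range(100_000))

mean = statistics.fmean(values)
median = values[len(values) // 2]
top_1pct_share = sum(values[-1000:]) / sum(values)

print(f"mean/median ratio: {mean / median:.2f}")        # > 1, i.e. right-skewed
print(f"top 1% share of total value: {top_1pct_share:.1%}")  # far above 1%
```

The point is not the specific numbers but the shape: summing the factors would give a symmetric distribution, while multiplying them concentrates value in the tail.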

I should also note that each factor can take a negative value, though I don’t include that in my model for now. This means that even seemingly good research can have negative consequences. Responsible scientists need to realize that, and take the necessary steps to avoid scoring negatively on any factor wherever possible.

I also believe that the limited attention that has gone into evaluating good science has gone mostly into the quality of answers: making sure they are reliable (e.g. they replicate) and valid (i.e. they actually capture the phenomenon they are meant to capture). Some attention has also gone into science implementation, e.g. through collaborations with industry, government, and civil society. I suspect the least attention has gone into making sure we ask the right questions. I will now go into each factor, flagging some initial considerations and breaking the factor up further.

Quality of the question

Richard Hamming is (in)famous for asking many scientists “what is the most important problem in your field?”. After getting an answer, he asked “and why are you not working on it?” Unsurprisingly, not everyone liked Hamming. However, his observation is on point: many scientists do not work on the most important problems in their field. Hamming acknowledged that asking the right questions is a skill that needs years of cultivation (see this talk of his), and requires being connected to one’s own field, other fields, and to society. Without (the right) feedback one shouldn’t expect to figure out the most important questions to ask. (This is also why I think being involved in the effective altruism community is so important for me.) However, where Hamming focused on problems that were scientifically important, I (and I suppose many readers with me) care about more than scientific progress: I want it to benefit the lives of humans and other sentient beings.

This brings values into the picture. I regard one of the main contributions of philosophy of science in the 20th century to be the insight that science is value-laden, not value-neutral. Consequently, scientists need to engage in practical ethics to decide what is most important: is it the wellbeing of humans, of all sentient beings, reducing global inequality, reducing existential risk, promoting biodiversity, or something else? Given that some moral views (especially total utilitarianism, which I find very plausible) claim that there are orders of magnitude of difference in value between these things, it is important to get them right. If you think that we already have our academic priorities aligned with our moral priorities, I present you this graph from Nick Bostrom (2013):

Other components of question quality might include its scientific importance: working out the details of one of many theories does not move the field forward as much as questioning the assumptions of a paradigm, which has huge scientific implications (e.g. Kahneman & Tversky questioning the assumptions of classical economics). I have heard others say that some people are especially good at this: newcomers and scientists from other fields. Interdisciplinary research also seems neglected: applying insights to new areas is something not many people are able to do. There are possibly other factors I have not identified here.

Quality of the question =
moral importance * scientific importance * ..?

Quality of the answer

This is a factor I am much less qualified to talk about than other scientists and philosophers of science are. Answer quality includes factors like reliability and validity, and can be improved by individual creativity and rigor, and by systems such as preregistration to avoid p-hacking and the file-drawer effect, and other methods to avoid false positives. The formula might look as follows, but I’m significantly uncertain about it:

Quality of the answer =
reliability * validity * generalizability * simplicity

Quality of use

Research is worthless if it always stays within academia. It needs to positively affect the world. Insights will generally be used outside of academia by government, industry, or civil society. However, science can also indirectly influence the world by influencing other science; I suspect this is how most science impacts the world. To connect insights to actors, the insights need to be accessible. Initiatives to improve accessibility include open science, science outreach to the general public, and other translational work such as literature reviews.

Moreover, quality of use is also not value-neutral. A policy can be implemented very rigorously and efficiently to achieve a worthless, or even an abominable, goal. Effective scientists therefore need to ensure their research is used by competent actors with the right values. An approximation of the quality of use is as follows:

Quality of use =
accessibility of answer * values of user * quality of implementation

Conclusion

Doing good with science is complex and neglected, but tractable. I have set out a very rough, incomplete, and partially incorrect model to make sense of this. I encourage others to do further work on this and would be happy to contribute further.

My preliminary model of impactful science looks like this, and is subject to change:

How to learn from our mistakes: recognize, admit, analyze, address

It’s common to hear that we should learn from our mistakes, but it’s rare to hear how we’re supposed to do that. I made a mistake this week, so let’s use that as an example to derive some steps to learn from our mistakes:

This week I had intended to work on my thesis every morning, but I procrastinated a lot. I was starting a new section but did not know where to start (“don’t know where to start” is a perfect cue for my procrastination). After I realized I had made a mistake – it took me a couple of days – rather than just feeling bad, I started looking for clues (with the help of a roommate). I had been waiting for feedback on my first section’s draft, the basis for the second section. What’s more, I suddenly remembered I had had this problem before! I’m never productive when I’m waiting for feedback, because I don’t really know what to do: should I move on, should I wait? I resolved to spend my feedback period on exploration: reading related books or articles to get new ideas, get a broader perspective, and remember why I found the topic interesting to start with. Let’s see if that works.

Let’s dive into it.

Step 0. Recognize

We don’t always recognize that we’re making a mistake, and there’s no mistake to learn from if we don’t recognize it. This is a sense that can be developed and is highly worth investing in! For example, at first I did not recognize that “feeling bad” was the consequence of something I could have handled differently. You can also set up your environment to recognize mistakes more easily. Recently I started a weekly conversation with an accountability buddy to set goals for the next week and analyze how the pursuit of last week’s goals went. This goes both ways, and it’s guaranteed to surface some mistakes. More informally, just talking to friends about things that are going badly helps as well!

Step 1. Admit

We need to admit we made a mistake. This applies not only to mistakes with consequences for others, but also to personal mistakes that only affect ourselves. It’s important to focus on the behavior (“I made a mistake”) rather than on the person (“I am a bad person”). Why focus on the behavior rather than the person? Mostly because psychologists think it’s best. If I have to give my own reason, I think that focusing on the behavior creates a dynamic mindset (“things change, so I can change things”), while focusing on the person creates a static mindset (“things are as they are, no way to do anything about it”). This is probably not the full picture, but the heuristic of focusing on the behavior, not the person, seems useful.

Admitting seems logical, but it’s not that easy. It requires breaking out of shame. It’s easier to ignore it (“just forget about it”) or even deny it was your mistake. Ignoring a mistake is very tempting once the consequences have been dealt with. However, if the cause is still there, we’ll likely make the same mistake in the future. Dealing only with the consequences is often not a long-term solution.

Step 2. Analyze

Once a mistake is admitted it becomes easier to look at it; the sting is out of it. Mostly you’re not going to sit down and write an analysis report about your mistake (although if you journal, you might). However, some good questions to ask are: what happened? Why did it happen? Has this happened before? Could I have seen it coming? Have I talked to someone about this earlier and what did they say? Look for patterns.

Step 3. Address

A problem can be addressed on different levels. The more general a solution, the more worthwhile it is to spend time thinking about it and developing it. If the mistake is specific (e.g. “I spent too much time on Facebook’s Newsfeed”) develop a micro-solution (downloading a newsfeed blocker). A medium-level problem (“when waiting for feedback, I don’t know what to do”) requires a less specific solution (“go explore related content”). A highly generalized version of the problem is that I need alternative strategies when I am not directly productive. I actually already have a response to this: I ask myself “what can I do now that will make me more productive in the future?” Depending on the situation this means I could do sports, do little tasks that I otherwise would have to do later, organize my stuff, or read related content. The challenge with general solutions is knowing when to apply them, and applying them enough.

Optional step 4: share

Talking about mistakes and our solutions on the different levels helps others deal with (or recognize) their mistakes too! Promoting a culture of sharing helps everyone move forward quicker. As the promo-hipsters on Facebook say: “sharing IS caring <3”

Conclusion

To summarize, we can learn from our mistakes and fix problems, but clear steps are better than vague advice. Set up systems to recognize a mistake, admit that you made it (focus on behavior), analyze the problem, and address the problem on the appropriate levels. Now go make a mess!


NB: Not every mistake has a cause that should be a priority to address. Sometimes there are more important things in life and the appropriate reaction is “well, that’s just how it is for now, sorry, but deal with it”. However, this cannot be a permanent attitude to a problem.

NB2: I know I don’t refer a lot and that’s bad form. I often base my ideas on earlier reading but don’t know exactly which ideas to attribute to which background. I took inspiration from James Clear (specifically Treat failure like a scientist), Carol Dweck’s growth mindset, and Brené Brown’s The Power of Vulnerability.

Everything follows the path of least resistance

Epistemic status: feeling strongly I am onto something, but also confused how to apply it to all cases. I describe my ideas in my own words, and am not communicating them clearly. Given the strength of my claim, it’s probably wrong. But let me defend it anyway.

Everything follows the path of least resistance. And when I say “everything” I don’t mean “most things”, I mean everything.

People who are lazy are sometimes described as taking the path of least resistance. Of course, I agree (because I believe that everything takes the path of least resistance). However, this implies that hard-working, conscientious people are taking a more difficult path, eschewing the path of least resistance. That’s wrong. Hard-working people have done at least one of two things, and probably both:

  1. They have made the hard path easier. Going to work an hour early has become habitual, or asking difficult questions when they are confused has become habitual.
  2. They have made the easy path(s) harder. They feel negatively about being lazy, are afraid of being judged, or simply don’t know how to be lazy. It’s not a habit.

Why does water flow downward and through the valley? It’s the path of least resistance. (And interestingly, just like with habits, the more water has flowed somewhere previously, the more likely that path will be the path of least resistance for new water.)

Why do people, when confronted with their own immoral behavior, often change their belief rather than their behavior? Because it’s the path of least resistance.

Why is it so hard to stick with very hard problems? Because it’s a high-resistance path. Why have some exceptional researchers been able to really focus on the hard problems? Because they have made that path easier, and have erected barriers to the other paths.

Why is it so hard to change organizations? Because sticking with habits is the path of least resistance. To change organizations, you must create resistance towards the current state of being and create belief that the change is not so hard after all.

Now there remain at least three questions:

  1. Why does everything follow the path of least resistance?
  2. Is there not random movement, not following anything?
  3. What about payoffs? Surely a hard path with a good payoff will be taken.

I think the answers to these questions are related to evolution. Yes, there is random movement. But everything faces a selection pressure: animals will mate, ideas will spread. The selection favors those taking the path of least resistance, because they will be most successful: the most offspring, the most energy left to do other valuable things. Regarding payoffs: for people, a higher payoff makes a path more attractive (and thus lowers its resistance). For things without intent, the expected payoff (probability of achieving * actual payoff if achieved) will determine attractiveness over large enough samples.
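To make the expected-payoff comparison concrete, here is a tiny worked example (the numbers are hypothetical, chosen only to illustrate the formula):

```python
# Expected payoff = probability of achieving * actual payoff if achieved.
# A path can be harder (less likely to succeed) and still win in expectation.
def expected_payoff(p_success: float, payoff: float) -> float:
    return p_success * payoff

easy_path = expected_payoff(0.5, 10)    # likely, modest reward
hard_path = expected_payoff(0.25, 200)  # unlikely, large reward

print(easy_path, hard_path)  # 5.0 50.0 -- the hard path wins in expectation
```

On this view, a sufficiently large payoff lowers a path’s effective resistance, which is why hard paths do sometimes get taken.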

If I’m right, this implies that a global optimum cannot be reached without changing the landscape. You have to erect barriers to the easier optima, and pave the way to the global optimum. More practically, I think this can serve as a tool for understanding confusing phenomena: ‘why does X do Y?’ Because it’s following the path of least resistance. You then need to figure out what the other paths are, what their resistances are, and why they are higher.

I challenge anyone to show me an example of something not following the path of least resistance. I believe that if I understand the phenomenon enough, I can show it actually does.

In the meantime, I will think more about this. I believe I need to read more about evolution (in its abstract form, not necessarily biological evolution).

Ambition is empty without direction

I used to find ambition a dirty word. It’s something for Slytherins: calculating, egoistic people who want to be successful and want high status and who want to be powerful. My opinion has changed gradually over time, and I have now arrived at an almost opposite position; I believe ambition is enormously important for the good of the world, and I want to understand it better. Here is how I currently view ambition.

Ambition is setting goals that are hard to achieve and require a lot of effort. It’s mostly about input and output: ambitious people want to achieve a lot, and are willing to put in a lot. But what are the goals about? They can be about personal success, the success of a group’s agenda, power for the sake of power, or doing good. Someone’s ambition is determined by the difficulty of the goal, but goals have different directions. A good metaphor is arrows: ambition is the size of the arrow, and values are the direction of the arrow. To do good, we need to multiply the size of the arrow by its direction.

Obviously I have simplified things. I have forced values onto one dimension, as if the “rightness” of values can be captured so simply. I’m not sure it can. Nonetheless, I think this is a powerful metaphor. Hard work and ambitious goals are not enough in life, and neither is having the right values. Both need to be present in order to make a large and positive impact. I would like to see do-gooders think more about how they can create the largest impact. My dedication to effective altruism is well-known. But I would also like to see ambitious people think more about the values they are ambitious for, and to see them engage with the moral and political philosophy that attempts to find the right values to have. So be ambitious, but try to be ambitious in the right direction.

A schematic display of conversation

This is my first attempt at externalising the ideas I have built up through having many conversations, primarily those with my best friend, Justin. Regularly, when we talk, we take a sort of meta perspective: we look at or discuss the conversation from a distance. We’ll say things such as “how did we get to this topic?” or “let’s go back a bit”. I think this is not at all unique to us, but we do it a lot. I think this is a very important part of the skill of conversing, so I will attempt to explain how I/we look at conversations. This is not an attempt to formalise conversations, or to show the “true structure” of conversations. I believe that would require a lot of knowledge about epistemology, which I do not have.

First off, let’s start with a sentence: “John was not in class today”. From this sentence, we can go several directions. Let’s use two responses to keep the example simple: “What did you do in class?” and “Why was John not in class today?” I will represent this as follows:

In a real conversation, you can only take one direction at a time. Therefore, you are always – consciously or subconsciously – making decisions of where the conversation is going. Different conversations can differ in their entertaining, bonding, or informational value. A good converser is able to steer the conversation into high value directions. Below is a larger conversation scheme, which I will use to highlight some interesting concepts:

So a possible conversation could go like this: “John was not in class today.” “Why was John not in class today?” “He’d rather spend his time reading fantasy books.” “Oh, I like fantasy too, especially when there are multiple races involved!” And then the conversation can go on and on: about fantasy, about different races and the portrayal of the human race in fantasy, about fiction vs. non-fiction, etc. However, one of the persons can also go back in the scheme and talk or ask about what happened in the class; that would look like this:

Furthermore, this scheme is very simple. Every node consists of one sentence only, and two responses. What’s more, the different nodes can be grouped according to topics. A more complicated scheme looks like this:

So what are the implications of this? First of all, if you are aware of how conversations are structured, you can use this to create a more valuable conversation. You can return to a previous topic and take a different direction, or you can not mention the first thing that comes to mind, because you want to steer clear from a certain topic. Furthermore, if both you and your conversation partner(s) are aware of how conversations can be viewed, your meta conversation skills will allow to collaborate and create valuable conversations! I hope to write more about conversations in the future, and also venture more into how this model can be applied to thinking.
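The branching structure the schemes above describe can be sketched as a simple tree of utterances. This is my own illustration (the node and function names are mine, not the post’s): each node is a sentence with possible responses, and a conversation is one path through the tree, determined by which response the participants choose at each step.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One utterance in a conversation, with the responses it could lead to."""
    sentence: str
    responses: list["Node"] = field(default_factory=list)

# The example from the post: one opening sentence, two possible directions.
root = Node("John was not in class today.")
class_branch = Node("What did you do in class?")
why_branch = Node("Why was John not in class today?")
why_branch.responses.append(Node("He'd rather spend his time reading fantasy books."))
root.responses = [class_branch, why_branch]

def walk(node: Node, choose) -> list[str]:
    """Follow one path through the tree, picking a response at each node."""
    path = [node.sentence]
    while node.responses:
        node = choose(node.responses)
        path.append(node.sentence)
    return path

# Always taking the first response steers toward the 'class' topic; a
# different chooser steers toward John -- the steering the post describes.
print(walk(root, lambda options: options[0]))
print(walk(root, lambda options: options[-1]))
```

Going back in the scheme, as described above, just means resuming the walk from an earlier node and choosing a different branch.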

I am aware that I have left at least several things out of this post, for example:

  • not every relation is the same
  • who says what matters
  • you can arrive at points from different angles
  • different people have different associations (creative, stoned, and knowledgeable people may have more associations, and thus more directions for a conversation to go in)

Really, this scheme is just a map of relations between concepts, and a conversation is a path through such a concept map, traversed as time passes.

On Blogging

Blogging is hot. It would surprise me if you don’t know a single person who maintains or has maintained his/her own blog. I have decided to start one as well. It seems to fit the self-centeredness of the current generation. But is it self-centered? And if so, is that a bad thing? I think it is self-centered. Blogging is personal: people share their stories, their ideas, their values. Nevertheless, that doesn’t have to be negative. Sure, many people have trouble writing in an enticing way (but maybe they’ve started a blog to improve that!). But blogging is also connected to the most fundamental pieces of our humanity: writing and sharing ideas. As individuals, we are incapable of great accomplishments such as landing on the moon or building flying machines. Together, through the sharing of ideas, we can accomplish more. Writing allows us to set our thoughts and memories down; it takes them out of our brains – which have limited long-term and working memory – and makes them much more durable. Writing has brought us progress.

So, my blog will bring you great accomplishments and progress. No, of course not. But it might give you some insights now and then, even if I only manage to produce a good article 1 in 10 tries. I welcome you to my blog.