Level 2 chaos, risk communication, and faith

Reading time: 4 minutes.

In communities that value reason and evidence, atheism is the norm. The core reason seems to be the rejection of faith: there is no evidence of God, and there are more plausible theories for why the world is as it is (e.g. evolutionary theory, the Big Bang). Yet theists remind each other that it is important to believe anyway, to have faith.

Faith is an odd concept in the paradigm of rationality and truth-seeking. It seems the opposite of the most common definition of knowledge: well-justified belief. But faith has been dismissed too quickly; it still has a role to play.

When I read the wonderful book Sapiens, the concept of level 1 and level 2 chaotic systems stuck with me. It has been especially memorable because I haven’t encountered these terms anywhere else. This surprises me, because they make a useful distinction.

Roughly speaking, a system is level 1 chaotic when approximate knowledge of the system’s configuration does not yield approximately accurate predictions. Beyond a certain time horizon (called the Lyapunov time), too many divergent paths are possible and prediction becomes near impossible. At a certain scale, the weather is a good example: specific weather forecasts generally become inaccurate beyond 2 weeks. Or sometimes, as I learned when I visited Blackpool, beyond 2 hours. Apparently Blackpool’s weather is extremely chaotic!
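To make level 1 chaos concrete, here is a minimal sketch using the logistic map, a standard toy chaotic system (my own illustration, not from the text above). Two trajectories whose starting points differ by one part in a billion agree at first, then diverge completely, which is exactly why forecasts fail beyond the Lyapunov time:

```python
# Level 1 chaos sketch: the logistic map x' = r*x*(1-x) with r=4 is chaotic.
# Approximate knowledge of the starting state (here: off by 1e-9) gives
# accurate predictions only for a limited number of steps.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # tiny error in our initial knowledge

# Early on the trajectories agree closely; after a few dozen steps the
# difference is of order 1 and the "forecast" carries no information.
print(abs(a[5] - b[5]))     # still tiny
print(abs(a[40] - b[40]))   # no longer tiny: the prediction horizon is past
```

The number of steps before divergence plays the role of the two weeks (or, in Blackpool, two hours) after which weather forecasts break down.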

A system is level 2 chaotic when, besides having the properties of a level 1 system, predictions about the system also change the system itself! The stock market is a great example. When I predict the price of an asset will rise in the future, I am going to buy it now. This signals increased demand, which causes the price to rise. It becomes really complex when I base my expectations on other people’s expectations (instead of on the underlying asset value). Almost every social system is level 2 chaotic, because knowledge tends to affect the behaviour of people.

In one way, this makes level 2 chaotic systems harder to predict, because they are even more complex. In another way, it makes them easier: when a system is sufficiently chaotic, predictions themselves have significant influence on the outcome. Humans, of course, caught on to this long ago. Self-fulfilling prophecies are a common motif in classical stories, and politicians frequently create policies that create or exacerbate the need for the policy (e.g. the War on Drugs). However, a prediction can also reduce the likelihood of something happening (in complexity terms: a negative feedback loop). When I work on the risk assessment of global catastrophes, I sure hope to create a negative feedback loop!

Why does this matter? First, predictions should be communicated carefully. When we predict that there is a 5% risk of global catastrophe in the next 100 years (NB: made-up number!), it should be clear that this prediction is conditional on a certain set of actions. The IPCC, for example, uses ‘business-as-usual’ as one of its conditions. Second, the role of expectations in complex social systems should be acknowledged. If we fear a technology race towards a dangerous advanced technology, that fear creates the very dynamic we are afraid of: everyone now wants to be first. In innovation studies, there is consensus that the dominant design (i.e. the design a technology converges on) is not predetermined. Instead, the dominant design is influenced by the expectations of companies and their future customers (and the expectations of the companies about the expectations of the future customers – see how complex this can become?).

Let’s return to faith as a tool for reason- and evidence-based communities. In level 2 chaotic systems, faith works. Sure, it doesn’t work 100% of the time. But personally believing that you will succeed at a project can make you grittier. Collectively believing that people will support each other in times of disaster can foster cooperation. To effectively create social change, we should not ignore that faith is useful, even if blind faith is not.

Finally, allow me to philosophize about faith and God. Through the lens of social constructivism, concepts and knowledge don’t exist objectively ‘out there in the world’, nor are they completely arbitrary. Instead, we create concepts by collectively ascribing meaning to something. When talking about the sacred, people often seem to fall back into the objectivism/nihilism camps: either the sacred is from ‘out of this world’ or it doesn’t exist. But we can create the sacred. This doesn’t mean that God exists, but something sacred can. And hey, just because God doesn’t exist yet doesn’t mean that God will never exist! Maybe if we just believe …


What I, a classical utilitarian, learned from suffering-focused ethics

If I had to describe the moral theory I have most credence in, it would be close to classical utilitarianism. The only property with intrinsic value I’m confident about is subjective experience (thus, I approximate a moral hedonist). I believe pain and pleasure can be measured objectively, although we do not have the tools yet to do so. I also believe that suffering and happiness can be represented symmetrically on a scale: there is no fundamental reason to prioritise preventing suffering over increasing happiness. Furthermore, I believe we can simply aggregate the value of all subjective experiences and that it’s best to maximize this aggregate value.

Despite my skepticism towards it, I recently attended a retreat focused on suffering-focused ethics. This is a relatively new “umbrella term for views that place primary or particular importance on the prevention of suffering,” which also encompasses negative utilitarianism and, by implication, anti-natalism. Although my skepticism about these views was not relieved, I found the retreat very valuable: I learned a lot and I made new allies in the mission to improve the long-term future. I think the suffering-focused ethics community has something to offer to classical utilitarians, simply because it offers a different perspective from smart altruistic people on a similar set of problems. This can help us to uncover blind spots, make some ideas more salient, and clarify the reasons we have for our beliefs.1 In addition, I think epistemic modesty and moral cooperation are two more reasons to pay attention to the ideas of this community. Let me share my key takeaways here.

In the modern world, extreme suffering (probably) outweighs extreme happiness in prevalence, duration, and intensity

I think the amounts of small and medium pleasures (e.g. enjoying a great meal) and displeasures (e.g. suffering from an injured shoulder) are probably within the same order of magnitude. However, as we move towards the extreme ends of the spectrum, I think the picture gets more skewed. States of extremely positive wellbeing, such as bliss, seem not only less common and shorter-lasting than intense suffering,2 but also unable to outweigh the intensity of extreme suffering. Thus, currently, there seems to be an empirical asymmetry between extreme suffering and extreme wellbeing.

Many people (within EA) appear to share this view: this poll (results in figure) asked people’s tradeoffs for 1 day of drowning in lava vs. x duration of bliss. Although there are many methodological problems with this poll (and at least 10% of the respondents agree with me), it does show the intuition that intense suffering is (currently) more intense than intense bliss.

Given this, I think it is plausible that relieving intense suffering should be the highest altruistic priority for those focused on the near-term (e.g. within the next 100 years), and it is an open question what that implies for cause selection.

Nonetheless, this does not say much about the distribution of sentient states in the long-term future. If it is possible to ‘program’ consciousness, then such asymmetries can likely be removed from the source code of digital beings (whether they will be removed is another question). The empirical asymmetry would then be an example of the horrors of biological evolution best left behind. Yet, if this asymmetry is the result of competitive evolutionary pressures, this might be a reason to expect competitive multipolar scenarios3 to be very negative in value.

The future is not guaranteed to be positive

One of the simplest lessons is perhaps making salient that the future is not necessarily positive in value. An overly simplistic view of x-risk reduction is ‘we should reduce extinction risk, because then there will be an awesome future ahead of us.’ Instead, we should state a more nuanced view of longtermism, for example in the following way:

‘the future can be enormously vast, we can probably influence the long-term future, and we should try to make it go as well as possible. Even if we expect the future to be negative, we should still try to influence it for the better.’

I think this framing also makes longtermism much more persuasive to people who know little about population ethics (basically everyone!). If we want to gather broad support for longtermism, I think this is a much more promising strategy than trying to convince people that the future will be close to utopian.

Furthermore, it gives rise to interesting questions like ‘how valuable do we think the future will be (in expectation)?’, ‘do we even think the future will be positive?’, and ‘how much of the expected value of the future is influenced by the value of ‘optimized’ scenarios?’ (i.e. scenarios in which some process is optimizing for something that is (dis)valuable). But these questions are something to go into another time.

Our understanding of wellbeing and consciousness is very limited

It might go without saying, but we still know very little about consciousness and what well-being consists of. Sure, we have reliably identified a lot of factors that make humans feel good and bad, but what are the dimensions of emotions? Are positive affect and negative affect even on the same scale? There is no scientific consensus about this. And isn’t happiness always about something? Should it be about something? What would this something be? How do we compare the value of ecstasy to the equanimity of a monk? How can we create a reliable measure of valence? What are the components of valence, and is valence the only thing that matters? The more I think about this, the more confused I get.

Given that I, like many other EAs, am trying to optimize for subjective experience, it’s humbling to understand so little about it. Nonetheless, that doesn’t mean we don’t know what to do. In the case of longtermism, a good heuristic is still ‘prevent very bad irreversible things until we are wise enough to figure out what to actually promote’ (also known as ‘reduce existential risk’).

Staring into the darkness of this world can be both empowering and sobering

Lastly, during the retreat I noticed a more visible role for empathy than at other EA events. People seemed very aware of all the suffering in the world, and strongly focused on their goal of reducing and preventing suffering. Now, such a negative focus can be detrimental: it can feel demotivating, overwhelming, and depressing. However, I think there can also be a lot of value in acknowledging the darkness of this world, whether you believe it’s only a small patch or a vast expanse. It is empowering: there is a strong and urgent pull to act now, and I think a focus on the darkness in this world provides a solid defense against value drift. It is also sobering: the world ‘ain’t all sunshine and rainbows’ and it can get much worse yet.


In summary, classical utilitarians can learn quite a bit from the suffering-focused ethics community. Intense suffering seems more intense and prevalent than intense happiness; the future is not necessarily positive; we still know very little about consciousness and wellbeing; and acknowledging the suffering in this world can be a powerful and reliable motivation. In our mission to maximally improve the world, we should welcome this diversity of perspectives.



Doing Good Science: More Than Good Methods

Epistemic status: initial thoughts, but probably incomplete. But I prefer sharing incomplete but useful thoughts over not sharing them at all. I think these thoughts roughly generalize to non-scientific research (e.g. industry’s R&D or philosophy).

Recently somebody at a dinner asked me what I think is effective science (that is, science that is maximally good for the world). I had some initial thoughts, but the question kept simmering in my mind. Here I set out an initial model of what I think is good science. Numerous people before have thought about this question and I am probably ignoring a lot of their work. However, I hope this still adds something useful. This question is important for doing good science, evaluating good science, and funding good science.

I think this problem is important, neglected, and tractable. It is important because scientific developments compound: they will affect the trajectory of the future and obtaining new insights sooner improves decision making. I think it is neglected as most granting institutions seem to adopt a morally partial view (i.e. they favour a specific group such as Dutch citizens, or a specific topic such as history of philosophy). Impartial granting and evaluation is done, but only exceptionally. I think the problem is relatively tractable, although it is one of the most complex ways in which one can do good. Science has a long distance to actual impact (in comparison to e.g. planting trees or making policy), so there are many paths it can take towards impact. However, my impression is that we can evaluate the value of research much better than ‘total ignorance’, even if we cannot single out the best possible research.

I want to define ‘Good Research’ as good questions with good answers that are used to improve the world. In formula form, this looks as follows:

Value of research =
quality of the question * quality of the answer * quality of use

Note that the value of research is a product, not a sum. This means that scoring low on only one dimension (e.g. use) drastically reduces the total value of the research. This principle (AKA the Anna Karenina Principle) normally gives rise to a heavy-tailed, highly skewed distribution, even if values are normally distributed over the three factors. This can be shown in two different ways, so I’ll just draw both. The point here is to get all relevant factors right.
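A quick simulation makes the skew visible (my own sketch; the distribution and its parameters are arbitrary choices for illustration). Each of the three factors varies modestly around 1, yet their product has a mean far above its median and a top 1% that captures an outsized share of total value:

```python
import random

random.seed(0)

def factor():
    """One quality factor: roughly normal around 1, floored at 0."""
    return max(0.0, random.gauss(1.0, 0.5))

# Value of research = quality of question * quality of answer * quality of use
values = sorted(factor() * factor() * factor() for _ in range(100_000))

median = values[len(values) // 2]
mean = sum(values) / len(values)
top_1pct_share = sum(values[-1000:]) / sum(values)

# The product is strongly right-skewed even though each factor is symmetric:
# the mean sits well above the median, and the top 1% of "projects" account
# for far more than 1% of total value.
print(round(mean / median, 2))
print(round(top_1pct_share, 3))
```

This is the practical upshot of the multiplicative model: neglecting any one factor drags the whole product down, and getting all three right puts a project in the thin, valuable tail.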

I should also note that each factor can take a negative value, though I don’t include that in my model for now. This means that even seemingly good research can have negative consequences. Responsible scientists need to realize that, and take the necessary steps to avoid scoring negatively on any factor wherever possible.

I also believe that the limited attention that has gone into evaluating good science has mostly gone into the quality of answers: making sure they are reliable (e.g. they replicate) and valid (i.e. actually capture the phenomenon they are meant to capture). Attention has also gone into the implementation of science, e.g. through collaborations with industry, government, and civil society. I suspect the least attention has gone into making sure we ask the right questions. I will now go into each factor, flagging some initial considerations and breaking the factor down further.

Quality of the question

Richard Hamming is (in)famous for asking many scientists “what is the most important problem in your field?”. After getting an answer, he asked “and why are you not working on it?” Unsurprisingly, not everyone liked Hamming. However, his observation is on point: many scientists do not work on the most important problems in their field. Hamming acknowledged that asking the right questions is a skill that needs years of cultivation (see this talk of his), and requires being connected to one’s own field, other fields, and to society. Without (the right) feedback one shouldn’t expect to figure out the most important questions to ask. (This is also why I think being involved in the effective altruism community is so important for me.) However, where Hamming focused on problems that were scientifically important, I (and I suppose many readers with me) care about more than scientific progress: I want it to benefit the lives of humans and other sentient beings.

This brings values into the picture. I regard one of the main contributions of philosophy of science in the 20th century to be the insight that science is value-laden, not value-neutral. Consequently, scientists need to engage in practical ethics to decide what is most important: is it the wellbeing of humans, of all sentient beings, reducing global inequality, reducing existential risk, promoting biodiversity, or something else? Given that some moral views (especially total utilitarianism, which I find very plausible) claim that there are orders of magnitude difference in value between these things, it is important to get them right. If you think that we already have our academic priorities aligned with our moral priorities, I present you this graph from Nick Bostrom (2013):

Another component of question quality might be its scientific importance: working out the details of one of many theories does not move the field forward as much as questioning the assumptions of a paradigm, which has huge scientific implications (e.g. Kahneman & Tversky questioning the assumptions of classical economics). I have heard others say that some people are especially good at this: newcomers and scientists from other fields. Interdisciplinary research also seems neglected: applying insights to new areas is something not many people are able to do. There are possibly other factors I have not identified here.

Quality of the question =
moral importance * scientific importance * ..?

Quality of the answer

This is a factor that other scientists and philosophers of science are much more qualified to talk about than I am. Answer quality includes factors like reliability and validity, and can be improved by individual creativity and rigor, by systems such as preregistration to avoid p-hacking and the file drawer effect, and by other methods to avoid false positives. The formula might look as follows, but I’m significantly uncertain about it:

Quality of the answer =
reliability * validity * generalizability * simplicity

Quality of use

Research is worthless if it always stays within academia. It needs to positively affect the world. Insights will generally be used outside of academia by government, industry, or civil society. However, science can also indirectly influence the world by influencing other science; I suspect this is how most science impacts the world. To connect insights to actors, the insights need to be accessible. Initiatives to improve accessibility include open science, science outreach to the general public, and other forms of translation such as literature reviews.

Moreover, quality of use is also not value-neutral. A policy can be implemented very rigorously and efficiently to achieve a worthless, or even an abominable, goal. Effective scientists therefore need to ensure their research is used by competent actors with the right values. An approximation of the quality of use is as follows:

Quality of use =
accessibility of answer * values of user * quality of implementation


Doing good with science is complex and neglected, but tractable. I have set out a very rough, incomplete, and partially incorrect model to make sense of it. I encourage others to do further work on this and would be happy to contribute further.

My preliminary model of impactful science looks like this, and is subject to change (click to enlarge):

Everything follows the path of least resistance

Epistemic status: feeling strongly I am onto something, but also confused how to apply it to all cases. I describe my ideas in my own words, and am not communicating them clearly. Given the strength of my claim, it’s probably wrong. But let me defend it anyway.

Everything follows the path of least resistance. And when I say “everything” I don’t mean “most things”, I mean everything.

People who are lazy are sometimes described as taking the path of least resistance. Of course, I agree (because I believe that everything takes the path of least resistance). However, this implies that hard-working, conscientious people are taking a more difficult path: they’re eschewing the path of least resistance. That’s wrong. Hard-working people have done at least one of two things, and probably both:

  1. They have made the hard path easier. Going to work an hour early has become habitual, or asking difficult questions when they are confused has become habitual.
  2. They have made the easy path(s) harder. They feel negatively about being lazy, are afraid of being judged, or simply don’t know how to be lazy. It’s not a habit.

Why does water flow downward and through the valley? It’s the path of least resistance. (And interestingly, just like habits, the more water has flowed somewhere previously, the more likely that path will be the path of least resistance for new water.)

Why do people, when confronted with their own immoral behavior, often change their beliefs rather than their behavior? Because it’s the path of least resistance.

Why is it so hard to stick with very hard problems? Because it’s a high-resistance path. Why have some exceptional researchers been able to really focus on the hard problems? Because they have made that path easier, and have erected barriers on the other paths.

Why is it so hard to change organizations? Because sticking with habits is the path of least resistance. To change an organization, you must create resistance towards the current state of being and create the belief that the change is not so hard after all.

Now there remain at least three questions:

  1. Why does everything follow the path of least resistance?
  2. Is there not random movement, not following anything?
  3. What about payoffs? Surely a hard path with a good payoff will be taken.

I think the answers to these questions are related to evolution. Yes, there is random movement. But everything faces a selection pressure: animals will mate, ideas will spread. The selection favors the ones taking the path of least resistance, because they will be most successful: the most offspring, the most energy left to do other valuable things. Regarding payoffs: for people, a higher payoff makes a path more attractive (and thus lowers its resistance). For things without intent, the expected payoff (probability of achieving * actual payoff if achieved) will determine attractiveness for large enough samples.

If I’m right, this has the implication that a global optimum cannot be reached without changing the landscape. You have to erect barriers to the easier optima, and pave the way to the global optimum. More practically, I think this can serve as a tool for understanding confusing phenomena: ‘why does X do Y?’ Because it’s following the path of least resistance. You then need to figure out what the other paths are, what their resistances are, and why they are higher.

I challenge anyone to show me an example of something not following the path of least resistance. I believe that if I understand the phenomenon enough, I can show it actually does.

In the meantime, I will think more about this. I believe I need to read more about evolution (in its abstract form, not necessarily biological evolution).

Ambition is empty without direction

I used to find ambition a dirty word. It’s something for Slytherins: calculating, egoistic people who want to be successful and want high status and who want to be powerful. My opinion has changed gradually over time, and I have now arrived at an almost opposite position; I believe ambition is enormously important for the good of the world, and I want to understand it better. Here is how I currently view ambition.

Ambition is setting goals that are hard to achieve and require a lot of effort. But it’s mostly about input and output: ambitious people want to achieve a lot, and are willing to put in a lot. But what are the goals about? They can be about personal success, the success of a group’s agenda, gaining power for the sake of power, or doing good. Someone’s ambition is determined by the difficulty of their goals, but goals have different directions. A good metaphor is arrows: ambition is the size of the arrow, while values are its direction. To do good, we need to multiply the size of the arrow by its direction.

Obviously I have simplified things. I have forced values onto one dimension, as if the ‘rightness’ of values can be captured so simply. I’m not sure it can. Nonetheless, I think this is a powerful metaphor. Hard work and ambitious goals are not enough in life, and neither is having the right values. Both need to be present in order to make a large and positive impact. I would like to see do-gooders think more about how they can create the largest impact. My dedication to effective altruism is well-known. But I would also like to see ambitious people think more about the values they are ambitious for, and to see them engage with the moral and political philosophy that attempts to find the right values. So be ambitious, but be ambitious in the right direction.