Level 2 chaos, risk communication, and faith

Reading time: 4 minutes.

In communities that value reason and evidence, atheism is the norm. The core reason seems to be the rejection of faith: there is no evidence of God, and there are more plausible theories for why the world is as it is (e.g. evolutionary theory, the Big Bang). Yet theists remind each other that it is important to believe anyway, to have faith.

Faith is an odd concept in the paradigm of rationality and truth-seeking. It seems the opposite of the most common definition of knowledge: well-justified belief. But faith has been dismissed too quickly; it still has a role to play.

When I read the wonderful book Sapiens, the concept of level 1 and level 2 chaotic systems stuck with me. It has been especially memorable because I haven’t encountered these terms anywhere else. This surprises me, because they make a useful distinction.

Roughly speaking, a system is level 1 chaotic when approximate knowledge of the system's configuration does not yield approximately accurate predictions. Beyond a certain time horizon (called the Lyapunov time), too many divergent paths are possible and predictions become near impossible. At a certain scale, the weather is a good example: specific weather forecasts generally become inaccurate beyond two weeks. Or sometimes, as I learned when I visited Blackpool, beyond two hours. Apparently Blackpool's weather is extremely chaotic!
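To make the idea concrete, here is a minimal sketch (my own, not from Sapiens) of level 1 chaos using the logistic map, a standard toy model of a chaotic system; the starting values and step counts are arbitrary choices:

```python
# Two starting points that differ by one millionth end up on completely
# unrelated trajectories within a few dozen steps, so approximate knowledge
# of the configuration stops yielding approximate predictions.

def logistic_map(x, r=4.0):
    """One step of the logistic map, a classic toy model of chaos."""
    return r * x * (1 - x)

x_a, x_b = 0.300000, 0.300001  # the configuration is known only approximately
for step in range(1, 51):
    x_a, x_b = logistic_map(x_a), logistic_map(x_b)
    if step % 10 == 0:
        print(f"step {step}: {x_a:.6f} vs {x_b:.6f} (gap {abs(x_a - x_b):.6f})")
```

By around step 20 the tiny initial gap has grown to the size of the whole system, which is exactly the point about the Lyapunov time.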

A system is level 2 chaotic when, besides having the properties of a level 1 system, predictions about the system also change the system itself! The stock market is a great example: when I predict the price of an asset will rise, I buy it now. This signals increased demand, which causes the price to rise. It becomes really complex when I base my expectations on other people's expectations (instead of on the underlying asset value). Almost every social system is level 2 chaotic, because knowledge tends to affect the behaviour of people.

In one way, this makes level 2 chaotic systems harder to predict, because the feedback adds complexity. In another way, it makes them easier to influence: when a system is sufficiently sensitive, predictions themselves significantly shape the outcome. Humans, of course, caught on to this long ago. Self-fulfilling prophecies are a common motif in classical stories, and politicians frequently create policies that create or exacerbate the need for the policy (e.g. the War on Drugs). However, a prediction can also reduce the likelihood of something happening (in complexity terms: a negative feedback loop). When I work on the risk assessment of global catastrophes, I sure hope to create a negative feedback loop!
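As a toy illustration of these two kinds of feedback (my own sketch, with made-up numbers, not a claim about any real risk estimate), consider a published risk estimate that feeds back on the risk itself:

```python
import random

def simulate(feedback, steps=50, seed=1):
    """Toy level 2 dynamics: a published risk estimate feeds back on the risk.

    feedback > 0: the prediction raises the risk (self-fulfilling prophecy,
                  e.g. a technology race).
    feedback < 0: the prediction lowers the risk (negative feedback loop,
                  e.g. a warning that triggers prevention).
    """
    random.seed(seed)
    risk = 0.05  # initial "true" risk
    for _ in range(steps):
        prediction = risk                  # observers publish the current risk
        risk += feedback * prediction      # the prediction changes behaviour...
        risk += random.gauss(0.0, 0.002)   # ...on top of ordinary noise
        risk = min(max(risk, 0.0), 1.0)    # keep it a valid probability
    return risk

print("with a self-fulfilling prediction:", round(simulate(+0.05), 3))
print("with a self-defeating prediction: ", round(simulate(-0.05), 3))
```

The same starting risk ends up far higher under positive feedback and far lower under negative feedback, which is the asymmetry the rest of this post leans on.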

Why does this matter? First, predictions should be communicated carefully. When we predict that there is a 5% risk of global catastrophe in the next 100 years (NB: a made-up number!), it should be clear that this estimate is conditional on a certain set of actions. The IPCC, for example, uses 'business-as-usual' as one of its scenarios. Second, the role of expectations in complex social systems should be acknowledged. If we fear a technology race towards a dangerous advanced technology, that very fear creates the dynamic we are afraid of: everyone now wants to be first. In innovation studies, it is the consensus that the dominant design (i.e. the design a technology converges on) is not predetermined. Instead, the dominant design is influenced by the expectations of companies and their future customers (and the expectations of the companies about the expectations of the future customers – see how complex this can become?).

Let's return to faith as a tool for reason- and evidence-based communities. In level 2 chaotic systems, faith works. Sure, it doesn't work 100% of the time. But personally believing that you will succeed at a project can make you grittier, and collectively believing that people will support each other in times of disaster can foster cooperation. To effectively create social change, we should not ignore that faith is useful, even if blind faith is not.

Finally, allow me to philosophize about faith and God. Through the lens of social constructivism, concepts and knowledge don't exist objectively 'out there in the world', nor are they completely arbitrary. Instead, we create concepts by collectively ascribing meaning to something. When talking about the sacred, people often seem to fall back into the objectivist or nihilist camps: either the sacred comes from 'out of this world' or it doesn't exist. But we can create the sacred. This doesn't mean that God exists, but something sacred can. And hey, just because God doesn't exist yet doesn't mean that God will never exist! Maybe if we just believe …

 

Doing Good Science: More Than Good Methods

Epistemic status: initial and probably incomplete thoughts. But I prefer sharing incomplete-but-useful thoughts over not sharing them at all. I think these thoughts roughly generalize to non-scientific research (e.g. industry R&D or philosophy).

Recently somebody at a dinner asked me what I think effective science is (that is, science that is maximally good for the world). I had some initial thoughts, but the question kept simmering in my mind. Here I set out an initial model of what I think good science is. Numerous people have thought about this question before, and I am probably ignoring a lot of their work; however, I hope this still adds something useful. The question matters for doing good science, evaluating good science, and funding good science.

I think this problem is important, neglected, and tractable. It is important because scientific developments compound: they affect the trajectory of the future, and obtaining new insights sooner improves decision making. I think it is neglected because most granting institutions seem to adopt a morally partial view (i.e. they favour a specific group, such as Dutch citizens, or a specific topic, such as the history of philosophy). Impartial granting and evaluation does happen, but only exceptionally. I think the problem is relatively tractable, although it is one of the most complex ways in which one can do good. Science has a long distance to actual impact (in comparison to, e.g., planting trees or making policy), so there are many paths it can take towards impact. However, my impression is that we can evaluate the value of research much better than 'total ignorance', even if we cannot single out the best possible research.

I want to define ‘Good Research’ as good questions with good answers that are used to improve the world. In formula form, this looks as follows:

Value of research =
quality of the question * quality of the answer * quality of use

Note that the value of research is a product, not a sum. This means that scoring low on only one dimension (e.g. use) drastically reduces the total value of the research. This principle (AKA the Anna Karenina principle) normally gives rise to a heavy-tailed distribution of value, even if values are normally distributed over the three factors. This can be shown in two different ways, so I'll just draw both. The point here is to get all relevant factors right.
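As a numerical companion to those drawings, here is a small Monte Carlo sketch (my own, with arbitrary parameters) that makes the same point:

```python
import random
import statistics

# Draw each of the three quality factors from the same normal distribution,
# floor it at zero, and multiply. Even though every factor looks "normal",
# the product is right-skewed: the mean sits above the median, the top slice
# holds a disproportionate share of total value, and failing badly on a
# single factor zeroes the whole product.
random.seed(42)
values = []
for _ in range(100_000):
    question, answer, use = (max(random.gauss(1.0, 0.5), 0.0) for _ in range(3))
    values.append(question * answer * use)

values.sort()
top_share = sum(values[int(0.99 * len(values)):]) / sum(values)
print("mean:  ", round(statistics.mean(values), 2))
print("median:", round(statistics.median(values), 2))
print("zeroed by one bad factor:", round(values.count(0.0) / len(values), 3))
print("share of total value held by the top 1%:", round(top_share, 2))
```

The exact numbers depend on the distribution chosen, but the qualitative pattern (skewed value, and total loss whenever any one factor fails) holds broadly.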

I should also note that each factor can take a negative value, though I don't include that in my model for now. However, it means that even seemingly good research can have negative consequences. Responsible scientists need to realize this and take the necessary steps to avoid scoring negatively on any factor wherever possible.

I also believe that the limited attention that has gone into evaluating good science has mostly gone into the quality of answers: making sure they are reliable (e.g. they replicate) and valid (i.e. they actually capture the phenomenon they are meant to capture). Some attention has also gone into science implementation, e.g. through collaborations with industry, government, and civil society. I suspect the least attention has gone into making sure we ask the right questions. I will now go into each factor, flagging some initial considerations and breaking each factor down further.

Quality of the question

Richard Hamming is (in)famous for asking many scientists, "What is the most important problem in your field?" After getting an answer, he would ask, "And why are you not working on it?" Unsurprisingly, not everyone liked Hamming. However, his observation is on point: many scientists do not work on the most important problems in their field. Hamming acknowledged that asking the right questions is a skill that needs years of cultivation (see this talk of his), and requires being connected to one's own field, to other fields, and to society. Without (the right) feedback, one shouldn't expect to figure out the most important questions to ask. (This is also why I think being involved in the effective altruism community is so important for me.) However, where Hamming focused on problems that were scientifically important, I (and I suppose many readers with me) care about more than scientific progress: I want it to benefit the lives of humans and other sentient beings.

This brings values into the picture. I regard one of the main contributions of 20th-century philosophy of science to be the insight that science is value-laden, not value-neutral. Consequently, scientists need to engage in practical ethics to decide what is most important: is it the wellbeing of humans, of all sentient beings, reducing global inequality, reducing existential risk, promoting biodiversity, or something else? Given that some moral views (especially total utilitarianism, which I find very plausible) claim that there are orders of magnitude of difference in value between these things, it is important to get them right. If you think that our academic priorities are already aligned with our moral priorities, I present to you this graph from Nick Bostrom (2013):

Other components of question quality might include its scientific importance: working out the details of one of many theories does not move the field forward as much as questioning the assumptions of a paradigm, which has huge scientific implications (e.g. Kahneman & Tversky questioning the assumptions of classical economics). I have heard others say that some people are especially good at this: newcomers and scientists from other fields. Interdisciplinary research also seems neglected: applying insights to new areas is something not many people are able to do. There are possibly other factors I have not identified here.

Quality of the question =
moral importance * scientific importance * ..?

Quality of the answer

This is a factor I am much less qualified to talk about than other scientists and philosophers of science. It includes components like reliability and validity, and it can be improved by individual creativity and rigor, by systems such as preregistration to avoid p-hacking and the file-drawer effect, and by other methods to avoid false positives. The formula might look as follows, but I'm significantly uncertain about it:

Quality of the answer =
reliability * validity * generalizability * simplicity

Quality of use

Research is worthless if it always stays within academia; it needs to positively affect the world. Insights will generally be used outside of academia by government, industry, or civil society. However, science can also indirectly influence the world by influencing other science; I suspect this is how most science impacts the world. To connect insights to actors, the insights need to be accessible. Initiatives to improve accessibility include open science, science outreach to the general public, and other forms of translation, such as literature reviews.

Moreover, quality of use is also not value-neutral. A policy can be implemented very rigorously and efficiently to achieve a worthless, or even an abominable, goal. Effective scientists therefore need to ensure their research is used by competent actors with the right values. An approximation of the quality of use is as follows:

Quality of use =
accessibility of answer * values of user * quality of implementation
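To show how the three sub-formulas compose into the overall value, here is a hypothetical worked example (my own, with made-up scores on a 0-1 scale; nothing above prescribes that scale, and the open-ended "..?" factors are left out):

```python
from math import prod

# Because the value is a product of products, one weak link (here, accessibility)
# caps the value of otherwise excellent research.
quality_of_question = prod([0.8, 0.7])            # moral * scientific importance
quality_of_answer   = prod([0.9, 0.8, 0.7, 0.6])  # reliability * validity * generalizability * simplicity
quality_of_use      = prod([0.2, 0.9, 0.8])       # accessibility * user values * implementation

value_of_research = quality_of_question * quality_of_answer * quality_of_use
print(round(value_of_research, 3))  # ~0.024, despite most scores being decent
```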

Conclusion

Doing good with science is complex and neglected, but tractable. I have set out a very rough, incomplete, and partially incorrect model to make sense of it. I encourage others to do further work on this and would be happy to contribute further.

My preliminary model of impactful science looks like this, and is subject to change (click to enlarge):