The case for a complexity mindset in behavioural science
Does the replication crisis suggest we move from a mechanistic to a complexity mindset?
One of the shocks for the behavioural science community in 2021 was the retraction by Dan Ariely and colleagues of a study claiming that people are more honest in their reports if they sign a declaration of truthfulness at the beginning of a document rather than at the end. The method had been adopted by the IRS, the US tax collection agency, and at least one big insurance company, likely involving substantial investment to make these changes.
While Ariely and his co-authors had previously reported that they were unable to replicate the findings, it was the work of other behavioural scientists, Uri Simonsohn, Leif Nelson and Joe Simmons (often described as “data detectives”), that uncovered evidence of questionable data collection.
Much has already been said about this, so we shall focus on the wider question of the replicability of the studies that much of the applied behavioural science industry draws on. This is an issue we clearly need to understand, not least given a challenging data simulation study suggesting that more than half of published results of scientific research are false. The Many Labs replication project found that more than half the results published in leading psychology journals couldn’t be replicated.
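The arithmetic behind the “more than half are false” claim is worth seeing. The share of positive findings that are true depends on statistical power, the significance threshold and the prior odds that a tested hypothesis is true. The figures below are illustrative assumptions (a low prior and the modest power typical of many psychology studies), not the values from any specific paper:

```python
# Sketch: how a majority of published positive findings can be false.
# All three parameters are illustrative assumptions, not published figures.
alpha = 0.05   # false-positive rate per significance test
power = 0.35   # probability of detecting a true effect (modest power)
prior = 0.10   # fraction of tested hypotheses that are actually true

true_positives = power * prior          # real effects found "significant"
false_positives = alpha * (1 - prior)   # null effects found "significant"

# Positive predictive value: share of significant results that are real
ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'significant' findings that are true: {ppv:.1%}")
```

Under these assumptions only about 44% of significant findings reflect real effects, so the majority are false positives. With higher power or stronger priors the picture improves, which is part of the argument for better-powered, pre-registered studies.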
Implications for practitioners
As applied practitioners, what are we to make of this? One response would be to trust no finding published in social psychology journals until there is evidence that it has been replicated. Another would be to conclude that the ‘science eco-system’ is working, since it was the work of behavioural science ‘data detectives’ that identified where and when there were issues.
Whichever route we have more sympathy with, it is clearly important to stay up to date with how different studies have (or have not) been replicated and how they can inform our practitioner work.
Reasons for replication failure
The broader question, though, is surely why we are seeing such seemingly high levels of replication failure. Part of this may well be due to the pressure on academics to publish papers that show positive effects. In a survey from 2012, a majority of psychologists reported testing their theory by including more than one outcome variable and then reporting results only for the outcome that delivered statistical significance. This of course inflates the likelihood of drawing a conclusion that is less likely to replicate.
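The size of that inflation is easy to estimate. If a null effect is tested against several outcome variables (assumed independent here for simplicity), the chance that at least one comes out “significant” by luck grows quickly:

```python
# Sketch: chance of at least one spurious "significant" result when a
# null effect is tested against several (assumed independent) outcomes.
alpha = 0.05  # nominal false-positive rate per outcome

for k in (1, 3, 5):  # number of outcome variables tested
    p_any = 1 - (1 - alpha) ** k  # P(at least one p < .05 by chance)
    print(f"{k} outcome(s): {p_any:.1%} chance of a spurious finding")
```

With five outcomes the effective false-positive rate is roughly 23% rather than the nominal 5%, which is why selectively reporting the one significant outcome produces findings that are unlikely to replicate.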
But a broader way to understand the replication issue relates to what Lisa Feldman Barrett recently called the ‘mechanistic mindset’. She suggests that psychologists often assume that human thoughts, feelings, behaviours and other psychological outcomes are a function of one or two strong factors or causes.
On this basis we would ignore factors such as the participants’ country, their gender, cultural influences, their experiences on the day of the experiment and so on. These sorts of things are considered noise, and their influence is ignored. It follows that if a repeated study does not produce the same findings, the original study would be considered flawed and its findings false.
However, as Feldman Barrett points out, perhaps we should question this assumption: psychological outcomes may not be the result of a small number of strong factors but may instead emerge from a number of weak, interacting factors. She suggests we call this the complexity mindset. As she says:
“The brain and the body are complex, dynamic systems. Any single variable in the system will have a weak effect. More importantly, we can’t manipulate one variable and assume that the others remain unaffected.”
As such, Feldman Barrett suggests that we may need to overhaul the lab experiment, “psychology’s most cherished experimental method”, to account for this complexity.
A related point is that if social patterns are in flux, with styles, ideologies, public opinion and customs subject to historical shifts, then the theories (and mechanisms) that we call on to examine them may need to adapt. As Kenneth Gergen, a long-time commentator on these issues, asks:
“…what if social life is not itself stable; what if social patterns are in a state of continuous and possibly chaotic transformation?”
The somewhat radical suggestion here is that we may be in need of a Kuhn-style shift in the way we understand human behaviour: what was a useful explanation in the 1960s may not be very helpful now. A recent paper could be seen as reflecting this, setting out how its authors consider behavioural science to have evolved in waves, addressing different themes with somewhat different methods and applications at each wave.
Rather than seeing the ‘replication crisis’ as a failure of social science, a sign of unreliability, we could see it as a meaningful finding in its own right. It is perhaps indicating that the context has changed: the subtle (and unmeasured) range of variables influencing the outcomes is no longer the same as when the experiment first took place.
As such, we need to look beyond simple cause and effect and look more widely to understand the range of influences on behaviour. This points to the need for an ‘up-stream’ focus, using more holistic frameworks (such as the MAPPS framework) to understand and unpack behaviour in a more nuanced way.
This would also mean thinking a little more creatively about testing, challenging ourselves to think beyond the RCT. Testing specific behaviours with tactical interventions does not, of course, suit every practitioner behaviour-change challenge. Other approaches can be explored, such as N-of-1 studies with individuals or small groups, offering what is perhaps a more ecologically valid and agile form of testing.
Finally, we can consider expanding the frame of reference used in applied research to identify the breadth of factors that may be shaping behaviour. Disciplines such as sociocultural psychology, or frameworks such as Social Representation Theory and the Social Amplification of Risk, can be drawn on to show how the wider environment influences behaviour (and indeed vice versa).